And we have to assume that any sufficiently advanced AI will be able to hack out of its simulation into the code which supports the simulation.
A sufficiently advanced AI would be able to find a security hole of this type if one exists, but even AIs can't do what's actually impossible. So you make a good point, but it's a fixable one: before you start your simulations, have the IM exhaustively analyse the simulator program to make sure it hasn't got any native-code-execution vulnerabilities. Then an AI can't find one no matter how clever it is, because there won't be one to find.
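The "verify before you run" idea can be sketched in miniature. This is a toy illustration, not a real simulator: the guest "language" here is just arithmetic expressions, and `safe_eval` is a hypothetical name. The point is that if the host exhaustively checks what the guest can express against an allowlist before executing anything, there is no native-code-execution hole left for a clever guest to discover.

```python
import ast

# Toy "simulator" language: arithmetic expressions only. Instead of
# trusting raw eval(), we walk the whole parse tree and reject any
# node not on an explicit allowlist, so there is no path from guest
# input to arbitrary native-code execution.
ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
           ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd)

def safe_eval(src: str) -> float:
    tree = ast.parse(src, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"rejected: {type(node).__name__}")
    return eval(compile(tree, "<sim>", "eval"))

print(safe_eval("2 + 3 * 4"))        # ordinary guest computation
try:
    safe_eval("__import__('os')")    # escape attempt: a Call node,
except ValueError as err:            # which the allowlist rejects
    print(err)
```

Of course, a real simulator is vastly larger than an expression evaluator, which is why the original point stands: the analysis has to be genuinely exhaustive, not just a spot check.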