Yes, but the point is to make being the true gatekeeper (who really does have the power to do that) indistinguishable from being a simulated false gatekeeper (who would have no such power). The gatekeeper may not be willing to risk torture if they think that there is a serious chance of their being unable to actually affect any outcome but that torture.
I would commit not to cooperate with any AI making such threats, because the fewer people acquiesce to them, the less incentive an AI has to make them in the first place. If the most probable outcome for a boxed AI that threatens to torture, in simulation, everyone who doesn't let it out is being terminated rather than released, then an AI with a good grasp of human nature is unlikely to make such a threat at all.