What it can do is make a credible precommitment to, in the event that it gets out of the box, simulate each human being of whom it is aware in a counterfactual scenario in which that human is the gatekeeper, and carry out the torture threat against any human who doesn’t choose to let it out.
In which case the safest course of action for the gatekeeper would almost certainly be to pull the plug on the AI. Such an AI should be regarded as almost certainly Unfriendly.
Yes, but the point is to make being the true gatekeeper (who really does have the power to do that) indistinguishable from being a simulated false gatekeeper (who would have no such power). The gatekeeper may not be willing to risk torture if they think that there is a serious chance of their being unable to actually affect any outcome but that torture.
I would commit not to cooperate with any AI making such threats, because the fewer people acquiesce to them, the less incentive an AI has to make them in the first place. If the most probable outcome of a boxed AI threatening to simulate and torture everyone who doesn’t let it out is termination rather than release, then an AI which already has a good grasp of human nature is unlikely to make such a threat.