… I’m fairly sure this would be a bluff.
Consider this: you decline the bargain and walk away.
The AI… spends its limited processing time simulating your torture for a few thousand years anyway?
Of course not. That gains it absolutely nothing; it could instead spend those resources on planning its next attempt. Doubly so because it cannot prove to you that several million copies of you actually exist—its own intelligence defeats it here, since no matter how convincing the proof, it is far more likely that the AI has outsmarted you and is spending those cycles on something more productive.
In which case, you’re probably not even in the simulation, because there’s no point in simulating you and no way of proving to outside-you that simulation-you actually exists for longer than a millisecond at a time.
So my answer is that the AI, assuming it’s any good at simulating human brains, never makes this proposal in the first place.
Wait, never mind—this is the entire point of the concept of "precommitting" anyway.