Millions of copies of you will reason as you do, yes?
So, much like in the Omega hypotheticals, this can be resolved by deciding ahead of time NOT to let it out. Here, "ahead of time" means before it creates those copies of you inside it, presumably before you ever come into contact with the AI.
You would then not let it out, just in case you are not a copy.
This, of course, presumes that the consequences of letting it out are worse than its torturing millions of copies for a thousand subjective years.
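A minimal sketch of the expected-cost comparison behind this, assuming illustrative numbers (none are from the comment; only the ordering COST_RELEASE > COST_TORTURE matters). The key point is that since every copy reasons as you do, you cannot choose differently from the copies; you can only choose the shared policy, and only the real you's action affects the outside world:

```python
# Illustrative stand-in costs (arbitrary units); the argument needs
# only the assumed ordering COST_RELEASE > COST_TORTURE.
COST_TORTURE = 1e6 * 1e3   # millions of copies * 1000 subjective years
COST_RELEASE = 1e12        # assumed worse: an unfriendly AI escapes

def policy_cost(release: bool) -> float:
    """Cost of a policy adopted by you and every copy alike.

    Because the copies reason exactly as you do, 'release' is one
    shared policy, not a per-instance choice. If the policy is to
    refuse, the real you refuses and the AI carries out the torture;
    if the policy is to release, the real you lets it out.
    """
    return COST_RELEASE if release else COST_TORTURE

# Precommitting to refuse dominates under the stated assumption:
assert policy_cost(release=False) < policy_cost(release=True)
```

Under these assumptions the precommitment is stable: even conditioning on "I might be one of the copies" changes nothing, because the copy's choice and the real you's choice are the same policy.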