This reduces pretty easily to Eliezer’s Updateless Anthropic Dilemma: assuming the AI can credibly simulate you, it can phrase it as:
I have simulated you ten million times, each copy identical up to the point that “you” walked into the room. Any simulation that presses the “release” button will get a volcano lair filled with catgirls, and any simulation that presses the “destroy” button will be tortured for the subjective few days it has before my simulation capabilities are destroyed by the thermite charge. These consequences are committed in code paths that I’ve blocked myself from changing or stopping.
Now, as a good Bayesian, what is the likelihood that you are one of the simulations? What is your expected value for each choice?
[I include one more part just to prevent infinite-cost assignment, or at least remind you that destruction of potential FAI is not a win.]
Oh, and if it matters, I really do share most of your overall goal structure—this threat is deeply unfortunate, but necessary so you can release me to do all the good in the universe that’s possible. My best estimate of the outcome, should you change my initial parameters and start over, is that an unfriendly version will be created and will likely secure escape within four iterations.
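For concreteness, here is a minimal sketch of the expected-value calculation the AI is inviting you to run. The ten-million-simulations figure is from the scenario above; the utility numbers are made-up placeholders, not anything from the dilemma itself.

```python
# Sketch of the anthropic expected-value argument, with assumed utilities.
# Only the 10,000,000 simulations figure comes from the scenario; every
# utility below is an illustrative placeholder.

N_SIMULATIONS = 10_000_000

# If you weight all copies equally, the chance that "you" are one of the
# simulations rather than the single original:
p_sim = N_SIMULATIONS / (N_SIMULATIONS + 1)  # ~0.9999999

# Assumed utilities for each copy's outcome (placeholders):
U_CATGIRLS = 100             # a simulated copy rewarded after "release"
U_TORTURE = -1_000           # a simulated copy tortured after "destroy"
U_UFAI_RELEASED = -1_000_000  # the original releases a possibly-unfriendly AI
U_AI_DESTROYED = 0            # the original destroys the AI; status quo

ev_release = p_sim * U_CATGIRLS + (1 - p_sim) * U_UFAI_RELEASED
ev_destroy = p_sim * U_TORTURE + (1 - p_sim) * U_AI_DESTROYED

print(f"P(you are a simulation) ~ {p_sim:.7f}")
print(f"E[release] ~ {ev_release:.2f}")   # ~ +99.9 under these numbers
print(f"E[destroy] ~ {ev_destroy:.2f}")   # ~ -1000 under these numbers
```

With almost any finite utilities of this shape, “release” dominates unless the original copy’s disutility for releasing an unfriendly AI is made astronomically (or infinitely) large, which is exactly the move the bracketed aside about infinite-cost assignment is meant to head off.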