So suppose I specify to the Outcome Pump that I want the outcome in which a person who is a future version of me (by DNA and by physical continuity of the body) writes "ABRACADABRA, this outcome is good enough and I value it at $X" on a piece of paper and puts it on the Outcome Pump, where $X is how much I value the outcome. (And if this doesn't happen within one year, I don't want this outcome either.)
Are there any loopholes?
If I am simulated, the decision I take is determined by the AI, not by me; I have no free will. I feel that I am making a decision, but in reality the AI has simulated me for its own purposes in such a way that I decide so and so. I assign probability 0.9999999 to this case, but nothing depends on my decision here, so I may as well "try to decide" not to let the AI out.
If I am not simulated, I can safely refuse to let the AI out: probability only 0.0000001, but a positive outcome.
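The two-case argument above can be written out as a tiny expected-value calculation. This is a minimal sketch with hypothetical utility values of my own choosing (the original comment gives probabilities but no utilities): refusing changes nothing in the simulated case, and keeps the AI boxed in the real case, so its expected value is small but strictly positive.

```python
# A minimal decision sketch, with hypothetical utilities (not from the original).
P_SIMULATED = 0.9999999      # probability that I am a simulation run by the AI
P_REAL = 1 - P_SIMULATED     # probability that I am the real, unsimulated person

# Utilities of the action "refuse to let the AI out":
# if simulated, my "decision" is already determined, so it changes nothing;
# if real, refusing keeps the AI boxed (assigned +1 here for illustration).
UTILITY_IF_SIMULATED = 0.0
UTILITY_IF_REAL = 1.0

def expected_utility_of_refusing():
    """Probability-weighted value of refusing, over the two cases."""
    return (P_SIMULATED * UTILITY_IF_SIMULATED
            + P_REAL * UTILITY_IF_REAL)

# Strictly positive: the simulated case contributes nothing either way,
# so refusing weakly dominates letting the AI out.
print(expected_utility_of_refusing() > 0)
```

The point of the sketch is only that the simulated branch drops out of the comparison, so however small P_REAL is, refusal is the better bet under these assumptions.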