Sorry for not making things clear from the start. In Gary’s version of the transparent boxes problem, Omega doesn’t predict what you will do; it predicts what you would do if both boxes contained money. Your actions in the other case are irrelevant to Omega. Would you like to change your decision now?
So, basically, I know that if I take both boxes and both boxes have money, I’m either in a simulation or Omega was wrong? In that case, precommitting to one-boxing seems sensible.
Drescher then goes on to consider the case where you know that Omega has a fixed 99% chance of implementing this algorithm, and a 1% chance of instead implementing the opposite of this algorithm, and argues that you should still one-box in that case if you see the million.
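If it helps to see the numbers, here is a rough expected-value sketch of the two policies. The payoffs ($1,000,000 and $1,000) and the reading of the "opposite" algorithm as simply inverting whether Omega places the million are my assumptions, not something spelled out in Drescher's text:

```python
# Sketch: expected value of each policy, given assumed payoffs and an
# assumed reading of what the "opposite" algorithm does.

P_CORRECT = 0.99    # Omega runs its usual prediction algorithm
P_OPPOSITE = 0.01   # Omega runs the inverted algorithm

MILLION = 1_000_000
THOUSAND = 1_000

# Policy A: one-box whenever you see the million.
# Correct Omega predicts this and places the million; you take only it.
# Inverted Omega leaves that box empty; seeing no million, you take both
# boxes and get only the thousand.
ev_one_box = P_CORRECT * MILLION + P_OPPOSITE * THOUSAND

# Policy B: two-box whenever you see the million.
# Correct Omega predicts this and leaves the box empty; you get the thousand.
# Inverted Omega places the million anyway; you take both boxes.
ev_two_box = P_CORRECT * THOUSAND + P_OPPOSITE * (MILLION + THOUSAND)

print(f"one-boxing policy: ${ev_one_box:,.0f}")   # $990,010
print(f"two-boxing policy: ${ev_two_box:,.0f}")   # $11,000
```

Under these assumptions the policy of one-boxing upon seeing the million comes out far ahead, which is at least consistent with Drescher's conclusion.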