Imagine a variant where both boxes are transparent and you can see what's inside.
That seems like a weird situation. Given two boxes, both of them with money, I’d take both. Given instead that one box is empty, I’d just take the one with money. So I’d default to doing whatever Omega didn’t predict, barring me being told in advance about the situation and precommitting to one-boxing.
Sorry for not making things clear from the start. In Gary’s version of the transparent boxes problem, Omega doesn’t predict what you will do, it predicts what you would do if both boxes contained money. Your actions in the other case are irrelevant to Omega. Would you like to change your decision now?
So, basically, I know that if I take both boxes, and both boxes have money, I'm either in a simulation or Omega was wrong? In that case, precommitting to one-boxing seems sensible.
Drescher then goes on to consider the case where you know that Omega has a fixed 99% chance of implementing this algorithm, and a 1% chance of instead implementing the opposite of this algorithm, and argues that you should still one-box in that case if you see the million.
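The ex-ante arithmetic behind Drescher's argument can be sketched as a quick expected-value comparison. This is a rough illustration, not Drescher's own derivation: it assumes the standard Newcomb payoffs ($1,000,000 and $1,000, not stated in the dialogue) and that Omega's prediction of your disposition is otherwise perfect, so only the 1% algorithm flip introduces uncertainty.

```python
# Hypothetical expected-value sketch of the 99%/1% transparent-boxes case.
# Assumed payoffs: $1,000,000 in the predicted box, $1,000 in the other.

MILLION = 1_000_000
THOUSAND = 1_000

def expected_value(one_boxer: bool) -> float:
    """Ex-ante expected payoff for an agent disposed to one-box
    (or two-box) upon seeing the million.

    With 99% probability Omega runs its usual algorithm; with 1%
    probability it runs the opposite one. Weights are kept as
    integers so the division is exact.
    """
    if one_boxer:
        # Normal Omega predicts one-boxing and fills the box: you see
        # the million and take just it. Flipped Omega leaves the box
        # empty: you see nothing there and take only the $1,000.
        return (99 * MILLION + 1 * THOUSAND) / 100
    # Normal Omega predicts two-boxing and leaves the box empty: you
    # take the $1,000. Flipped Omega fills it: you take both.
    return (99 * THOUSAND + 1 * (MILLION + THOUSAND)) / 100

print(expected_value(True))   # 990010.0
print(expected_value(False))  # 11000.0
```

Under these assumptions the one-boxing disposition comes out far ahead ($990,010 vs. $11,000 in expectation), which is the ex-ante half of the argument; Drescher's further claim is that this verdict should survive even after you actually see the million.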