At the point where Omega asks me this question, I already know that the coin came up heads, so I already know I’m not going to get the million. It seems like I want to decide “as if” I don’t know whether the coin came up heads or tails, and then implement that decision even if I know the coin came up heads. But I don’t have a good formal way of talking about how my decision in one state of knowledge has to be determined by the decision I would make if I occupied a different epistemic state, conditioning using the probability previously possessed by events I have since learned the outcome of...
Well, it seems to me that you always want to do this. According to timeless-reflectively-consistent-yada-yada decision theory, the best decision to make is to follow the strategy that you would have chosen at the very beginning.
The precise constraint this problem places on you is that you make your decision in a context where there is a 50% chance that your decision results in you getting $1,000,000 instead of nothing.
Treat your observations as putting you in the context in which you make your decision.
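The expected-value comparison behind this can be made concrete. A minimal sketch, evaluated from the prior (before the coin is observed): the 50% coin and the $1,000,000 payoff are from the discussion above, while the $100 cost of complying on the heads branch is a hypothetical stand-in not stated in the original.

```python
# Expected value of the two available strategies in a counterfactual-mugging-style
# setup, computed from the prior epistemic state (before seeing the coin).

P_HEADS = 0.5
P_TAILS = 0.5
PRIZE = 1_000_000  # paid on tails, but only if you would comply on heads
COST = 100         # hypothetical cost of complying when the coin comes up heads

def expected_value(comply_on_heads: bool) -> float:
    """Expected value of a strategy, weighted by the prior coin probabilities."""
    tails_payoff = PRIZE if comply_on_heads else 0
    heads_payoff = -COST if comply_on_heads else 0
    return P_TAILS * tails_payoff + P_HEADS * heads_payoff

# Committing to comply dominates from the prior perspective,
# even though it loses on the heads branch viewed in isolation.
print(expected_value(True))   # 499950.0
print(expected_value(False))  # 0.0
```

The point of the sketch is that the strategy chosen "at the very beginning" maximizes this prior-weighted quantity, which is why the reflectively consistent move is to implement it even after observing heads.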