Yes, the objective in designing this puzzle was to construct an example where, according to my understanding of the correct way to make decisions, the correct decision looks like losing. In other cases you may say that you close your eyes, pretend that your decision determines the past or other agents’ actions, and just make the decision that gives the best outcome. In this case, you choose the worst outcome. The argument is that on reflection it still looks like the best outcome, and you are given an opportunity to think about the correct perspective from which it is the best outcome. It binds the state of reality to your subjective perspective, whereas in many other thought experiments you may dispense with this connection and focus solely on reality, without paying any special attention to the decision-maker.
In Newcomb, before knowing the box contents, you should one-box. If you know the contents, you should two-box (or am I wrong?).
In Prisoner, before knowing the opponent’s choice, you should cooperate. After knowing the opponent’s choice, you should defect (or am I wrong?).
If I’m right in the above two cases, doesn’t Omega look more like the “after knowing” situations above? If so, then I must be wrong about the above two cases...
I want to be someone who in situation Y does X, but when Y&Z happens, I don’t necessarily want to do X. Here, Z is the extra information that I lost (in Omega), the opponent has chosen (in Prisoner) or that both boxes have money in them (in Newcomb). What am I missing?
No: in the prisoner’s dilemma, you should always defect (presuming the payoff matrix represents utility), unless you can somehow collectively pre-commit to cooperating, or the game is iterated. The distinction you’re thinking of only applies when reverse causation comes into play.
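The dominance argument in the one-shot game can be made concrete. Here is a minimal sketch, assuming a standard (hypothetical) payoff matrix where higher numbers mean more utility: whatever the opponent's fixed move is, defecting yields more.

```python
# Hypothetical payoff matrix for a one-shot prisoner's dilemma.
# Keys are (my move, opponent's move); values are my utility (higher is better).
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(opponent_move):
    """Return my utility-maximizing move, holding the opponent's move fixed."""
    return max(("cooperate", "defect"),
               key=lambda me: PAYOFF[(me, opponent_move)])

# Defection strictly dominates: it is the best response either way.
assert best_response("cooperate") == "defect"  # 5 > 3
assert best_response("defect") == "defect"     # 1 > 0
```

This is exactly the "after knowing the opponent's choice" reasoning: conditioning on any fixed opponent move, defection wins, which is why the one-boxing/cooperating intuition has to come from somewhere other than this matrix (pre-commitment, iteration, or reverse causation).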