I don’t grasp why this problem seems so hard and convoluted.
Of course you have to one-box; if you two-box, you’ll lose for sure.
From my perspective two-boxing is irrational...
If Omega can flawlessly predict the future, that implies a deterministic world at the atomic scale. To be a perfect predictor, Omega would also need a perfect model of my brain at every stage of making my “decision”; thus Omega can see the future and perfectly predict whether I’m gonna two-box or not.
If my brain is wired up in such a way as to choose two-boxing, then Omega will have predicted that. It doesn’t matter that Omega has already left and that box 1 already contains either 1M$ or 0$. No matter how long I ruminate back and forth, if I two-box I’ve lost, because Omega is a perfect predictor and would thus have predicted it.
If Omega indeed has all the properties that are claimed, then there are only two possible outcomes: if you take one box, you’ll get 1M$; if you take two, you’ll get 1000$. It is true that box 1 either contains 1M$ or nothing by the time Omega has left, but what the box contains is still 100% correlated with my upcoming final decision, and nothing is going to change that. End of story. Ergo, CDT is wrong: it’s a model that’s at odds with reality.
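To make that correlation concrete, here is a minimal sketch in Python (the function names and structure are my own illustration, not part of the problem statement) that treats Omega’s prediction as nothing more than a run of the agent’s own deterministic decision procedure:

```python
# Minimal sketch of the Newcomb setup, assuming Omega predicts by
# running the agent's own deterministic decision procedure.
# Names are illustrative; payoffs follow the problem statement.

def play(strategy):
    """Let Omega fill the boxes from its prediction, then pay the agent out."""
    # Omega's "prediction": a dry run of the very same deterministic strategy.
    predicted_two_box = strategy()
    opaque_box = 0 if predicted_two_box else 1_000_000
    transparent_box = 1_000

    # The agent's actual decision: same wiring, therefore same output.
    if strategy():                 # two-box: take both boxes
        return opaque_box + transparent_box
    return opaque_box              # one-box: take only the opaque box

one_boxer = lambda: False  # never takes the transparent box
two_boxer = lambda: True   # always takes both

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

However the strategy is implemented internally, as long as Omega runs the same procedure the agent runs, the opaque box’s contents are pinned to the decision procedure itself; deliberating after Omega has left changes nothing.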
PS: Interestingly, if opening the lid on these boxes is the trigger moment that counts as a “decision”, you could just put the opaque box into an X-ray, and this act alone would instantly turn Omega into a liar, regardless of whether the box contained 1M$ or nothing. The X-ray couldn’t show an empty box without making Omega a liar, because, contrary to what Omega said, I could then no longer actually decide to open only box 1 and get the 1M$. Conversely, if the box does contain 1M$, then I could just two-box, making Omega a liar with respect to its prediction.
So Omega would HAVE TO specifically forbid peeping into the opaque box. If it didn’t, Omega would risk being made a liar one way or the other the moment I looked into the 1st box without opening it and found either 1M$ or nothing.
To perfectly model your thought processes, it would be enough that your brain activity be deterministic; it doesn’t follow that the universe is deterministic. The fact that my computer can model a Nintendo well enough for me to play video games does not imply that a Nintendo is built out of deterministic elementary particles, and a Nintendo emulator that simulated every elementary particle interaction in the Nintendo it was emulating would be ridiculously inefficient.