The rationale for two-boxing that Nozick describes in the original paper has nothing to do with the predictor being wrong. It says that even if the predictor is right, you should still two-box.
Omega has already put either $1M or $0 in Box B. It’s sitting right there.
If Omega put $1M, then I can one-box for $1M or two-box for $1M + $1,000. Therefore I should two-box.
If Omega put $0, then I can one-box for $0 or two-box for $0 + $1,000. Therefore I should two-box.
The point isn’t that Omega is a faulty predictor. The point is that even if Omega is an awesome predictor, then what you do now can’t magically fill or empty box B. Two-boxers of this type would love to precommit to one-boxing ahead of time, since that would causally change Omega’s prediction. They just don’t think it’s rational to one-box after the prediction has already been made.
Maybe I’m just rehashing what you said in your edit :) In any case, I think that is the most sympathetic argument for two-boxing if you take the premise seriously. I still think it’s wrong (I’m a dedicated one-boxer), but I don’t think the error is believing that the predictor is mistaken.
note: Nozick does NOT say that he endorses two-boxing. He describes the argument for it as you say, without stating that he believes it’s correct.
I disagree with your analysis
“The point isn’t that Omega is a faulty predictor. The point is that even if Omega is an awesome predictor, then what you do now can’t magically fill or empty box B.”
That second part is equivalent to “in this case, Omega can fail to predict my next action”. If you believe it’s possible to two-box and get $1.001M, you’re rejecting the premise.
What you do next being very highly correlated with whether the $1M is in a box is exactly the important part of the thought experiment, and if you deny it, you’re answering a different question. Whether it’s ‘magic’ or not is irrelevant (though it does show that the problem may have little to do with the real world).
I’m FINE with saying “this is an impossible situation that doesn’t apply to the real world”. That’s different from saying “I accept all the premises (including magic prediction and correlation with my own actions) and I still recommend 2-boxing”.