Okay, so your dispositions are basically the counterfactual "If A occurred, then I would do B," and your choice, C, is what you actually do when A occurs.
In the perfect predictor version of Newcomb's, Omega perfectly predicts the choice you make, not your disposition. It may generate its own counterfactual, "If A occurs, then this person will do B," but that's not to say it cares about your disposition just because the two counterfactuals look similar. Because Omega's prediction of C is perfect, if a stray bolt of lightning hits you and switches your decision, Omega will have taken that lightning into account. You will always be worse off if it changes your choice, C, to two-boxing, because Omega perfectly predicts C and so will punish you.
Conversely, the rational disposition in Newcomb's isn't to one-box. Instead, your disposition has no bearing on Newcomb's except insofar as it is related to C (if you always act in line with your dispositions, for example, then your dispositions matter). It isn't a disposition to one-box that leads to Omega loading the boxes a certain way; it's the choice to one-box, so your disposition itself neither helps nor hinders you.
As such, your choice of whether to one-box or two-box is what is relevant, and hence the choice of one-boxing is what leads to the better outcome. Your disposition to one-box plays no role whatsoever. So, on the utility-maximising definition of rationality, the rational choice is to one-box, because it's this choice itself that leads to the boxes being loaded a certain way (see the note on causality at the bottom of the post).
So, to restate it in the terms used in the above comments: prior possession of the disposition to one-box is irrelevant to Newcomb's because Omega is interested in your choices, not your dispositions to choose, and is perfect at predicting your choices, not your dispositions. Flukily choosing two boxes would be bad because Omega would have perfectly predicted the fluky choice, and so you would end up losing.
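The payoff logic here can be made concrete with a small sketch. This is just an illustration under my assumptions (the standard $1,000,000 / $1,000 payoffs, and a "lightning flip" standing in for any fluke that makes C differ from your disposition); the key line is that Omega's prediction is set equal to the actual choice C, never to the disposition:

```python
ONE_BOX, TWO_BOX = "one-box", "two-box"

def newcomb_payout(actual_choice, prediction):
    # Omega loads the opaque box with $1,000,000 only if it predicts one-boxing.
    opaque = 1_000_000 if prediction == ONE_BOX else 0
    transparent = 1_000
    if actual_choice == ONE_BOX:
        return opaque
    return opaque + transparent  # two-boxing takes both boxes

def play(disposition, lightning_flips=False):
    # The actual choice C may differ from the disposition (a fluke strike).
    choice = disposition
    if lightning_flips:
        choice = TWO_BOX if disposition == ONE_BOX else ONE_BOX
    # A perfect predictor predicts C itself, fluke included -- not the disposition.
    prediction = choice
    return newcomb_payout(choice, prediction)

print(play(ONE_BOX))                        # choosing to one-box pays off
print(play(TWO_BOX))                        # choosing to two-box doesn't
print(play(ONE_BOX, lightning_flips=True))  # a one-boxing disposition flipped by
                                            # lightning is still punished, because
                                            # Omega predicted the final choice C
```

Note that the disposition argument does no work except through the choice it produces, which is exactly the point above.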
It seems like dispositions distract from the issue here because, as humans, we think "Omega must use dispositions to predict the choice." But that need not be true. In fact, if dispositions and choices can differ (by a fluke, for example), then it cannot be true: Omega cannot be relying on dispositions to predict the choice. It simply predicts the choice by whatever means work.
If you use "disposition" simply to mean the decision you would make before you actually make it, then you're denying one part of the problem itself in order to solve it: you're denying that Omega is a perfect predictor of choices and suggesting it can only predict how your choice would stand at a certain time, not the choice you actually make.
This can be extended to the imperfect predictor version of Newcomb’s easily enough.
I'll grant you this leaves open the need for some causal explanation, but we can't simply retreat from difficult questions by suggesting that they're not really questions. That is, we can't avoid accounting for causality in Newcomb's simply by suggesting that Omega predicts by reading your dispositions rather than predicting C by whatever means get it right (e.g., taking your dispositions and then factoring in freak lightning strikes).
So far, everything I've said has been only weakly defended, so I'm interested to see whether this is any stronger or whether I'll be spending some more time thinking tomorrow.