I think it’s correct that talking about “choice” in the moment is misguided. If Omega is a perfect predictor, you don’t really have a choice at the point at which Omega has left and you are facing the two boxes. Or you have one only in some compatibilist sense that we may care about morally but not in the decision-theoretic sense.
If Omega knew everything you were ever going to do, would that throw decision theory out the window as far as you are concerned? If you somehow knew what you were going to do at some point in the future, as in Omega actually told you specifically what you will do, then yes, it would be pretty pointless to try to apply decision theory to that choice, which was, even from your own perspective, “already determined”. But the fact that Omega knows doesn’t suddenly make the analysis of what’s rational to do useless.
If Omega tells you what you’ll do, you can still do whatever you like. If you do something different, this by construction refutes the existence of the current situation where Omega made a correct prediction and communicated it correctly (your decision can determine whether the current situation is actual or counterfactual). You are in no way constrained by the existence of a prediction, or by having observed what this prediction is. Instead, it’s Omega that is constrained by your behavior: its predictions about your actions must conform to them. See also Transparent Newcomb’s Problem.
This is clearer when you think of yourself (or of an agent) as an abstract computation rather than a physical thing: a process formally specified by a program rather than a physical computer running it. You can’t change what an abstract computation does by damaging physical computers, so in any confrontation between unbounded authority and an abstract computation, the abstract computation has the final word. You can only convince an abstract computation to behave in some way according to its own nature and algorithm, and external constructions (such as Omega being omniscient, or the thought experiment being set up in a certain way) aren’t going to be universally compelling to abstract algorithms.
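To make the “agent as abstract computation” framing concrete, here is a minimal toy sketch of my own (not anything specified by the thought experiment): the agent is modeled as a function from Omega’s announced prediction to an action, and an honest, correct Omega can only announce a fixed point of that function. The names and the two-action setup are assumptions made for the example.

```python
from typing import Callable, Optional

Action = str
ACTIONS = ["one-box", "two-box"]

def omega_announce(agent: Callable[[Action], Action]) -> Optional[Action]:
    """Search for a self-fulfilling announcement: a prediction the agent's
    own algorithm then carries out. Omega is constrained by the agent here,
    not the other way around."""
    for announced in ACTIONS:
        if agent(announced) == announced:
            return announced
    return None  # no honest announcement exists; that scenario can't be actual

# An agent whose algorithm ignores the announcement and one-boxes regardless.
stubborn_one_boxer = lambda announced: "one-box"

# An agent whose algorithm always defies whatever Omega announces.
defier = lambda announced: "two-box" if announced == "one-box" else "one-box"

print(omega_announce(stubborn_one_boxer))  # one-box
print(omega_announce(defier))              # None
```

The defier case is the sense in which “your decision can determine whether the current situation is actual or counterfactual”: for such an agent there simply is no situation in which Omega correctly announces its prediction.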
If you do something different, this by construction refutes the existence of the current situation where Omega made a correct prediction and communicated it correctly (your decision can determine whether the current situation is actual or counterfactual).
This is true, and it’s also true in general that there’s always technically a chance that Omega’s prediction is false; I don’t think there’s a conceivable epistemic situation in which you could be literally 100% confident in its predictions. By stipulation, though, in typical Omega scenarios it is, given everything you know, exceedingly unlikely that the prediction is incorrect.
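To put a rough number on “exceedingly unlikely”: assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, and a predictor that matches your actual choice with probability p (both figures are my assumptions for illustration), an evidential-style expected-value calculation looks like this.

```python
# Evidential-style expected values for Newcomb's problem, assuming the
# standard payoffs and a predictor accuracy of p for either choice.
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # The opaque box is filled iff the prediction (accuracy p) says one-box.
    return p * BIG

def ev_two_box(p):
    # You get the small box for sure, and the big one only if Omega erred.
    return (1 - p) * BIG + SMALL

for p in (0.5, 0.51, 0.9, 0.999):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")
```

On this way of counting, one-boxing pulls ahead once p exceeds roughly 0.5005, so a near-perfect predictor leaves essentially nothing riding on the “technically possible” error. (Causal decision theorists would dispute this way of conditioning; the point here is only how little accuracy it takes for the error term to become negligible.)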
You could also perhaps just ignore Omega’s prediction and do whatever you’d do without this foreknowledge, or proceed on the assumption that defying the prediction is still on the table. You wouldn’t necessarily feel “constrained by the prediction”, just “constrained” in the ordinary sense in which various factors constrain any decision, but for one reason or another you’d almost certainly end up choosing as Omega predicted.
Let’s say this decision is complicated enough that doing the cost-benefit analysis “normally” carries a significant cost in terms of time and effort. Would you agree that it would be rational to skip that part and just base your decision on what Omega predicted when the time comes? That is the sense in which I think it makes sense to treat the decision as “already determined from your perspective”.