There’s a range of interpretations for any counterfactual. One must open up the “suppose” and ask, “What am I actually being asked to suppose? How might the counterfactual circumstance have come to be?” We can accordingly do surgery on the causal graph in different places, depending on how far back from the event of interest we intervene.
To make X counterfactually have some value x, we might, in terms of causal graph surgery, consider do(X=x). Or we might intervene on some predecessors of X, and use do(Y=y) and do(Z=z), choosing values which cause X to take on the value x, but which may have additional effects. Or we could intervene further back than that, and create even more side-effects. We might discover that we are considering a counterfactual that makes no sense — for example, phosphorus matches that do not burn, yet human life continues, when a chemistry different enough to stop phosphorus burning would also be too different for human life to exist.
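As a toy illustration (my own, not part of the original discussion), here is a sketch of forcing the same value of X by surgery at different depths, with different side-effects. The variables Y, Z, W and their mechanisms are invented for the example.

```python
# Hypothetical structural model: Y -> X, Z -> X, and Z -> W,
# so intervening on Z to force X has the side effect of changing W.

def model(y=None, z=None, x=None):
    # Exogenous defaults for any variable we do not intervene on.
    y = 1 if y is None else y
    z = 0 if z is None else z
    # X's own mechanism is replaced ("surgery") only if we do(X=x).
    x = (y + z) if x is None else x
    w = 2 * z            # depends on Z, not on X
    return {"Y": y, "Z": z, "X": x, "W": w}

# Intervene directly on X: do(X=3). W keeps its default value.
print(model(x=3))        # {'Y': 1, 'Z': 0, 'X': 3, 'W': 0}

# Intervene further back, do(Y=1) and do(Z=2), chosen so that X = 3:
# same value of X, but now W changes too -- an additional effect.
print(model(y=1, z=2))   # {'Y': 1, 'Z': 2, 'X': 3, 'W': 4}
```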
In Newcomb’s Problem, the two-boxing argument intervenes on both the decision of the person faced with the problem and Omega’s decision to fill the opaque box or not, as if hidden mechanisms could pre-empt both decisions in each of the four possible combinations. (This obviously contradicts one of the hypotheses of the problem, which is that Omega is always right.) The one-boxing argument intervenes on the choice of policy that produces the subject’s decision, and does not intervene on Omega.
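A small sketch (again my own framing, not taken from any decision-theory library) of the two surgeries, using the standard payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one:

```python
BIG, SMALL = 1_000_000, 1_000

def payoff(take_one_box, omega_predicted_one_box):
    # Omega fills the opaque box iff he predicted one-boxing.
    opaque = BIG if omega_predicted_one_box else 0
    return opaque if take_one_box else opaque + SMALL

# Surgery on the decision alone: Omega's prediction is held fixed at
# whatever it already is, and two-boxing dominates in both cases.
for prediction in (True, False):
    print("prediction fixed:", prediction,
          "one-box:", payoff(True, prediction),
          "two-box:", payoff(False, prediction))

# Surgery on the policy: Omega predicts the policy (he is always right),
# so the prediction varies with the intervention, and one-boxing wins.
for policy_one_box in (True, False):
    print("policy one-boxes:", policy_one_box,
          "payoff:", payoff(policy_one_box, policy_one_box))
```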
I could call these CDT and FDT respectively, except for the tendency of people to modify their preferred decision theory xDT in response to problems that it gets wrong, and claim to be still using xDT, “properly understood”. I just described the one-boxer’s argument in causal terms. That does not mean that CDT, “properly understood”, is FDT.
ETA: While googling something about counterfactuals, I came across Molinism, according to which God knows all counterfactuals, and in particular knows what the creatures that he created would do of their own free will in any hypothetical situation. Omega is probably an angel sent by God to test people’s rationality. (Epistemic status: jeu d’esprit.)