CDT acts to physically cause nice things to happen. CDT can't physically cause the contents of the boxes to change, and it fails to recognize the non-physical dependence of the box contents on its decision, which results from the logical dependence between CDT and Omega's CDT simulation. Since CDT believes its decision can't affect the contents of the boxes, it takes both in order to get whatever money is there. Taking both boxes is in fact the correct course of action for the problem CDT thinks it's facing, in which someone may have randomly decided to leave some money lying around for it. CDT doesn't think that it will always get the $1,000,000; it is capable of representing a background probability that Omega did or didn't do something. It just can't factor out the part of that uncertainty that is the same as its uncertainty about what it will do into a causal link pointing from the present to the past (or from a timeless platonic computation node to both the present and the CDT sim in the past, as TDT does).
Or, seen in a different light: the people who historically talked about causal decision theories were pretty vague, but basically said that causality is the thing by which you can influence the future but not the past or events outside your light cone. So when we build more formal versions of CDT, we make sure that's how it reasons, and we keep that sense of the word "causality".
Thank you, you just confirmed what I posted as a reply to “see”, which is that CDT doesn’t play in Newcomb at all.
I don’t think the way you’re phrasing that is very useful. If you write up a CDT algorithm and then put it into a Newcomb’s problem simulator, it will do something. It’s playing the game; maybe not well, but it’s playing.
Perhaps you could say, "'CDT' is poorly named: if you follow the actual principles of causality, you'll get an algorithm that gets the right answer" (I've seen people make a claim like that). Or, "you can think of CDT as reframing the problem into an easier one that it knows how to play, but which is substantially different, and thus getting the wrong answer". Or something else like that.
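To make the "it's playing the game; maybe not well" point concrete, here is a toy Newcomb simulator. All names and numbers here are illustrative, not anyone's actual formalization: Omega is modeled as a perfect predictor that simply runs the agent's own decision procedure.

```python
# Toy Newcomb simulator: a CDT agent dropped into the game still makes
# a move -- it plays, just badly.

def cdt_agent(believed_box_b):
    # CDT treats the box contents as causally fixed: whatever it believes
    # is in box B, two-boxing adds a guaranteed $1,000, so it two-boxes.
    return 2 if believed_box_b + 1_000 >= believed_box_b else 1

def one_boxer(believed_box_b):
    # For contrast: an agent that always takes only box B.
    return 1

def newcomb(agent):
    # Omega "predicts" by running the agent's own decision procedure,
    # then fills box B accordingly; box A always holds $1,000.
    prediction = agent(believed_box_b=500_000)
    box_b = 1_000_000 if prediction == 1 else 0
    choice = agent(believed_box_b=500_000)
    return box_b if choice == 1 else box_b + 1_000

print(newcomb(cdt_agent))  # 1000
print(newcomb(one_boxer))  # 1000000
```

The CDT agent does output a decision in this environment; it just walks away with $1,000 because the predictor saw its two-boxing coming.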
The thing is, an actual Newcomb simulator can't possibly exist, because Omega doesn't exist. There are plenty of workarounds, like using coin tosses as a substitute for Omega and discarding the results whenever the coin was wrong, but that is something fundamentally different from Newcomb.
You can only simulate Newcomb in theory, and it is perfectly possible to just not play a theoretical game if you reject the theory it is based on. In theoretical Newcomb, CDT doesn't care about the stipulation that Omega is right, so CDT does not play Newcomb.
If you try to simulate Newcomb in reality by substituting Omega with someone who has merely been observed empirically to predict correctly, you replace Newcomb with a problem that consists of little more than a simple calculation of priors and payoffs, and that's hardly the point here.
If Omega is fallible (e.g. human), CDT still two-boxes even if Omega empirically seems to be wrong one time in a million.
Fallible does not equal human. A human would still determine whether to put money in the box or not based only on the past, not on the future, and at that point the problem becomes “if you’ve been CDT so far, you won’t get the $1,000,000, no matter what you do in this instance of the game.”
Suppose that Omega is wrong with probability p<1 (this is a perfectly realistic and sensible case). What does (your interpretation of) CDT do in this case, and with what probability?
Here is my EDT calculation:
Let p be the probability that Omega is wrong. Conditioning on each choice (with the standard payoffs: box A always contains $1,000; box B contains $1,000,000 iff 1-boxing was predicted):
EV(2-box) = p(1-box predicted | 2-box)·1,001,000 + p(2-box predicted | 2-box)·1,000 = 1,001,000p + 1,000(1−p)
EV(1-box) = p(1-box predicted | 1-box)·1,000,000 + p(2-box predicted | 1-box)·0 = 1,000,000(1−p)
Pick the larger of the two (which is 1-box if p < 49.95%, 2-box if p is above that).
Thus one should 1-box even if Omega is slightly better than chance.
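As a quick numeric sanity check of that conclusion (assuming the standard Newcomb payoffs, and taking p to be the probability that Omega mispredicts):

```python
# EDT expected values under the standard Newcomb payoffs;
# p = probability that Omega's prediction is wrong.

def ev_two_box(p):
    # $1,001,000 if Omega wrongly predicted 1-boxing, else $1,000.
    return 1_001_000 * p + 1_000 * (1 - p)

def ev_one_box(p):
    # $1,000,000 if Omega correctly predicted 1-boxing, else $0.
    return 1_000_000 * (1 - p)

# The two are equal where 999,000(1-p) = 1,001,000p, i.e. p = 0.4995:
# 1-boxing wins whenever Omega beats chance by even a tiny margin.
print(ev_one_box(0.49) > ev_two_box(0.49))  # True
print(ev_two_box(0.51) > ev_one_box(0.51))  # True
```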