Can anyone explain to me why CDT two-boxes?

I have read lots of LW posts on this topic, and everyone seems to take this for granted without giving a proper explanation. So if anyone could explain this to me, I would appreciate that.

This is a simple question that is in need of a simple answer. Please don’t link to pages and pages of theorycrafting. Thank you.

Edit: Since posting this, I have come to the conclusion that CDT doesn’t actually play Newcomb. Here’s a disagreement with that statement:

If you write up a CDT algorithm and then put it into a Newcomb’s problem simulator, it will do something. It’s playing the game; maybe not well, but it’s playing.
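To make the quoted claim concrete, here is a minimal sketch of what such a simulator could look like (the function names and exact setup are my own illustration, not anyone’s canonical implementation). What it shows: a CDT-style agent evaluates each action with the box contents held fixed, so it takes both boxes regardless of the prediction.

```python
# Toy Newcomb simulator: a predictor fills the opaque box, then a CDT-style
# agent chooses. All names and the exact setup are illustrative assumptions.

def cdt_choose() -> str:
    """For every possible (already fixed) content of the opaque box, compare
    the two actions with that content held constant. Two-boxing is better by
    $1,000 in every state, so CDT picks it."""
    possible_contents = (0, 1_000_000)
    if all(c + 1_000 > c for c in possible_contents):
        return "two-box"
    return "one-box"

def play_newcomb(predicted_choice: str) -> int:
    """The predictor has already filled (or not filled) the opaque box."""
    opaque_box = 1_000_000 if predicted_choice == "one-box" else 0
    choice = cdt_choose()
    return opaque_box + 1_000 if choice == "two-box" else opaque_box

# The CDT agent does play the game; it just two-boxes either way:
print(play_newcomb("two-box"))  # 1000     (prediction was correct)
print(play_newcomb("one-box"))  # 1001000  (prediction was wrong)
```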

And here’s my response:

The thing is, an actual Newcomb simulator can’t possibly exist because Omega doesn’t exist. There are tons of workarounds, like using coin tosses as a substitute for Omega and discarding the runs where the coin’s prediction turned out wrong, but that is something fundamentally different from Newcomb.

You can only simulate Newcomb in theory, and it is perfectly possible to simply not play a theoretical game if you reject the theory it is based on. In theoretical Newcomb, CDT takes no account of the stipulation that Omega is always right, so CDT does not play Newcomb.

If you try to simulate Newcomb in reality by replacing Omega with a predictor who has merely been right empirically so far, you replace Newcomb with a problem that consists of little more than a simple calculation of priors and payoffs, and that is hardly the point here.
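To show what I mean by that calculation, here is a sketch of the reduced problem. The 99% accuracy figure is an arbitrary assumption of mine, purely for illustration; once the predictor’s track record is just an ordinary prior, the whole “problem” is two expected-value computations.

```python
# With a merely-empirical predictor, Newcomb collapses into priors and payoffs.
# The 0.99 accuracy figure is an arbitrary illustrative assumption.

accuracy = 0.99  # prior probability that the predictor calls your choice right

# Expected values conditional on your own choice:
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0
ev_two_box = (1 - accuracy) * (1_000_000 + 1_000) + accuracy * 1_000

print(f"one-box: ${ev_one_box:,.0f}")  # $990,000
print(f"two-box: ${ev_two_box:,.0f}")  # $11,000
```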

Edit 2: Clarification regarding backwards causality, which seems to confuse people:

Newcomb assumes that Omega is omniscient, which, more importantly, means that the decision you make right now determines whether Omega has already put money in the box. Obviously this is backwards causality, and therefore not possible in real life, which is why Nozick doesn’t spend much ink on this case.

But if you rule out the possibility of backwards causality, Omega can only base its prediction of your decision on all your actions up to the point where it has to decide whether to put money in the box. In that case, if you take two people who have so far always acted (decided) identically, but one will one-box while the other will two-box, Omega cannot make different predictions for them. And no matter what prediction Omega makes, you don’t want to be the one who one-boxes.
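Spelled out as arithmetic (a sketch under the no-backwards-causality assumption, using the standard $1,000,000 / $1,000 payoffs): once the prediction is locked in, your choice cannot change what is in the box, so two-boxing comes out ahead by $1,000 in both possible worlds. That is the entire reason CDT two-boxes.

```python
# Once the prediction (and hence the opaque box) is fixed, compare the two
# actions state by state. Payoffs are the standard $1,000,000 / $1,000.

for opaque in (1_000_000, 0):  # whatever Omega already did
    one_box = opaque
    two_box = opaque + 1_000
    print(f"box holds ${opaque:>9,}: "
          f"one-box ${one_box:>9,}, two-box ${two_box:>9,} (+$1,000)")
```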

Edit 3: Further clarification on the possible problems that could be considered Newcomb:

There are four types of Newcomb problems:

  1. Omniscient Omega, backwards causality - CDT rejects this case, which cannot exist in reality.

  2. Fallible Omega, but still backwards causality - CDT rejects this case, which cannot exist in reality.

  3. Infallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.

  4. Fallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.

That’s all there is to it.

Edit 4: Excerpt from Nozick’s “Newcomb’s Problem and Two Principles of Choice”:

Now, at last, to return to Newcomb’s example of the predictor. If one believes, for this case, that there is backwards causality, that your choice causes the money to be there or not, that it causes him to have made the prediction that he made, then there is no problem. One takes only what is in the second box. Or if one believes that the way the predictor works is by looking into the future; he, in some sense, sees what you are doing, and hence is no more likely to be wrong about what you do than someone else who is standing there at the time and watching you, and would normally see you, say, open only one box, then there is no problem. You take only what is in the second box. But suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what he did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made. So let us agree that the predictor works as follows: He observes you sometime before you are faced with the choice, examines you with complicated apparatus, etc., and then uses his theory to predict on the basis of this state you were in, what choice you would make later when faced with the choice. Your deciding to do as you do is not part of the explanation of why he makes the prediction he does, though your being in a certain state earlier, is part of the explanation of why he makes the prediction he does, and why you decide as you do.

I believe that one should take what is in both boxes. I fear that the considerations I have adduced thus far will not convince those proponents of taking only what is in the second box. Furthermore I suspect that an adequate solution to this problem will go much deeper than I have yet gone or shall go in this paper. So I want to pose one question. I assume that it is clear that in the vaccine example, the person should not be convinced by the probability argument, and should choose the dominant action. I assume also that it is clear that in the case of the two brothers, the brother should not be convinced by the probability argument offered. The question I should like to put to proponents of taking only what is in the second box in Newcomb’s example (and hence not performing the dominant action) is: what is the difference between Newcomb’s example and the other two examples which make the difference between not following the dominance principle, and following it?