Predictors exist: CDT going bonkers… forever
I’ve been wanting a better example of CDT (causal decision theory) misbehaving: one where the behaviour is more clearly suboptimal than in the Newcomb problem (which many people don’t seem to accept as showing CDT is suboptimal), and simpler to grasp than Death in Damascus.
The “predictors exist” problem
So consider this simple example: the player is playing against Omega, who will predict their actions. The player can take three actions: “zero”, “one”, or “leave”.
If they ever choose “leave”, the experiment is over and they leave. If they choose “zero” or “one”, then Omega will predict their action and compare this prediction to their actual action. If the two match, the player loses utility and the game repeats; if the action and the prediction differ, the player gains utility and the experiment ends.
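The setup can be sketched as a short simulation. This is only an illustration under assumptions the post doesn’t pin down: the exact payoff amounts (`LOSS`, `GAIN` below are invented), and a model of Omega as simply re-running the player’s own policy on the same history — the simplest way to be a perfect predictor of a deterministic agent.

```python
# Hypothetical payoffs -- the post leaves the exact amounts unspecified.
LOSS, GAIN = 1.0, 3.0  # assumption for illustration only

def play(player_policy, max_rounds=10):
    """Run the game. Omega 'predicts' by running the player's own policy
    on the same history -- i.e. a perfect model of the player."""
    history, total = [], 0.0
    for _ in range(max_rounds):
        action = player_policy(list(history))
        if action == "leave":
            return total                # experiment over
        prediction = player_policy(list(history))  # perfect prediction
        if prediction == action:
            total -= LOSS               # matched: lose and repeat
            history.append(action)
        else:
            return total + GAIN         # fooled Omega: gain and stop

    return total

def stubborn_player(history):
    # Tries to be unpredictable by alternating -- but any policy that is
    # a function of the visible history is exactly what Omega copies.
    return "one" if len(history) % 2 == 0 else "zero"

def leaver(history):
    return "leave"

print(play(stubborn_player))  # -10.0: predicted (and loses) every round
print(play(leaver))           # 0.0: ends the game immediately
```

The point of the sketch: no deterministic policy can beat a predictor that has a copy of that policy, so the only non-losing move is to leave.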
Assume that Omega actually is a perfect or quasi-perfect predictor, with a good model of the player. An FDT or EDT agent would realise, after a few tries, that they couldn’t trick Omega, and would quickly end the game by choosing “leave”.
But the CDT player is incapable of reaching this conclusion. Whatever distribution they compute over Omega’s prediction, they will always estimate that they (the CDT player) have at least a 1/2 chance of choosing the option other than the predicted one, and hence a positive subjective expected utility from playing on.
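This calculation can be made concrete. The snippet below (my own illustration, with assumed payoffs `gain` and `loss` that the post doesn’t specify) computes CDT’s subjective best expected value given a fixed belief `p_zero` that Omega predicted “zero”: whatever the belief, the best response wins with subjective probability `max(p, 1-p) >= 1/2`, so playing never looks bad to the CDT agent.

```python
def cdt_subjective_best(p_zero, gain=3.0, loss=1.0):
    """CDT's causal reasoning: treat Omega's prediction as a fixed fact,
    believed to be 'zero' with probability p_zero, then pick the action
    with the higher expected value. gain/loss are assumed payoffs."""
    # Saying "one" wins exactly when the prediction was "zero", and
    # vice versa -- under CDT, the action cannot influence the prediction.
    ev_one = p_zero * gain - (1 - p_zero) * loss
    ev_zero = (1 - p_zero) * gain - p_zero * loss
    return max(ev_one, ev_zero)

# The subjective win probability is max(p, 1-p) >= 1/2 for every belief,
# so the computed value of playing is bounded below, e.g.:
for p in [0.0, 0.3, 0.5, 0.9]:
    print(p, cdt_subjective_best(p))
```

Because the agent treats the prediction as causally fixed, no evidence ever lowers this estimate — which is exactly the failure the post describes.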
Basically, the CDT agent can never learn that Omega is a good predictor of themselves. And so they will continue playing, and continue losing… forever.
Omega need not make this prediction before the player takes their action, nor even without seeing that action; what matters is that the prediction is made independently of this knowledge. And that’s enough for CDT.
For example, suppose the CDT agent estimates the prediction will be “zero” with probability p, and “one” with probability 1-p. Then if p ≥ 1/2, they can say “one”, and have a probability p ≥ 1/2 of winning, in their own view. If p < 1/2, they can say “zero”, and have a subjective probability 1-p > 1/2 of winning.
The CDT agent has no problem believing that Omega is a perfect predictor of other agents, however.