Omega will predict their action, and compare this to their actual action. If the two match...
For a perfect predictor the above simplifies to “lose 1 utility”, of course. Are you saying that your interpretation of EDT would fight the hypothetical and refuse to admit that perfect predictors can be imagined?
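A minimal sketch of why the expectation collapses for a perfect predictor. The payoff on a *mismatch* is my assumption (the thread only states the "match" case); the point is that as the predictor's accuracy goes to 1, the agent's expected utility is the match payoff no matter which action it picks:

```python
def expected_utility(p_correct: float, match_payoff: float = -1.0,
                     mismatch_payoff: float = +1.0) -> float:
    """Expected utility of ANY action against a predictor that is
    correct with probability p_correct.

    match_payoff = -1 comes from the thread ("lose 1 utility");
    mismatch_payoff = +1 is an assumed stand-in for illustration.
    """
    return p_correct * match_payoff + (1.0 - p_correct) * mismatch_payoff

# A perfect predictor always matches, so every action yields -1:
print(expected_utility(1.0))   # -1.0
# An imperfect predictor leaves some value on the table:
print(expected_utility(0.75))  # -0.5
```

Since `p_correct` does not depend on which action the agent chooses (the predictor matches it either way), the expression is constant across actions, which is exactly the "lose 1 utility, full stop" simplification.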
CDT would fight the hypothetical, and refuse to admit that perfect predictors of their own actions exist (the CDT agent is perfectly fine with perfect predictors of other people’s actions).
That… doesn’t seem like a self-consistent decision theory at all. I wonder if any CDT proponents agree with your characterization.
I’m using CDT as it’s formally stated (in, e.g., the FDT paper).
The best defence I can imagine from a CDT proponent: CDT is decision theory, not game theory. Anything involving predictors is game theory, so it doesn’t count.