a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
A simple expected utility maximization does. A CDT decision doesn’t. Formally specifying a maximization algorithm that behaves like CDT is, from what I understand, less simple than making it follow UDT.
If all we need to do is maximize expected utility, then where is the need for an “advanced” decision theory?
From Wikipedia: “Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences.”
It seems to me that the source of the problem is in that phrase “causal consequences”, and the confusion surrounding the whole notion of causality. The two problems mentioned in the article are hard to fit within standard notions of causality.
It’s worth mentioning that you can turn Pearl’s causal nets into plain old Bayesian networks by explicitly modeling the notion of an intervention. (Pearl himself mentions this in his book.) You just have to add some additional variables and their effects; this allows you to incorporate the information contained in your causal intuitions.
This suggests to me that causality really isn’t a fundamental concept, and that causality conundrums result from failing to include all the relevant information in your model.
[The term “model” here just refers to the joint probability distribution you use to represent your state of information.]
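To make the intervention-as-variable trick concrete, here’s a minimal sketch (all names and numbers are my own toy example, not Pearl’s): a confounder U drives both X and Y, and we add an explicit intervention node I_X to an ordinary joint distribution. Conditioning on I_X = "do1" in the plain Bayesian net then reproduces the interventional distribution P(Y | do(X=1)), which differs from the observational P(Y | X=1).

```python
from itertools import product

# Toy model (hypothetical): confounder U -> X and U -> Y.
# An explicit intervention node I_X is added as an ordinary variable.
# When I_X == "idle", X follows its natural mechanism (here X = U);
# when I_X == "do0"/"do1", X is clamped, severing the U -> X influence.

def p_u(u):          # P(U): fair coin
    return 0.5

def p_x(x, u, i_x):  # P(X | U, I_X)
    if i_x == "idle":
        return 1.0 if x == u else 0.0
    forced = 1 if i_x == "do1" else 0
    return 1.0 if x == forced else 0.0

def p_y(y, x, u):    # P(Y | X, U): Y = X XOR U, deterministic
    return 1.0 if y == (x ^ u) else 0.0

def query(y, i_x, x_obs=None):
    """P(Y=y | I_X=i_x [, X=x_obs]) by brute-force enumeration."""
    num = den = 0.0
    for u, x in product((0, 1), repeat=2):
        if x_obs is not None and x != x_obs:
            continue
        w = p_u(u) * p_x(x, u, i_x)
        den += w
        num += w * p_y(y, x, u)
    return num / den

# Observational: condition on X=1 with the intervention node idle
obs = query(1, "idle", x_obs=1)   # P(Y=1 | X=1) -> 0.0 (X=1 reveals U=1)
# Interventional: condition on the intervention node instead
interv = query(1, "do1")          # P(Y=1 | do(X=1)) -> 0.5 (U unchanged)
print(obs, interv)
```

The point of the sketch is that nothing “causal” is happening in the machinery: both queries are ordinary conditional probabilities in one joint distribution; the causal information lives entirely in the extra variable and its effects.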
What I’m getting at with all of this is that if you model your information correctly, the difference between Causal Decision Theory and Evidential Decision Theory dissolves, and Newcomb’s Paradox and the Cloned Prisoner’s Dilemma are easily resolved.
I think I’m going to have to write this up as an article of my own to really explain myself...
See my comment here—though if this problem keeps coming up then a post should be written by someone I guess.