They’re no more artificial than the rest of Game Theory.
That’s an invalid analogy. We use mathematical models that we know are ideal approximations to reality all the time… but they are intended to be approximations of actually encountered circumstances. The examples given in the article bear no relevance to any circumstance any human being has ever encountered.
There may be a good deal of advanced decision-theory structure in the way people subconsciously decide to trust one another given partial information, and that’s something a CDT analysis would treat as irrational even when it’s beneficial.
That doesn’t follow from anything said in the article. Care to explain further?
One bit of relevance is that “rational” has been wrongly conflated with strategies akin to defecting in the Prisoner’s Dilemma.
Defecting is the right thing to do in the Prisoner’s Dilemma itself; it is only when you modify the conditions in some way (implicitly changing the payoffs, or having the other player’s decision depend on yours) that the best decision changes. In your example of the mental clone, a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
A simple expected-utility maximization does. A CDT calculation doesn’t. Formally specifying a maximization algorithm that behaves like CDT is, from what I understand, less simple than making it follow UDT.
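To make that concrete, here’s a sketch with illustrative payoffs (my numbers, not anything from the article): once you condition on the other player copying your move, plain expected-utility maximization picks cooperation with no extra machinery.

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better for me).
# Keys: (my_move, other_move); values: my payoff.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, other defects
    ("D", "C"): 5,  # I defect, other cooperates
    ("D", "D"): 1,  # mutual defection
}

def expected_utility(my_move, p_other_copies_me):
    """Expected payoff when the other player mirrors my move with this probability."""
    same_payoff = PAYOFF[(my_move, my_move)]
    other_move = "D" if my_move == "C" else "C"
    diff_payoff = PAYOFF[(my_move, other_move)]
    return p_other_copies_me * same_payoff + (1 - p_other_copies_me) * diff_payoff

# Against a perfect mental clone (p = 1), cooperation maximizes expected utility:
best = max(["C", "D"], key=lambda m: expected_utility(m, 1.0))
print(best)  # prints C
```

Against an uncorrelated opponent (p = 0.5, say), the same calculation recommends defection, which is the standard one-shot answer.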
If all we need to do is maximize expected utility, then where is the need for an “advanced” decision theory?
From Wikipedia: “Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences.”
It seems to me that the source of the problem is in that phrase “causal consequences”, and the confusion surrounding the whole notion of causality. The two problems mentioned in the article are hard to fit within standard notions of causality.
It’s worth mentioning that you can turn Pearl’s causal nets into plain old Bayesian networks by explicitly modeling the notion of an intervention. (Pearl himself mentions this in his book.) You just have to add some additional variables and their effects; this allows you to incorporate the information contained in your causal intuitions.
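A toy sketch of that construction (my own example, not one of Pearl’s): add an explicit variable recording whether the quantity was set by hand, and “intervening” becomes ordinary conditioning in the augmented Bayesian network. Here a storm causes both a low barometer reading and rain; observing a low reading is evidence of rain, while forcing the reading low is not.

```python
# Folding interventions into an ordinary Bayesian network by adding an
# explicit intervention variable. All numbers are made up for illustration.
#
# Natural mechanism: storm -> barometer_low, storm -> rain.
# The extra variable do_force_low cuts the storm -> barometer link when true.

P_STORM = 0.3

def p_barometer_low(storm, do_force_low):
    if do_force_low:
        return 1.0          # intervention overrides the natural mechanism
    return 0.9 if storm else 0.1

def p_rain(storm):
    return 0.8 if storm else 0.1

def prob_rain_given(barometer_low, do_force_low):
    """P(rain | barometer reading, intervention flag) by brute-force enumeration."""
    numerator = denominator = 0.0
    for storm in (True, False):
        p_s = P_STORM if storm else 1 - P_STORM
        p_b = p_barometer_low(storm, do_force_low)
        if not barometer_low:
            p_b = 1 - p_b
        joint = p_s * p_b
        denominator += joint
        numerator += joint * p_rain(storm)
    return numerator / denominator

# Observing a low reading raises the probability of rain;
# forcing the reading low leaves it at the base rate.
print(prob_rain_given(True, do_force_low=False))
print(prob_rain_given(True, do_force_low=True))
```

Conditioning on the intervention flag reproduces what the do-calculus would give, using nothing but ordinary probability.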
This suggests to me that causality really isn’t a fundamental concept, and that causality conundrums result from failing to include all the relevant information in your model.
[The term “model” here just refers to the joint probability distribution you use to represent your state of information.]
Where I’m getting to with all of this is that if you model your information correctly, the difference between Causal Decision Theory and Evidential Decision Theory dissolves, and Newcomb’s Paradox and the Cloned Prisoner’s Dilemma are easily resolved.
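As a hedged illustration with the standard textbook payoffs (the article doesn’t give numbers): if the model includes the predictor’s accuracy, one-boxing in Newcomb’s problem falls out of ordinary expected-utility maximization.

```python
# Newcomb's problem as plain expected-utility maximization, with the
# predictor's accuracy included in the model. Payoffs are the usual
# illustrative ones, not taken from the article.

BOX_A = 1_000_000   # opaque box, filled iff the predictor foresaw one-boxing
BOX_B = 1_000       # transparent box, always contains this amount

def expected_payout(one_box, predictor_accuracy):
    if one_box:
        # Box A is full exactly when the predictor was right about you.
        return predictor_accuracy * BOX_A
    # Two-boxing: you always get box B; box A is full only if the predictor erred.
    return BOX_B + (1 - predictor_accuracy) * BOX_A

# With a reliable predictor, one-boxing dominates in expectation:
print(expected_payout(True, 0.99))
print(expected_payout(False, 0.99))
```

The crossover is purely a function of accuracy: with these payoffs, any predictor better than about 50.05% accurate makes one-boxing the higher-expectation choice.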
I think I’m going to have to write this up as an article of my own to really explain myself...
See my comment here—though if this problem keeps coming up then a post should be written by someone I guess.