Yes, that is a disadvantage to this formulation… as with real-world analogues of the Prisoner’s Dilemma, personal ethical principles tend to creep in and muddy the purely game-theoretic calculations. The key question, though, is not how well you can lie—it’s whether, once you’ve decided to answer honestly (whether out of ethics or because of the lie detector), you can credibly say you’ll stay and precommit to not changing your mind after the test is over.
As for why you should care, the truth is that for most situations where causal decision theory (CDT) gives us a harmful answer, most people already tend not to use causal decision theory. Instead we use a set of heuristics built up over time and experience—things like altruism or desire for revenge. As long as the decisions you face more or less match the environment in which these heuristics were developed, they work pretty well, or at least better than CDT. For example, in the ultimatum game, the responses of the general population are pretty close to the recommendations of UDT, while economists do worse (sorry, can’t find the link right now).
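To make the ultimatum-game point concrete, here is a toy sketch (the threshold value is an invented parameter, not anything UDT specifically prescribes): a pure CDT responder accepts any positive offer, since rejecting gains nothing causally, so the proposer can get away with offering the minimum. A responder precommitted to rejecting unfair offers forces the proposer to offer more, and walks away with a larger share.

```python
# Toy ultimatum game: a proposer splits a pie of 10 with a responder.
# Rejected offers pay both players zero.

PIE = 10

def best_offer(accepts):
    """Proposer picks the offer that maximizes their own payoff,
    given the responder's (known) acceptance rule."""
    return max(range(PIE + 1),
               key=lambda offer: (PIE - offer) if accepts(offer) else 0)

# A CDT responder accepts anything positive: something beats nothing.
cdt_accepts = lambda offer: offer >= 1

# A precommitted responder rejects below a fairness threshold
# (the value 4 is made up for illustration).
threshold_accepts = lambda offer: offer >= 4

print(best_offer(cdt_accepts))        # CDT responder is offered the minimum: 1
print(best_offer(threshold_accepts))  # precommitted responder is offered 4
```

The precommitted responder does worse in the (now off-equilibrium) case where a low offer actually arrives, but does better overall because the proposer, anticipating the rejection, never makes that offer.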
Really understanding decision theory, to the extent that we can understand it, is useful when either the heuristics fail (hyperbolic discounting, maybe? plus more exotic hypotheticals) or when you need to set up formal decision-making rules for a system. Imagine a company, for instance, that has it written irrevocably into the charter that it will never settle a lawsuit. Lawyer costs per lawsuit go up, but the number of payouts goes down as people have less incentive to sue. Generalizing this kind of precommitment would be even more useful.
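The charter example comes down to simple expected-cost arithmetic. Here is a back-of-envelope sketch with entirely invented numbers: settling is cheaper per suit but invites many more suits, while an irrevocable refusal to settle raises legal costs per suit yet deters most filings.

```python
# All figures below are hypothetical, chosen only to illustrate the trade-off.

def annual_cost(n_suits, legal_cost, payout_rate, avg_payout):
    """Expected yearly legal spending: per-suit fees plus expected payouts."""
    return n_suits * (legal_cost + payout_rate * avg_payout)

# Willing to settle: many suits, modest fees, most end in a payout.
settler = annual_cost(n_suits=100, legal_cost=20_000,
                      payout_rate=0.8, avg_payout=50_000)

# Charter forbids settling: far fewer suits, pricier trials, fewer payouts.
committed = annual_cost(n_suits=25, legal_cost=80_000,
                        payout_rate=0.3, avg_payout=50_000)

print(settler, committed)  # the precommitted firm spends less overall
```

The precommitment only pays off if it is believed, which is exactly why it has to be irrevocable: a firm that could quietly settle after all would face the settler's incentives again.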
UDT might also allow cooperation between people who understand it, in situations where there are normally large costs associated with lack of trust. Insurance, for instance, or collective bargaining (or price-fixing: not all applications are necessarily good).