I don’t know if these comments will be helpful, or even pertinent to the underlying effort of posing and answering these kinds of problems. I do have a “why care” reaction to both the standard Newcomb’s Paradox/Problem and the above formulation. I think that’s because I fail to see how either relates to anything I actually deal with in my life, so they seem like “solutions in search of a problem”. That could just be me, though...
I do notice, for me at least, a subtle difference between the two settings. Newcomb formulates a problem that is morally neutral. The psychologist, by contrast, seems to set up the incentives along the lines of: can I lie well enough to get the $200 and keep my 10 minutes? Once you take the test, the envelope’s contents are set; waiting or not has no force, and apparently no impact on the experiment’s results from the psychologist’s perspective either.
Is the behavior one adopts as a solution to the problem more about personal ethics and honesty than about the payoffs themselves?
Yes, that is a disadvantage of this formulation… as with real-world analogues of the Prisoner’s Dilemma, personal ethical principles tend to creep in and muddy the purely game-theoretic calculations. The key question, though, is not how well you can lie; it’s whether, once you’ve decided to be honest (whether from ethics or because of the lie detector), you can truthfully say you’ll stay, and precommit to not changing your mind after the test is over.
As for why you should care: for most situations where causal decision theory (CDT) gives a harmful answer, people already tend not to use it. Instead we use a set of heuristics built up over time and experience, things like altruism or the desire for revenge. As long as the decisions you face more or less match the environment in which those heuristics developed, they work pretty well, or at least better than CDT. For example, in the ultimatum game, the responses of the general population are pretty close to the recommendations of updateless decision theory (UDT), while economists do worse (sorry, can’t find the link right now).
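To make that concrete, here’s a toy sketch in Python, with made-up numbers and a hypothetical $4 rejection threshold (nothing here is taken from the actual experiments): a responder who credibly precommits to rejecting lowball offers extracts more from a proposer who knows the policy than one who, CDT-style, accepts anything positive.

```python
# Toy ultimatum game: a proposer splits $10 with a responder.
# A CDT-style responder accepts any positive offer, so a proposer
# who knows this offers the minimum. A responder precommitted to
# rejecting offers below a threshold forces larger offers.
# All numbers are illustrative assumptions, not experimental data.

POT = 10

def proposer_best_offer(accepts):
    """Smallest whole-dollar offer the responder's policy accepts."""
    for offer in range(POT + 1):
        if accepts(offer):
            return offer
    return None  # responder rejects everything; no deal

cdt_accepts = lambda offer: offer > 0          # accept anything positive
committed_accepts = lambda offer: offer >= 4   # assumed precommitted threshold

for name, policy in [("CDT responder", cdt_accepts),
                     ("Precommitted responder", committed_accepts)]:
    offer = proposer_best_offer(policy)
    payoff = offer if offer is not None else 0
    print(f"{name}: receives ${payoff}")
# CDT responder: receives $1
# Precommitted responder: receives $4
```

The particular threshold doesn’t matter; the point is that the payoff depends on the proposer’s model of your policy, which is exactly what CDT’s “the offer is already on the table” reasoning ignores.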
Really understanding decision theory, to the extent that we can understand it, is useful either when the heuristics fail (hyperbolic discounting, maybe? plus more exotic hypotheticals) or when you need to set up formal decision-making rules for a system. Imagine a company, for instance, that has it written irrevocably into its charter that it will never settle a lawsuit. Lawyer costs per lawsuit go up, but the number of payouts goes down, because people have less incentive to sue. Generalizing this kind of precommitment would be even more useful.
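A back-of-the-envelope sketch of that charter, with invented figures for suit counts and costs (none of these numbers come from the original example): the precommitment can lower total expected legal costs even though each individual trial costs more than a settlement would have.

```python
# Back-of-the-envelope model of the "never settle" charter.
# All figures are invented for illustration.

# Without the precommitment: many suits, most settled cheaply.
suits_per_year  = 20
settle_fraction = 0.8       # most plaintiffs take a settlement
settlement_cost = 50_000
trial_cost      = 200_000   # legal fees plus expected judgment

cost_flexible = suits_per_year * (
    settle_fraction * settlement_cost
    + (1 - settle_fraction) * trial_cost
)

# With the precommitment: every suit goes to trial (costlier each),
# but far fewer suits get filed, since easy settlements are off the table.
suits_with_commitment = 5
cost_committed = suits_with_commitment * trial_cost

print(f"Flexible policy: ${cost_flexible:,.0f}/year")   # $1,600,000/year
print(f"Never-settle:    ${cost_committed:,.0f}/year")  # $1,000,000/year
```

Whether the precommitment actually wins depends entirely on how strongly the no-settlement policy deters suits; the sketch just shows the shape of the tradeoff.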
UDT might also allow cooperation between people who understand it, in situations where there are normally large costs associated with lack of trust. Insurance, for instance, or collective bargaining (or price-fixing: not all applications are necessarily good).