Why do you hope I’d let a billion people die (from a proposed quantification in another comment)?
This is actually rather different from a classic PD, in that mutual cooperation, C(C), is not the collectively desirable outcome.
Payoffs, written You(Creationist):
D(D): 1 billion live
D(C): 3 billion live
C(D): 0 live
C(C): 2 billion live
Under the traditional PD, D(C) is best for you, but worst for him. Under this PD, D(C) is best for both of you. He wants you to defect and he wants to cooperate himself; he just doesn’t know it. Valuing his utility does not save this the way it does in the traditional PD. Assuming he’s vaguely rational, he will end up happier if you choose to defect, regardless of his choice. Furthermore, he thinks you will be happier if he defects, so he has absolutely no reason to cooperate.
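To make the dominance argument concrete, here is a minimal sketch in Python (my own illustration, not from the original discussion) that takes the payoffs above, in billions of lives saved, and checks both which strategy dominates for me and which outcome is collectively best:

```python
# Payoffs in billions of lives saved, indexed as (my_move, his_move).
# Lives saved serve as the shared utility: both players should prefer
# outcomes where more people live.
payoffs = {
    ("D", "D"): 1,
    ("D", "C"): 3,
    ("C", "D"): 0,
    ("C", "C"): 2,
}

# Strict dominance check: does D beat C for me against each move of his?
d_dominates = all(
    payoffs[("D", his)] > payoffs[("C", his)] for his in ("D", "C")
)
print("D strictly dominates C for me:", d_dominates)  # True

# The collectively best outcome is simply the one saving the most lives.
best = max(payoffs, key=payoffs.get)
print("Best outcome (me, him):", best)  # ('D', 'C'), not ('C', 'C')
```

Unlike the standard PD, the cell that maximizes lives saved is D(C) rather than C(C), which is why mutual cooperation has no special claim here.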
If cooperating were the only way to guarantee his cooperation, you should do so. However, the PD generally assumes such prior commitments are not possible. And unlike the traditional PD, C(C) does not lead to the best possible collective outcome. Thus, you should try your hardest to convince him to cooperate, and then you should defect. He’ll thank you for it when another billion people don’t die.
The medical situation is more confusing because I don’t think it’s realistic. I sincerely doubt you would have two vaguely rational doctors who would both put 99% confidence on a diagnosis while knowing that the other doctor was at least 99% confident that the diagnosis was incorrect. Both doctors should therefore revise their estimates substantially downwards, and so should probably cooperate. If you take the hypothetical at face value, it seems like you both should defect, even though, again, D(C) would be the optimal outcome from your perspective.
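For intuition on why both estimates should fall: here is a hedged sketch, assuming the two doctors are equally reliable, start from a flat prior, and observed independent evidence (all assumptions of mine, not part of the hypothetical). Combining their opinions in log-odds, the two 99% confidences roughly cancel:

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(lo):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-lo))

# Doctor A assigns 99% to diagnosis X; Doctor B assigns 99% to not-X.
# With a flat prior, equally reliable doctors, and independent evidence,
# their log-odds contributions for X cancel almost exactly.
evidence_a = log_odds(0.99)   # about +4.6 in favor of X
evidence_b = log_odds(0.01)   # about -4.6 against X
combined = prob(evidence_a + evidence_b)
print(f"Combined confidence in X: {combined:.2f}")  # ~0.50
```

Each doctor, on learning the other’s confidence, should land far closer to 50% than to 99%, which is what drives the "amend your estimates downwards" conclusion.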
The real problem I’m having with some of these comments is that they assume my decision to defect or cooperate affects his decision, which does not seem to be part of the hypothetical. Frankly, I don’t see how people reach that conclusion in this context, given that it’s a one-shot game with a different collective payoff matrix from the traditional PD.