If you can’t convince the creationist of evolution in the time available, but there is a way for both of you to bindingly precommit, it’s uncontroversial that (C,C) is the lifesaving choice, because you save 2 billion rather than 1.
The question is whether there is a general way for quasi-rational agents to act as if they had precommitted to the Pareto equilibrium when dealing with an agent of the same sort. If they could do so and publicly (unfakeably) signal as much, then such agents would have an advantage in general PDs. A ritual of cognition such as this is an attempt to do just that.
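The payoff structure being argued over can be made concrete. This is a minimal sketch, not anything from the original posts: the per-side numbers are assumptions chosen only to match the totals cited in this thread (mutual cooperation saves 2 billion, mutual defection 1 billion) while preserving the classic PD ordering T > R > P > S that makes defection individually tempting.

```python
# Hypothetical payoffs in billions of lives saved (your side, their side).
# Chosen to match the thread's totals: (C,C) saves 2bn, (D,D) saves 1bn.
payoff = {
    ("C", "C"): (1.0, 1.0),   # mutual cooperation: 2bn total
    ("C", "D"): (0.0, 1.5),   # lone cooperator loses, defector gains
    ("D", "C"): (1.5, 0.0),
    ("D", "D"): (0.5, 0.5),   # mutual defection: 1bn total
}

def total_saved(a, b):
    """Total lives saved (billions) given the two players' moves."""
    return sum(payoff[(a, b)])

# (C,C) is the Pareto-superior outcome the precommitment aims at...
assert total_saved("C", "C") > total_saved("D", "D")

# ...yet each player is individually tempted to defect (PD structure):
assert payoff[("D", "C")][0] > payoff[("C", "C")][0]  # T > R
assert payoff[("D", "D")][0] > payoff[("C", "D")][0]  # P > S
```

With these (assumed) numbers, a binding mutual precommitment to (C,C) saves 2 billion rather than 1, which is the uncontroversial point above; the contested question is whether agents can reach that outcome without an external binding mechanism.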
EDIT: In case it’s this ambiguity, MBlume’s strategy isn’t “cooperate in any scenario”, but “visibly be the sort of person who can cooperate in a one-shot PD with someone else who also accepts this strategy, and try to convince the creationist to think the same way”. If it looks like the creationist will try to defect, MBlume will defect as well.
In case it’s this ambiguity, MBlume’s strategy isn’t “cooperate in any scenario”
Ah. It did look to me as though he was suggesting that. For, after describing how we would try to convince the creationist to cooperate (by trying to convince them of their epistemic error), he writes:
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart’s ignorance.
I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people who would die due to their defection. In that case, to suggest that we ought to cooperate nonetheless would seem futile in the extreme—hence my comment about merely adding to the reasons to weep.
But I take it your proposal is that MBlume meant something else: not that we would fail to convince the creationist to cooperate, but rather that we would fail to convince them to let us defect. That would make more sense. (But it is not at all clear from what he wrote.)
I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people that would die due to their defection.
I read it as saying that if the creationist could have been convinced of evolution, then 3 billion rather than 2 billion could have been saved; after the door shuts, MBlume then follows the policy of “both cooperate if we still disagree” that he and the creationist both signaled they were genuinely capable of.
(But it is not at all clear from what he wrote.)
I have to agree. MBlume, you should have written this post so that someone reading it on its own doesn’t get a false impression. It makes sense within the debate, and especially in the context of your previous post, but is very ambiguous if it’s the first thing one reads.
There’s perhaps one more source of ambiguity: the distinction between
the assertion that “cooperate without communication, given only mutual knowledge of complete rationality in decision theory” is part of the completely rational decision theory, and
the discussion of “agree to mutually cooperate in such a fashion that you each unfakeably signal your sincerity” as a feasible PD strategy for quasi-rational human beings.
If all goes well, I’d like to post on this myself soon.
I can see how it looks to you as if MBlume’s strategy prizes his ritual of cognition over that which he should protect, but be careful and charitable before you sling that accusation around here. This is a debate with a bit of a history on LW.