This is a good illustration of anthropic reasoning, but it illustrates the presumptuous philosopher, not the Doomsday Argument (though the two are symmetric in a sense). Here we have people saying "I expect to fail, but I will do it anyway, because I hope others will succeed and we all make the same decision". Hence it is the total utilitarian (the "SIAish" agent) who acts against what seem to be the objective probabilities.
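As a minimal sketch of why linked decisions push the total utilitarian this way (the setup and symbols here are my own illustrative assumptions, not from the original): put equal prior probability on a small world with $N_s$ agents and a large world with $N_l \gg N_s$ agents, and let every agent bet that the large world is actual, gaining $b$ if right and losing $c$ if wrong. Since all agents decide identically, the total utilitarian evaluates

$$\mathbb{E}[\Delta U_{\mathrm{total}}] = \tfrac{1}{2}\,N_l\,b - \tfrac{1}{2}\,N_s\,c,$$

which is positive whenever $N_l b > N_s c$. So each agent takes the bet at nearly any odds, despite the objectively even prior; the population weighting reproduces SIA-style behavior without SIA-style credences.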
http://lesswrong.com/lw/8bw/anthropic_decision_theory_vi_applying_adt_to/