So far, neither the reasons for humanity’s potential future demise nor the reasons humanity has not been destroyed yet fit very well into the logical and predictive frames established by decision theory, game theory, or rationalist-coded Bayesian setups. We appear both to be spectacularly irrational and to get away with it by terrific strokes of luck that beggar explanation (see 1, 2).
Most interestingly, we seem to have evaded certain situations that, under standard assumptions of rational actors, would probably have resulted in most of human civilisation being wiped out. (I don’t think the RAND employees were wrong in saying that taking the MAD framework seriously implied they shouldn’t take out pensions, for example.) The standard answer to this is to say that “we got lucky” and then make some handwaving gestures towards evolution hard-coding irrational hacks into us. The rationalist then concludes that we should simply work harder at becoming a super-rational, optimal society, despite the fact that rationalist methods do not seem to “win” even when playing against decidedly “irrational” opponents.
At this point, another answer might be that the theory needs an update to match observed evidence.