On the point about ‘Deterioration of collective epistemology’, and how it might interact with an impending risk, we have some recent evidence in the form of the Coronavirus response.
It’s important to note the potential role here of sleepwalk bias and the Morituri Nolumus Mori (MNM) effect. As I conceptualised it, sufficiently terrible collective epistemology can vitiate any advantage you might expect from the MNM effect discounting sleepwalk bias, but it has to be so bad that current danger is somehow rendered invisible. In other words, the MNM effect says the quality of our collective epistemology and the severity of the danger are not independent: we can get slightly smarter in some relevant ways when the stakes go up, though there do appear to be some levels of impaired collective epistemology that are hard to recover from even at high stakes. If the information about a risk is effectively or actually inaccessible, we don’t respond to it.
On the other hand, the MNM effect requires leaders and individuals to have access to information about the current state of the world (i.e. how dangerous things are at the moment). Even in countries with a reasonably free flow of information this is not a given. If you accept Eliezer Yudkowsky’s thesis that clickbait has impaired our ability to understand a persistent, objective external world, then you might be more pessimistic about the MNM effect going forward. Perhaps for this reason, we should expect countries with higher social trust, and therefore more ability for individuals to agree on a consensus reality and understand the level of danger posed, to perform better. Japan and Northern European countries like Denmark and Sweden come to mind, and all of them have performed better than the stringency of their governments’ mitigation measures would suggest.