The part of my post that is relevant to AI alignment is right at the end, but I say something similar to Rohin, that we actually have significantly mitigated the effects of Coronavirus but have still failed in a certain specific way -
The lesson to be learned is that there may be a phase shift in the level of danger posed by certain X-risks—if the amount of advance warning or the speed of the unfolding disaster is above some minimal threshold, even if that threshold would seem like far too little time to do anything given our previous inadequacy, then there is still a chance for the MNM effect to take over and avert the worst outcome. In other words, AI takeoff with a small amount of forewarning might go a lot better than a scenario where there is no forewarning, even if past performance suggests we would do nothing useful with that forewarning.
More speculatively, I think we can see the MNM effect’s influence in other settings where we have consistently avoided the very worst outcomes despite systematic inadequacy—Anders Sandberg referenced something like it when he was discussing the probability of nuclear war. There have been many near misses when nuclear war could have started, which suggests we can't simply have been lucky over and over. Instead, there has been a strong skew towards interventions that halt disaster at the last moment, compared to not-the-last-moment: