This seems like a misleading example of doomers being wrong (agree denotationally, disagree connotationally), since I think it’s plausible that Y2K was not a big deal (to such an extent that “most people think it was a myth, hoax, or urban legend”) precisely because of the mitigation efforts spurred by the doomsayers’ predictions.
However, IIRC I’ve heard it said that the Y2K bug didn’t cause serious problems even in countries where there wasn’t much effort to deal with it, and hence that the doomsayers’ predictions were exaggerated (in that far smaller mitigation efforts would have served almost as well). I don’t know whether this is true, though.
Even if that were true, it might not mean anything. Why might a country not invest in Y2K prevention? Well, maybe it’s not a problem there! You don’t decide on investments at random, after all.
And this is clearly a case where (1) USA/Western investments would save a lot of other countries the need to invest in Y2K prevention because that is where most software comes from; and (2) those countries might not have the problem in the first place because they computerized later (and skipped the phase of hardwiring in dangerously short data types), or hadn’t computerized at all. (“We don’t have a Y2K problem because we don’t have any computers” doesn’t imply Y2K prevention is a bad idea.)
Indeed, I had similar thoughts but didn’t type them up.
In any case, I suspect it was a situation in which a cost-benefit analysis would show that high risk-aversion (and hence probable over-reaction, to avoid under-reaction) was justified.
The consensus view is that such countries were shielded by those who did invest in mitigation.
I’ve written more about Y2K at https://www.lesswrong.com/posts/zvQdgfFEDFQQhDDuS/y2k-successful-practice-for-ai-alignment
Yep. Put another way: With Y2K, the higher-quality “predictions of doom” were sufficiently specific that they were also a road map to preventing the doom.
(If nothing else, you could frequently test a system by running the system clock ahead to 1999-12-31 23:59:59 and waiting a moment to see if anything caught fire.)
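As a concrete illustration of the kind of failure that clock-rollover test would catch, here is a minimal sketch (my own, not from any commenter) of a classic two-digit-year bug and the roll-the-clock-forward check described above:

```python
# Minimal sketch of a Y2K-style bug: legacy code that stores only the last
# two digits of the year, exposed by rolling the clock past 1999-12-31.
import datetime

def two_digit_age(birth_year_yy: int, now: datetime.datetime) -> int:
    """Buggy legacy-style logic: does arithmetic on two-digit years."""
    now_yy = now.year % 100
    return now_yy - birth_year_yy  # goes negative once now_yy wraps to 00

# Simulate advancing the system clock to just before and just after midnight:
before = datetime.datetime(1999, 12, 31, 23, 59, 59)
after = before + datetime.timedelta(seconds=1)

print(two_digit_age(70, before))  # 29  -- looks fine in 1999
print(two_digit_age(70, after))   # -70 -- the "catching fire" the test watches for
```

The function and scenario are hypothetical; the point is only that a sufficiently specific prediction (“two-digit year arithmetic breaks at the rollover”) translates directly into a mechanical test.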