How can we know that these examples are real?
Roman Malov links to a Hank Green video with much more mundane examples, like rumble strips on highways. The falling automobile death rate is evidence that car interventions are doing something right, if not proof of a particular example like rumble strips. But how do I know that the Y2K problem was not overblown? If a few systems had had big disasters, I could estimate how much all the other systems had accomplished by avoiding them. But if no one had disasters, I have to consider the possibility that the problem was overblown and the effort expended on the fix wasteful.
That one’s on me for the phrasing.
What I meant to point at was more like the “ozone hole wasn’t real” kind of reaction, where by saying “overblown” people imply that the problem was completely made up. I’m not trying to make a point about whether the response scale matched the risk, and I don’t really have the expertise to judge that.
Asking how exactly the counterfactual world would have looked is absolutely reasonable, and honestly it’s a much harder question than the one I was trying to talk about. My focus was only that the full scale of possible consequences is counterfactual to us and therefore invisible.
Btw I believe that in principle it can be estimated. Major industries currently have risk assessment systems in place; nothing stops us from using them to analyze past near-misses. And regarding Y2K specifically: we actually do have examples of software failures cascading through infrastructure—I was thinking specifically about the 2024 CrowdStrike thing while writing it.
I wasn’t out of school back then, but I can imagine the board meeting went something like this: Boss: “It’s going to cost $10 million to fix this Y2K bug? Can you verify our systems crash by running a simulation?” Engineer: “You mean manually changing the computer clock and seeing if our code still works? If so, we already did that, and our code failed.” Boss: “Thanks! You’re approved.”
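The kind of failure that clock-change test would surface can be sketched in a few lines. This is a hypothetical toy (the function `years_since` is my invention, not real legacy code), assuming the classic pattern of storing only the last two digits of the year:

```python
from datetime import date

def years_since(start_yy: int, today: date) -> int:
    """Naive elapsed-years calculation using a two-digit year,
    in the style of pre-Y2K record-keeping code."""
    return (today.year % 100) - start_yy

# A record created in 1999, stored as the two-digit year 99:
print(years_since(99, date(1999, 12, 31)))  # 0 — looks fine
print(years_since(99, date(2000, 1, 1)))    # -99 — the Y2K bug
```

Running the same code with the clock set one day later flips the answer from 0 to −99, which is exactly the kind of before/after evidence the engineer in the dialogue could show the boss.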