Yeah. It never would have occurred to me to file this under “conjunction fallacy” but you’re right—the conjunction fallacy seems to come into play when you try to rank probabilities but presumably melts away somewhat when you try to put hard numbers to them.
So there’s a two-stage process here. First the ranking stage: “what are the most likely disasters you can think of?”, which is flawed. Then the quantitative stage, which is more reliable. Combine them and you get flawed * reliable = flawed.
This could also be viewed in terms of the availability heuristic: the risk assessment team writes down the most available risks (which turn out to be detailed scenarios) and then stops when it feels it has done enough work. It can also obviously be viewed as a special case of the Inside View.
A possible workaround would be to list a bunch of risks and estimate the probability of each. If the highest is 0.002, then ask “can we think of any (possibly less detailed) risk scenarios with a probability similar to or greater than 0.002?” Repeat this process until the answer is no.
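The workaround above is just a loop with a rising threshold. Here is a minimal sketch of it in Python; the function name, the callback interface, and all scenario names and probabilities are invented for illustration:

```python
# Toy sketch of the iterative risk-elicitation loop described above.
# All scenario names and probabilities below are made up.

def elicit_risks(initial_risks, propose_more):
    """Keep asking for scenarios at least as likely as the current maximum.

    initial_risks: dict mapping scenario name -> estimated probability
    propose_more:  callable(threshold) -> dict of new scenarios, or an
                   empty dict when nothing at or above the threshold
                   comes to mind (the stopping condition)
    """
    risks = dict(initial_risks)
    while True:
        threshold = max(risks.values())
        new = propose_more(threshold)
        if not new:
            return risks
        risks.update(new)

# Example: the team starts from one detailed scenario, is prompted for
# broader ones above the running threshold, then runs out of ideas.
rounds = iter([
    {"broad: any contractor fraud": 0.004},  # found on the first prompt
    {},                                      # nothing more, so we stop
])
result = elicit_risks({"detailed scenario": 0.002}, lambda t: next(rounds))
print(result)
```

The point of writing it out is that the loop’s output is only as good as `propose_more`, which is exactly the flawed availability-driven step.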
The problem is that this method, by its very nature, works only on highly detailed scenarios.
I think the method’s domain needs to be refined. The method can sometimes demonstrate a deficiency in design; it pretty much can’t demonstrate the absence of deficiency. It is a very strong result when something fails PRA (probabilistic risk assessment), and a very weak result when something passes.
Thus the method absolutely can’t be used the way the NRC uses it.
On top of that there may be a good-effort == good-results fallacy happening as well. It just doesn’t fit into people’s heads that someone would be doing some ‘annoying math’ entirely meaninglessly. And educational institutions, as well as society at large, tend to reward effort rather than results. People don’t expect that lower apparent effort can at times produce better results.
With regard to the list of scenarios, it is very problematic. Say I propose: what is the probability of a contractor using sub-standard concrete? You can’t do math on it until you break it down into the contractor’s family starving, the contractor needing money, the contractor being evil, or the like. At which point you pick a zillion values out of thin air and do some math on them, and it’s signalling time: if you bothered to do ‘annoying math’, you must truly believe your values mean something.
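The failure mode is easy to make concrete. Below, every probability is deliberately picked out of thin air (as in the scenario above); multiplying them still yields a reassuringly precise-looking number:

```python
# Decompose "contractor uses sub-standard concrete" into sub-events,
# assign each a number picked out of thin air, and multiply.
# Every value below is invented purely for illustration.

p_needs_money = 0.05               # thin air
p_cuts_corners_if_broke = 0.10     # thin air
p_not_caught_by_inspection = 0.02  # thin air

p_substandard = (p_needs_money
                 * p_cuts_corners_if_broke
                 * p_not_caught_by_inspection)
print(f"{p_substandard:.6f}")  # a six-decimal figure built entirely from guesses
```

The arithmetic is exact; the inputs are not, and no amount of multiplication fixes that.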
This actually seems to be widespread in the nuclear ‘community’. On the nuclear section of physicsforums there were a couple of genuine nuclear engineering / safety people who would do all sorts of calculations about Fukushima, to much cheering from the innumerate crowd. For example, calculating that there is enough of isotope A in the spent fuel to explain its concentration in the spent fuel pool water. Never mind that there is also a lot of isotope B in the spent fuel: if you break open enough old fuel tubes to get that much isotope A out, you get far more (~10 000x) isotope B into the water than was observed, so you need a ~10 000x refinement process to produce the observed ratio, which was the same as at other reactors. Not only do you need to postulate some unspecified refinement process (which might happen in principle, since those were iodine and cesium and they have different chemistry, but which didn’t happen beyond a factor of 5 even through the food chain of the fish etc.), you need it to magically compensate for the difference in source age and match the ratio seen everywhere else, which is just plain improbable by simple-as-day statistics. It is blindingly obvious that the contamination probably just got pumped in with the contaminated cooling water, but you can’t really stick a lot of mathematics onto that hypothesis and get a cheer from the innumerate crowd, while you can stick a lot of mathematics onto a calculation of how much isotope A is left after n days in spent fuel.
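A back-of-envelope version of the isotope argument, using only the round numbers from the comment itself (the ~10 000x and factor-of-5 figures come from the text, not from any measured data):

```python
# Toy arithmetic for the isotope-ratio objection above.
# Numbers are the comment's order-of-magnitude figures, not real data.

ratio_in_fuel = 10_000   # isotope B per unit of isotope A in old spent fuel (~10 000x)
ratio_in_water = 1       # observed ratio in the pool water, same as at other reactors
required_refinement = ratio_in_fuel / ratio_in_water

observed_chemical_separation = 5  # largest separation factor actually seen (food chain etc.)

gap = required_refinement / observed_chemical_separation
print(gap)  # factor left unexplained by any known chemical separation
```

Even granting the largest chemical separation mentioned, a three-orders-of-magnitude gap remains, which is the statistical point being made.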
Likewise with the risk estimates: you can’t stick a lot of mathematics onto something reasonable, but you can very easily make nonsense with a lot of calculations on values picked out of thin air, and thereby signal your confidence in the assumptions. Surely, if you bothered to do something complicated with the assumptions, they can’t be garbage.
tl;dr: in some fields mathematics seems to be used for signalling purposes. When you make up a value out of thin air and present it, that’s, like, just your opinion, man. When you make up a dozen values out of thin air and crunch numbers to get some final value, that is immediately more trusted.
I think I agree with everything here. Would it be fair to summarize this as:
- Proposals such as mine won’t do any good, because this is fundamentally a cultural problem, not a methodological one.
- People know what “math” looks like, but they don’t understand Bayes (as in your isotope example).
Yeah… well, with math in general, you can quite effectively mislead people by computing some single out-of-context value which grossly contradicts their fallacious reasoning. The fallacious reasoning is then still present and still going strong, and something else gives way to explain that number.