Conjunction fallacy and probabilistic risk assessment.

Summary:

There is a very dangerous way in which the conjunction fallacy can be exploited. One can present you with two to five detailed, very plausible failure scenarios whose probabilities are shown to be very low, using solid mathematics; if you suffer from the conjunction fallacy, this will look as if it implies the design is highly safe, while in fact it is the detailedness of each scenario that makes its probability so low.

Even if you realize that there may be many other scenarios that were not presented to you, you are still left with an incredibly low probability number on a highly plausible (“most likely”) failure scenario, which you, being unaware of the power of conjunction, attribute to the safety of the design.

The conjunction fallacy can be viewed as a poor understanding of the relation between plausibility and probability. Adding extra details doesn’t make a scenario seem less plausible (it can even increase its plausibility), but it does mathematically make the scenario less probable.
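A minimal Monte Carlo sketch of this rule, using made-up independent events (the 10% and 20% figures are illustrative assumptions, not estimates), shows that a conjunction can never be more probable than either of its parts:

```python
import random

random.seed(0)
N = 100_000

# Two made-up events: A ("invasion") fires 10% of the time,
# B ("diplomatic suspension") fires 20% of the time, independently.
trials = [(random.random() < 0.10, random.random() < 0.20) for _ in range(N)]

p_a = sum(a for a, b in trials) / N
p_ab = sum(a and b for a, b in trials) / N

# The conjunction "A and B" is a subset of "A", so its observed frequency
# can only be lower, no matter how plausible the combined story sounds.
print(f"P(A) ≈ {p_a:.3f}, P(A and B) ≈ {p_ab:.3f}")
assert p_ab <= p_a
```

However the details are filled in, the combined scenario’s frequency is bounded above by its least probable component.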

Details:

What happens if a risk assessment is being prepared for (and possibly by) sufferers of the conjunction fallacy?

Detailed example scenarios will be chosen, such as:

A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.

Then, as a risk estimate, you multiply the probability of a Russian invasion of Poland by the probability of it resulting in a suspension of diplomatic relations between the US and the SU, and multiply by the probability of it happening specifically in 1983. The resulting probability can be extremely small for a sufficiently detailed scenario (you can add the Polish prime minister being assassinated if the probability is still too high for comfort).
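The arithmetic above can be sketched directly; the component probabilities below are made-up illustrative numbers, not actual estimates:

```python
# Made-up component probabilities for the scenario above.
p_invasion = 0.10      # P(Russian invasion of Poland)
p_suspension = 0.20    # P(diplomatic suspension | invasion)
p_in_1983 = 0.30       # P(it happens specifically in 1983 | the above)

scenario = 1.0
for p in (p_invasion, p_suspension, p_in_1983):
    scenario *= p      # each conjoined detail can only shrink the product

print(f"{scenario:.4f}")          # three details already give 0.0060

# Tack on one more detail (the assassination, made-up P = 0.05)
# and the scenario looks twenty times "safer" still:
print(f"{scenario * 0.05:.5f}")   # 0.00030
```

The product falls off geometrically with each added detail, so the final number measures mainly how long the story is, not how safe the design is.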

To a sufferer of the conjunction fallacy it looks as if a very plausible, ‘most likely’ scenario has been shown to be highly improbable, and thus that the risks are low. The sufferer of the conjunction fallacy does not realize that this probability would be very low even in an unsafe design.

It seems to me that risk assessment is routinely done in such a fashion. Consider the Space Shuttle’s reliability, or the NRC cost-benefit analyses for the spent fuel pools, which go as low as one in 45 million years for the most severe scenario. (The same seems to happen in all of the NRC resolutions, to varying extent; feel free to dig through.)

Those reports looked outright insane to me: a very small number of highly detailed scenarios are shown to be extremely improbable, and somehow this is taken to imply safety. How in the world can anyone take seriously a one-in-45-million-years scenario? That’s near the point where a meteorite impact leads to social disorder that leads to the fuel pool running dry!

I couldn’t understand that. Detailed scenarios are inherently unlikely to happen whether the design is safe or not; their unlikelihood is a property of their detailedness, not of the safety or unsafety of the design.

Until it clicked that if you read those through the goggles of the conjunction fallacy, it looks as if the most likely failure modes have been shown to be incredibly improbable. Previously (before reading LessWrong) I didn’t really understand how exactly anyone buys into this sort of stuff, and could find no way to even argue. You can’t quite talk someone out of something when you don’t understand how they believe in it. You say “there may be many scenarios that were not considered”, and they know that already.

This is one seriously dangerous way in which the conjunction fallacy can be exploited. It seems to be rather common in risk analysis.

Note: I do think that the conjunction fallacy is responsible for much of the credibility given to such risk estimates. No one seems to seriously believe that the NRC always covers all the possible scenarios, yet at the same time there seems to be a significant misunderstanding of the magnitude of the problem: the NRC risk estimates are taken as being within the ballpark of the correct value in the cost-benefit analysis for the safety features. For nuclear power, widespread promotion of the results of such analyses leads to a massive loss of public trust once an accident happens, and consequently to a narrowing of the available options and a transition to less desirable energy sources (coal in particular), which is in itself a massive disutility.

[The other issue in the linked NRC study is, of course, that the cost-benefit analysis used internal probability when it should have used external probability.]

edit: minor clarifying

edits: improved the abstract and clarified the article further based on the comments.