Stupidity and Dishonesty Explain Each Other Away

The explaining-away effect (also known as collider bias, or Berkson’s paradox) is a statistical phenomenon in which independent causes with a common effect become anticorrelated when conditioning on the effect.

In the language of d-separation, if you have a causal graph X → Z ← Y, then conditioning on Z unblocks the path between X and Y.
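A quick way to see the effect is to simulate it. The sketch below (with made-up probabilities chosen for illustration, not taken from anything above) draws two independent binary causes X and Y, sets Z = X or Y, and checks that X and Y are independent unconditionally but anticorrelated once we condition on the collider Z:

```python
import random

random.seed(0)

# X and Y: independent fair coin flips; Z = X or Y is their common effect.
n = 100_000
samples = [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]

# Unconditionally, learning Y tells you nothing about X.
p_x = sum(x for x, y in samples) / n
p_x_given_y = sum(x for x, y in samples if y) / sum(y for _, y in samples)

# Conditioning on Z = X or Y "unblocks" the path: among samples where Z
# is true, learning Y makes X less likely (2/3 drops to 1/2).
z_true = [(x, y) for x, y in samples if x or y]
p_x_given_z = sum(x for x, y in z_true) / len(z_true)
p_x_given_z_and_y = sum(x for x, y in z_true if y) / sum(y for _, y in z_true)
```

Analytically, P(X | Z) = 0.5/0.75 = 2/3, while P(X | Z, Y) = P(X | Y) = 1/2, so the simulation should show the first estimate well above the second.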

Daphne Koller and Nir Friedman give an example of reasoning about disease etiology: if you have a sore throat and cough, and aren’t sure whether you have the flu or mono, you should be relieved to find out it’s “just” the flu, because that decreases the probability that you have mono. You could be infected with both the influenza and mononucleosis viruses, but if the flu is completely sufficient to explain your symptoms, there’s no additional reason to expect mono.[1]

Judea Pearl gives an example of reasoning about a burglar alarm: if your neighbor calls you at your dayjob to tell you that your burglar alarm went off, it could be because of a burglary, or it could have been a false-positive due to a small earthquake. There could have been both an earthquake and a burglary, but if you get news of an earthquake, you’ll stop worrying so much that your stuff got stolen, because the earthquake alone was sufficient to explain the alarm.[2]
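The alarm example can be made quantitative with a toy model. All the numbers below (the priors, the alarm reliabilities, and the noisy-OR combination rule) are invented for illustration and are not Pearl’s:

```python
# Assumed priors for the two independent causes.
p_burglary = 0.01
p_earthquake = 0.02

def p_alarm(b, e):
    # Noisy-OR: the alarm fires with probability 0.95 given a burglary,
    # 0.3 given an earthquake, and has a 0.001 false-positive leak.
    p_silent = 1 - 0.001
    if b:
        p_silent *= 1 - 0.95
    if e:
        p_silent *= 1 - 0.3
    return 1 - p_silent

def posterior_burglary(e_known=None):
    # Exact enumeration of P(burglary | alarm, [earthquake]).
    num = den = 0.0
    for b in (True, False):
        for e in (True, False):
            if e_known is not None and e != e_known:
                continue
            w = ((p_burglary if b else 1 - p_burglary)
                 * (p_earthquake if e else 1 - p_earthquake)
                 * p_alarm(b, e))
            den += w
            if b:
                num += w
    return num / den

p_given_alarm = posterior_burglary()            # alarm alone
p_given_alarm_and_quake = posterior_burglary(True)  # alarm plus earthquake news
```

With these (assumed) numbers, the alarm alone makes a burglary more likely than not, but the news of an earthquake explains the alarm away and the posterior collapses back to a few percent.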

Here’s another example: if someone you’re arguing with is wrong, it could be because they’re just too stupid to get the right answer, or it could be because they’re being dishonest—or some combination of the two, but more of one means that less of the other is required to explain the observation of the person being wrong. As a causal graph—[3]

stupidity → wrongness ← dishonesty

Notably, the decomposition still works whether you count subconscious motivated reasoning as “stupidity” or “dishonesty”. (Needless to say, it’s also symmetrical across persons—if you’re wrong, it could be because you’re stupid or are being dishonest.)
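One way to picture the decomposition is a toy model of my own (not implied by the graph itself): treat stupidity and dishonesty as independent continuous traits, and call someone wrong when the two together cross a threshold. Among the wrong, the traits come out anticorrelated:

```python
import random

random.seed(0)

# Assumed model: stupidity and dishonesty are independent uniform(0, 1)
# traits; someone is "wrong" when their sum exceeds 1.
n = 100_000
people = [(random.random(), random.random()) for _ in range(n)]
wrong = [(s, d) for s, d in people if s + d > 1.0]

def corr(pairs):
    # Pearson correlation of a list of (x, y) pairs.
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
    vx = sum((x - mx) ** 2 for x in xs) / len(xs)
    vy = sum((y - my) ** 2 for y in ys) / len(ys)
    return cov / (vx * vy) ** 0.5

corr_all = corr(people)   # near zero: independent in the population
corr_wrong = corr(wrong)  # strongly negative: anticorrelated among the wrong
```

In the population, the correlation is near zero; conditioned on wrongness, it is strongly negative—knowing someone wrong is very stupid makes them less likely to also be very dishonest, and vice versa.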


  1. Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques, §3.2.1.2 “Reasoning Patterns”

  2. Judea Pearl, Probabilistic Reasoning in Intelligent Systems, §2.2.4 “Multiple Causes and ‘Explaining Away’”

  3. Thanks to Daniel Kumor for example code for causal graphs.