Knowing About Biases Can Hurt People

Once upon a time I tried to tell my mother about the problem of expert calibration, saying: “So when an expert says they’re 99% confident, it only happens about 70% of the time.” Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: “Of course, you’ve got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—”

And my mother said: “Are you kidding? This is great! I’m going to use it all the time!”

Taber and Lodge’s “Motivated Skepticism in the Evaluation of Political Beliefs” describes the confirmation of six predictions:

1. Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.

2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.

3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.

4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.

5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.

6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

If you’re irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.
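
(A standard way to see why, sketched here with notation that is mine rather than the essay’s: write U for utility, a for the available actions, and e for the evidence the agent might observe. A Bayesian can always ignore e and keep the action a* that was best under the prior, so the option to update can only help in expectation:

$$\mathbb{E}_{e}\!\left[\max_{a}\,\mathbb{E}[U(a)\mid e]\right]\;\ge\;\mathbb{E}_{e}\!\left[\mathbb{E}[U(a^{*})\mid e]\right]\;=\;\max_{a}\,\mathbb{E}[U(a)].$$

The left side is expected utility with the information, the right side without it, so the expected value of the information is never negative. The inequality depends on the agent actually taking the maximum after updating, which is precisely the step humans flub.)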

I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.

You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you’d be helping them—making them more effective rationalists—if you just told them about a list of classic biases?

I recall someone who learned about the calibration/overconfidence problem. Soon after he said: “Well, you can’t trust experts; they’re wrong so often—as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—” and went off into this whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.

I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn’t like, he accused me of being a sophisticated arguer. He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.

Even the notion of a “sophisticated arguer” can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don’t like.

I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.

I wanted to get my audience interested in the subject. A simple description of the conjunction fallacy and representativeness would have sufficed for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology’s sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn’t hear them at all.

Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!