Hindsight bias

Hindsight bias is when people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge. Hindsight bias is sometimes called the I-knew-it-all-along effect.

Fischhoff and Beyth (1975) presented students with historical accounts of unfamiliar incidents, such as a conflict between the Gurkhas and the British in 1814. Given the account as background knowledge, five groups of students were asked what they would have predicted as the probability for each of four outcomes: British victory, Gurkha victory, stalemate with a peace settlement, or stalemate with no peace settlement. Four experimental groups were respectively told that these four outcomes were the historical outcome. The fifth, control group was not told any historical outcome. In every case, a group told an outcome assigned substantially higher probability to that outcome than did any other group or the control group.

Hindsight bias matters in legal cases, where a judge or jury must determine whether a defendant was legally negligent in failing to foresee a hazard (Sanchirico 2003). In an experiment based on an actual legal case, Kamin and Rachlinski (1995) asked two groups to estimate the probability of flood damage caused by blockage of a city-owned drawbridge. The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.

Viewing history through the lens of hindsight, we vastly underestimate the cost of effective safety precautions. In 1986, the Challenger exploded for reasons traced to an O-ring losing flexibility at low temperature. There were warning signs of a problem with the O-rings. But preventing the Challenger disaster would have required, not attending to the problem with the O-rings, but attending to every warning sign which seemed as severe as the O-ring problem, without benefit of hindsight. It could have been done, but it would have required a general policy much more expensive than just fixing the O-rings.

Shortly after September 11th, 2001, I thought to myself: now someone will turn up minor intelligence warnings of something-or-other, and then the hindsight will begin. Yes, I'm sure they had some minor warnings of an al Qaeda plot, but they probably also had minor warnings of mafia activity, nuclear material for sale, and an invasion from Mars.

Because we don't see the cost of a general policy, we learn overly specific lessons. After September 11th, the FAA prohibited box-cutters on airplanes—as if the problem had been the failure to take this particular "obvious" precaution. We don't learn the general lesson: the cost of effective caution is very high because you must attend to problems that are not as obvious now as past problems seem in hindsight.

The test of a model is how much probability it assigns to the observed outcome. Hindsight bias systematically distorts this test; we think our model assigned much more probability than it actually did. Instructing the jury doesn't help. You have to write down your predictions in advance. Or as Fischhoff (1982) put it:

When we attempt to understand past events, we implicitly test the hypotheses or rules we use both to interpret and to anticipate the world around us. If, in hindsight, we systematically underestimate the surprises that the past held and holds for us, we are subjecting those hypotheses to inordinately weak tests and, presumably, finding little reason to change them.
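The "test of a model" above can be made concrete with a scoring rule. A minimal Python sketch using the log score, which rewards a model by the log of the probability it gave to the outcome that actually occurred; the 20% and 60% figures are hypothetical illustrations, not numbers from the studies cited:

```python
import math

def log_score(p_assigned_to_outcome: float) -> float:
    """Log score in bits: log2 of the probability the model
    assigned to the outcome that actually occurred.
    Higher (closer to 0) is better; 0 means certainty."""
    return math.log2(p_assigned_to_outcome)

# Hypothetical case: in advance, we gave the observed outcome
# 20% probability; in hindsight, we "remember" having given it 60%.
foresight_score = log_score(0.20)   # what the model actually earned
hindsight_score = log_score(0.60)   # what hindsight claims it earned

# Hindsight bias inflates the apparent score by this many bits,
# making a weak test look like a strong one.
inflation_bits = hindsight_score - foresight_score
```

This is why writing predictions down in advance matters: only the recorded foresight probability, not the remembered one, gives the model the test it deserves.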

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "Hindsight Devalues Science"

Previous post: "Conservation of Expected Evidence"

Fischhoff, B. 1982. For those condemned to study the past: Heuristics and biases in hindsight. In Kahneman et al. 1982: 332–351.

Fischhoff, B., and Beyth, R. 1975. I knew it would happen: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13: 1–16.

Kamin, K., and Rachlinski, J. 1995. Ex Post ≠ Ex Ante: Determining Liability in Hindsight. Law and Human Behavior, 19(1): 89–104.

Sanchirico, C. 2003. Finding Error. Michigan State Law Review: 1189.