I Was Not Almost Wrong But I Was Almost Right: Close-Call Counterfactuals and Bias

Abstract: “Close-call counterfactuals”, claims of what could have almost happened but didn’t, can be used to either defend a belief or attack it. People have a tendency to reject counterfactuals as improbable when those counterfactuals threaten a belief (the “I was not almost wrong” defense), but to embrace counterfactuals that support a belief (the “I was almost right” defense). This behavior is strongest in people who score high on a test for need for closure and simplicity. Exploring counterfactual worlds can be used to reduce overconfidence, but it can also lead to logically incoherent answers, especially in people who score low on a test for need for closure and simplicity.

“I was not almost wrong”

Dr. Zany, the Nefarious Scientist, has a theory which he intends to use to achieve his goal of world domination. “As you know, I have long been a student of human nature”, he tells his assistant, AS-01. (Dr. Zany has always wanted to have an intelligent robot as his assistant. Unfortunately, for some reason all the robots he has built have only been interested in eradicating the color blue from the universe. And blue is his favorite color. So for now, he has resorted to just hiring a human assistant and referring to her with a robot-like name.)

“During my studies, I have discovered the following. Whenever my archnemesis, Captain Anvil, shows up at a scene, the media will very quickly show up to make a report about it, and they prefer to send the report live. While this is going on, the whole city – including the police forces! – will be captivated by the report about Captain Anvil, and neglect to pay attention to anything else. This happened once, and a bank was robbed on the other side of the city while nobody was paying any attention. Thus, I know how to commit the perfect crime – I simply need to create a diversion that attracts Captain Anvil, and then nobody will notice me. History tells us that this is the inevitable outcome of Captain Anvil showing up!”

But to Dr. Zany’s annoyance, AS-01 is always doubting him. Dr. Zany has often considered turning her into a brain-in-a-vat as punishment, but she makes the best tuna sandwiches Dr. Zany has ever tasted. He’s forced to tolerate her impudence, or he’ll lose that culinary pleasure.

“But Dr. Zany”, AS-01 says. “Suppose that some TV reporter had happened to be on her way to where Captain Anvil was, and on her route she saw the bank robbery. Then part of the media attention would have been diverted, and the police would have heard about the robbery. That might happen to you, too!”

Dr. Zany’s favorite belief is now being threatened. It might not be inevitable that Captain Anvil showing up will actually let criminals elsewhere act unhindered! AS-01 has presented a plausible-sounding counterfactual, “if a TV reporter had seen the robbery, then the city’s attention would have been diverted to the other crime scene”. Although the historical record does not show Dr. Zany’s theory to be wrong, the counterfactual suggests that he might be almost wrong.

There are now three tactics that Dr. Zany can use to defend his belief (warrantedly or not):

1. Challenge the mutability of the antecedent. Since AS-01’s counterfactual is of the form “if A, then B”, Dr. Zany could question the plausibility of A.

“Baloney!” exclaims Dr. Zany. “No TV reporter could ever have wandered past, let alone seen the robbery!”

That seems a little hard to believe, however.

2. Challenge the causal principles linking the antecedent to the consequent. Dr. Zany is not logically required to accept the “then” in “if A, then B”. There are always unstated background assumptions that he can question.

“Humbug!” shouts Dr. Zany. “Yes, a reporter could have seen the robbery and alerted the media, but given the choice between covering such a minor incident and continuing to report on Captain Anvil, they would not have cared about the bank robbery!”

3. Concede the counterfactual, but insist that it does not matter for the overall theory.

“Inconceivable!” yelps Dr. Zany. “Even if the city’s attention had been diverted to the robbery, the robbers would have escaped by then! So Captain Anvil’s presence would have allowed them to succeed regardless!”


Empirical work suggests that it’s not only Dr. Zany who wants to stick to his beliefs. Let us for a moment turn our attention away from supervillains, and look at professional historians and analysts of world politics. In order to make sense of something as complicated as world history, experts resort to various simplifying strategies. For instance, one explanatory schema is called neorealist balancing. Neorealist balancing claims that “when one state threatens to become too powerful, other states coalesce against it, thereby preserving the balance of power”. Among other things, it implies that Hitler’s failure was predetermined by a fundamental law of world politics.

Tetlock (1998, 1999, 2001) surveyed a number of experts on history and international affairs. He surveyed the experts on their commitment to such theories, and then posed them counterfactuals that conflicted with some of those theories. For instance, counterfactuals that conflicted with neorealist balancing were “If Goering had continued to concentrate Luftwaffe attacks on British airbases and radar stations, Germany would have won the Battle of Britain” and “If the German military had played more effectively on the widespread resentment of local populations toward the Stalinist regime, the Soviet Union would have collapsed”. The experts were then asked to indicate the extent to which they agreed with the antecedent, the causal link, and the claim that the counterfactual being true would have substantially changed world history.

As might have been expected, experts who subscribed to a certain theory were skeptical about counterfactuals threatening the theory, and employed all three defenses more than experts who were less committed. Denying the possibility of the antecedent was done the least frequently, while questioning the overall impact of the consequence was the most common defense.

By itself, this might not be a sign of bias – the experts might have been skeptical of a counterfactual because they had an irrational commitment to the theory, but they might also have acquired a rational commitment to the theory because they were skeptical of counterfactuals challenging it. Maybe neorealist balancing is true, and the experts subscribing to it are right to defend it. What’s more telling is that Tetlock also measured each expert’s need for closure. It turned out that if an expert – like Dr. Zany – had a high need for closure, then they were also more likely to employ defenses questioning the validity of a counterfactual.

Theoretically, high need-for-closure individuals are characterized by two tendencies: urgency which inclines them to ‘seize’ quickly on readily available explanations and to dismiss alternatives and permanence which inclines them to ‘freeze’ on these explanations and persist with them even in the face of formidable counterevidence. In the current context, high need-for-closure individuals were hypothesized to prefer simple explanations that portray the past as inevitable, to defend these explanations tenaciously when confronted by dissonant close-call counterfactuals that imply events could have unfolded otherwise, to express confidence in conditional forecasts that extend these explanations into the future, and to defend disconfirmed forecasts from refutation by invoking second-order counterfactuals that imply that the predicted events almost happened. (Tetlock, 1998)

If two people draw different conclusions from the same information, then at least one of them is wrong. Tetlock is careful to note that the data doesn’t reveal whether it’s the people with a high or a low need for closure who are closer to the truth, but we can probably presume that at least some of them were being exceedingly defensive.

This gives us reason to be worried. If some past occurrence seems to fit perfectly into our pet theory, have we considered the possibility that we might be almost wrong? And if we have, are we exhibiting an excessive need for closure by rushing to its defense, or are we being excessively flexible by unnecessarily admitting that something might have gone differently? We should only admit to being almost wrong if we really were almost wrong, after all. Is the cognitive style we happen to have the one that’s most correlated with getting the right answers?

“I was almost right.”

Having defended his theory against AS-01’s criticism, Dr. Zany puts the theory into use by starting a fire in a tar factory, diverting Captain Anvil. While the media is preoccupied with reporting the story, Dr. Zany tries to steal the bridge connecting Example City to the continent. Unfortunately, a City Police patrol boat happens to see this, alerting the police forces (as well as Captain Anvil) to the site. Dr. Zany is forced to withdraw.

“Damn that unanticipated patrol boat!”, Dr. Zany swears. “If only it had not appeared, my plan would have worked perfectly!” AS-01 wisely says nothing, and avoids being turned into a brain-in-a-vat.


Tetlock (1998, 1999) surveyed a number of experts and asked them to make predictions about world politics. Afterwards, when it was clear whether or not the predictions had turned out to be true, he surveyed them again. It turned out that like Dr. Zany, most of the mistaken experts had not seriously updated their beliefs:

Not surprisingly, experts who got it right credited their accuracy to their sound reading of the ‘basic forces’ at play in the situation. Across issue domains they assigned average ratings between 6.5 and 7.6 on a 9-point scale where 9 indicates maximum confidence. Perhaps more surprisingly, experts who got it wrong were almost as likely to believe that their reading of the political situation was fundamentally sound. They assigned average ratings from 6.3 to 7.1, across domains. (Tetlock, 1998)

Many of the experts defended their reading of the situation by saying that they were “almost right”. For instance, experts who predicted in 1988 that the Communist Party of the Soviet Union would grow increasingly authoritarian during the next five years were prone to claiming that the hardliner coup of 1991 had almost succeeded, and if it had, their prediction would have come true. Similarly, observers of South Africa who in 1988-1989 expected white minority rule to continue or to become increasingly oppressive were likely to believe that were it not for two exceptional individuals – de Klerk and Mandela – in key leadership roles, South Africa could easily have gone the other way.

In total, Tetlock (1999) identified five logically defensible strategies for defending one’s forecasts, all of which were employed by at least some of the experts. Again, it was the experts who scored the highest on need for closure who tended to employ such defenses the most:

  1. The antecedent (the A in the “if A, then B”) was never adequately satisfied. Experts might insist “if we had properly implemented deterrence or reassurance, we could have averted war” or “if real shock therapy had been practiced, we could have averted the nasty bout of hyperinflation”.

  2. Although the specified antecedent was satisfied, something unexpected happened, severing the normal link of cause and effect. Experts might declare that rapid privatization of state industries would have led to the predicted surge in economic growth, but only if the government had pursued prudent monetary policies.

  3. Although the predicted outcome did not occur, it “almost occurred” and would have, if not for some inherently unpredictable outside shock.

  4. Although the predicted outcome has not yet occurred, it eventually will and we just need to be more patient (hardline communists may yet prevail in Moscow, the EU might still fall apart).

  5. Although the relevant conditions were satisfied and the predicted outcome never came close to occurring and never will, this should not be held against the framework that inspired the forecast. Forecasts are inherently unreliable and politics is hard to predict: just because the framework failed once doesn’t mean that it’s wrong.

Again, Tetlock is careful to note that although it’s tempting to dismiss all such maneuvering as “transparently defensive post hocery”, it would be wrong to automatically interpret it as bias. Each defense is a potentially valid objection, and might have been the right one to make, in some cases.

But there are also signs of bias. Tetlock (1999) makes a number of observations from his data, noting – among other things – that the stronger the original confidence in a claim, the more likely an expert is to employ various defenses. That would suggest that big threats to an expert’s claims of expertise activate many defenses. He also notes that the experts who’d made failed predictions and employed strong defenses tended not to update their confidence, while the experts who’d made failed predictions but didn’t employ strong defenses did update.

Again, some of the experts were probably right to defend themselves, but some of them were probably biased and only trying to protect their reputations. We should be alert when we catch ourselves using one of those techniques to defend our own predictions.

Exploring counterfactual worlds: a possible debiasing technique.

“Although my plan failed this time, I was almost right! The next time, I’ll be prepared for any patrol boats!”, Dr. Zany mutters to himself, back in the safety of his laboratory.

“Yes, it was an unlikely coincidence indeed”, AS-01 agrees. “Say, I know that such coincidences are terribly unlikely, but I started wondering – what other coincidences might have caused your plan to fail? Are there any others that we should take into account before the next try?”

“Hmm....”, Dr. Zany responds, thoughtfully.


Tetlock & Lebow (2001) found that experts became less convinced of the inevitability of a scenario when they were explicitly instructed to consider various events that might have led to a different outcome. In two studies, experts were told to consider the Cuban Missile Crisis and, for each day of the crisis, estimate the subjective probability that the crisis would end either peacefully or violently. When experts were told to consider various provided counterfactuals suggesting a different outcome, they thought that a violent outcome remained a possibility for longer than the experts who weren’t given such counterfactuals to consider. The same happened when the experts weren’t given ready-made counterfactuals, but were told to generate alternative scenarios of their own, at an increasingly fine resolution.

The other group (n = 34) was asked to consider (1) how the set of more violent endings of the Cuban missile crisis could be disaggregated into subsets in which violence remained localized or spread outside the Caribbean, (2) in turn differentiated into subsets in which violence claimed fewer or more than 100 casualties, and (3) for the higher casualty scenario, still more differentiated into a conflict either limited to conventional weaponry or extending to nuclear. (Tetlock & Lebow, 2001)
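
To make the structure of that disaggregation concrete, here is a minimal sketch of the nested partition it describes, with probabilities invented purely for illustration (none of these numbers come from the study). If the judgments are probabilistically coherent, each set of sub-scenarios must sum back up to the probability of the scenario it unpacks:

```python
# Hypothetical subjective probabilities, invented for illustration only --
# these are not values reported by Tetlock & Lebow (2001).
p_local = 0.04                # violence stays localized in the Caribbean
p_spread_low = 0.03           # violence spreads, fewer than 100 casualties
p_spread_high_conv = 0.02     # spreads, 100+ casualties, conventional weapons only
p_spread_high_nuclear = 0.01  # spreads, 100+ casualties, extends to nuclear weapons

# Coherence requires each level of the partition to sum to its parent:
p_spread_high = p_spread_high_conv + p_spread_high_nuclear  # the 100+ casualty subset
p_spread = p_spread_low + p_spread_high                     # violence spreads outside the Caribbean
p_violent = p_local + p_spread                              # any violent ending at all

print(f"P(any violent ending) = {p_violent:.2f}")
```

Eliciting the leaf probabilities separately and then checking whether they still add up to the undivided scenario is one way to see whether unpacking has quietly inflated the judgments.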

Again, the experts who generated counterfactual scenarios became less confident of their predictions. The experts with a low need for closure adjusted their opinions considerably more than the ones with a high need for closure.

However, this technique has its dangers as well. More fine-grained scenarios offer an opportunity to tell more detailed stories, and humans give disproportionate weight to detailed stories. Unpacking the various scenarios leads us to give too much weight to the individual subscenarios. You might remember the example of “the USA and Soviet Union suspending relations” being considered less probable than “the Soviet Union invades Poland, and the USA and Soviet Union suspend relations”, even though the second scenario is a subset of the first. People with a low need for closure seem to be especially susceptible to this, while people with a high need for closure tend to produce more logically coherent answers. This might be considered an advantage of a high need for closure – an unwillingness to engage in extended wild goose chases, and thus to avoid assigning minor scenarios a disproportionately high probability.
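
A minimal sketch of the coherence check that such judgments fail (the scenario labels mirror the example above, and the numbers are hypothetical): a conjunction can never be more probable than either of its conjuncts, so any coherent judge must satisfy P(A and B) <= P(B).

```python
# Hypothetical subjective probabilities, for illustration only.
p_suspend = 0.05             # P(USA and USSR suspend relations)
p_invade_and_suspend = 0.10  # P(USSR invades Poland AND relations are suspended)

# The detailed scenario is a subset of the plain one, so coherence
# requires P(A and B) <= P(B).
if p_invade_and_suspend > p_suspend:
    print("Incoherent: the conjunction was judged more probable than its conjunct.")
else:
    print("Coherent.")
```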

References

Tetlock, P.E. (1998) Close-Call Counterfactuals and Belief-System Defenses: I Was Not Almost Wrong But I Was Almost Right. Journal of Personality and Social Psychology, Vol. 75, No. 3, 639-652. http://faculty.haas.berkeley.edu/tetlock/Vita/Philip%20Tetlock/Phil%20Tetlock/1994-1998/1998%20Close-Call%20Counterfactuals%20and%20Belief-System%20Defenses.pdf

Tetlock, P.E. (1999) Theory-Driven Reasoning About Plausible Pasts and Probable Futures in World Politics: Are We Prisoners of Our Preconceptions? American Journal of Political Science, Vol. 43, No. 2, 335-366. http://www.uky.edu/AS/PoliSci/Peffley/pdf/Tetlock%201999%20AJPS%20Theory-driven%20World%20Politics.pdf

Tetlock, P.E. & Lebow, R.N. (2001) Poking Counterfactual Holes in Covering Laws: Cognitive Styles and Historical Reasoning. American Political Science Review, Vol. 95, No. 4. http://faculty.haas.berkeley.edu/tetlock/vita/philip%20tetlock/phil%20tetlock/1999-2000/2001%20poking%20counterfactual%20holes%20in%20covering%20laws....pdf