Simultaneously Right and Wrong

Related to: Belief in Belief, Convenient Overconfidence

“You’ve no idea of what a poor opinion I have of myself, and how little I deserve it.”

-- W.S. Gilbert

In 1978, Steven Berglas and Edward Jones performed a study on voluntary use of performance-inhibiting drugs. They asked subjects to solve certain problems. The control group received simple problems, the experimental group impossible problems. The researchers then told all subjects they’d solved the problems successfully, leaving the controls confident in their own abilities and the experimental group privately aware they’d just made a very lucky guess.

Then they offered the subjects a choice of two drugs to test. One drug supposedly enhanced performance, the other supposedly handicapped it.

There’s a cut here in case you want to predict what happened.

Males in the lucky-guesser group chose the performance-inhibiting drug significantly more often than males in the control group.[1]

The researchers conjectured that the lucky guessers felt good about acing the first test. They anticipated failing the second whether they chose the helpful or the harmful drug. So they chose the harmful drug as an excuse: “Oh, I would have passed the test, only the drug was making me stupid.” As the study points out, this is a win-win situation: if they fail, the drug excuses their failure, and if they succeed, it’s doubly impressive that they passed even with a handicap.

Since this study, psychologists have applied self-handicapping to interpret common phenomena like procrastination, hypochondria, and minority underachievement; they’ve also discovered factors that increase or decrease the tendency to self-handicap (for example, you can increase it by making the test more relevant to the subject’s self-esteem: just say “this is a proven test of intellectual ability in general”).

But some especially interesting studies investigated the effect of privacy on self-handicapping. For example, Hobden conducted an experiment similar to Berglas and Jones’s, albeit with tapes of performance-enhancing or performance-handicapping music instead of drugs. The twist: for half the subjects, the choice of tape and the test score were public; the other half believed their choice of tape and their test score were anonymous. What happens when no one but the subject himself will ever know his test score? He self-handicaps just as often as everyone else. And it seems to *work*: the same set of studies showed that subjects who self-handicap on a test are less likely to attribute their failure on the test to their own incompetence.

In order to self-handicap, subjects must have an inaccurate assessment of their own abilities. Otherwise, there’s no self-esteem to protect. If I believe my IQ is 80, and I score 80 on an IQ test, I have no incentive to make excuses to myself, or to try to explain away the result. The only time I would want to explain away the result as due to some external factor is if I’d been going around thinking my real IQ was 100.

But subjects must also have an accurate assessment of their own abilities. Subjects who take an easy pre-test and expect an easy test do not self-handicap. Only subjects who understand their low chances of success can think “I will probably fail this test, so I will need an excuse.”[2]

If this sounds familiar, it’s because it’s another form of the dragon problem from Belief in Belief. The believer says there is a dragon in his garage, but expects all attempts to detect the dragon’s presence to fail. Eliezer writes: “The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he’ll need to excuse.”

Should we say that the subject believes he will get an 80, but believes in believing that he will get a 100? This doesn’t quite capture the spirit of the situation. Classic belief in belief seems to involve value judgments and complex belief systems, but self-handicapping seems more like simple overconfidence bias.[3] Is there any other evidence that overconfidence has a belief-in-belief aspect to it?

Last November, Robin described a study in which subjects were less overconfident when asked to predict their performance on tasks they would actually be expected to complete. He ended by noting that “It is almost as if we at some level realize that our overconfidence is unrealistic.”

Religious belief in belief and self-confidence seem to be two areas in which we can be simultaneously right and wrong: expressing a biased position on a superficial level while holding an accurate position on a deeper level. The specifics differ in each case, but the same general mechanism may underlie both. How many other biases use this mechanism?

Footnotes

[1]: In most studies of this effect, it is observed most commonly among males. The reasons are too complicated and controversial to discuss in this post, but are left as an exercise for the reader with a background in evolutionary psychology.

[2]: Compare the ideal Bayesian, for whom the expected future expectation is always the same as the current expectation, and investors in an ideal stock market, who must always expect a stock’s price tomorrow to be on average the same as its price today, with this poor creature, who accurately predicts that he will lower his estimate of his intelligence after taking the test, but who doesn’t use that prediction to change his pre-test estimate.
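As a minimal formal sketch of this footnote (the notation here is mine, not the original post’s): by the law of total expectation, a Bayesian’s current estimate already averages over every result the upcoming test could produce, so the estimate he expects to hold after the test equals the one he holds now; an idealized stock price (ignoring interest and risk premia) satisfies the analogous martingale condition.

\[
\mathbb{E}\big[\,\mathbb{E}[\mathrm{IQ} \mid \text{test result}]\,\big] = \mathbb{E}[\mathrm{IQ}],
\qquad
\mathbb{E}[P_{t+1} \mid \text{information at } t] = P_t .
\]

The self-handicapper violates the first equation: he is already certain that his post-test estimate will be lower than his pre-test estimate, yet he leaves the pre-test estimate unchanged.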

[3]: I have seen “overconfidence bias” used in two different ways: to mean poor calibration on guesses (i.e., predictions made with 99% certainty that are right only 70% of the time) and to mean the tendency to overestimate one’s own good qualities and chances of success. I am using the latter definition here to remain consistent with the common usage on Overcoming Bias; other people may call this same error “optimism bias”.