Burdensome Details

Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative . . .

—Pooh-Bah, in Gilbert and Sullivan’s The Mikado

The conjunction fallacy is when humans assign a higher probability to a proposition of the form “A and B” than to one of the propositions “A” or “B” in isolation, even though it is a theorem that conjunctions are never likelier than their conjuncts. For example, in one experiment, 68% of the subjects ranked it more likely that “Reagan will provide federal support for unwed mothers and cut federal support to local governments” than that “Reagan will provide federal support for unwed mothers.”1

A long series of cleverly designed experiments, which weeded out alternative hypotheses and nailed down the standard interpretation, confirmed that the conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.”2 By adding extra details, you can make an outcome seem more characteristic of the process that generates it. You can make it sound more plausible that Reagan will support unwed mothers, by adding the claim that Reagan will also cut support to local governments. The implausibility of one claim is compensated by the plausibility of the other; they “average out.”

Which is to say: Adding detail can make a scenario sound more plausible, even though the event necessarily becomes less probable.
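The theorem behind this is one line of probability algebra. As a minimal sketch with made-up numbers for the Reagan propositions (the 0.10 and 0.50 here are illustrative assumptions, not experimental values):

```python
# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A),
# because P(B|A) can never exceed 1.
p_supports_unwed = 0.10       # hypothetical P(A): Reagan supports unwed mothers
p_cut_given_support = 0.50    # hypothetical P(B|A): cuts local support, given A

p_conjunction = p_supports_unwed * p_cut_given_support  # P(A and B) = 0.05

# The conjunction can only tie P(A) when P(B|A) = 1; it can never exceed it.
assert p_conjunction <= p_supports_unwed
```

However plausible the added detail sounds, multiplying by its conditional probability can only shrink the total.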

If so, then, hypothetically speaking, we might find futurists spinning unconscionably plausible and detailed future histories, or find people swallowing huge packages of unsupported claims bundled with a few strong-sounding assertions at the center.

If you are presented with the conjunction fallacy in a naked, direct comparison, then you may succeed on that particular problem by consciously correcting yourself. But this is only slapping a band-aid on the problem, not fixing it in general.

In the 1982 experiment where professional forecasters assigned systematically higher probabilities to “Russia invades Poland, followed by suspension of diplomatic relations between the USA and the USSR” than to “Suspension of diplomatic relations between the USA and the USSR,” each experimental group was only presented with one proposition.3 What strategy could these forecasters have followed, as a group, that would have eliminated the conjunction fallacy, when no individual knew directly about the comparison? When no individual even knew that the experiment was about the conjunction fallacy? How could they have done better on their probability judgments?

Patching one gotcha as a special case doesn’t fix the general problem. The gotcha is the symptom, not the disease.

What could the forecasters have done to avoid the conjunction fallacy, without seeing the direct comparison, or even knowing that anyone was going to test them on the conjunction fallacy? It seems to me that they would need to notice the word “and.” They would need to be wary of it—not just wary, but leap back from it. Even without knowing that researchers were afterward going to test them on the conjunction fallacy particularly. They would need to notice the conjunction of two entire details, and be shocked by the audacity of anyone asking them to endorse such an insanely complicated prediction. And they would need to penalize the probability substantially—a factor of four, at least, according to the experimental details.

It might also have helped the forecasters to think about possible reasons why the US and Soviet Union would suspend diplomatic relations. The scenario is not “The US and Soviet Union suddenly suspend diplomatic relations for no reason,” but “The US and Soviet Union suspend diplomatic relations for any reason.”

And the subjects who rated “Reagan will provide federal support for unwed mothers and cut federal support to local governments”? Again, they would need to be shocked by the word “and.” Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them. They would need to think, “Reagan might or might not cut support to local governments (1 bit), but it seems very unlikely that he will support unwed mothers (4 bits). Total absurdity: 5 bits.” Or maybe, “Reagan won’t support unwed mothers. One strike and it’s out. The other proposition just makes it even worse.”
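The “adding absurdities” arithmetic can be made concrete. A sketch using the illustrative numbers above (1 bit and 4 bits are stand-ins from the text, not measured probabilities):

```python
import math

def absurdity_bits(p):
    """Surprisal in bits: -log2(p). Surprisals add across independent
    claims, because the underlying probabilities multiply."""
    return -math.log2(p)

p_cuts_local = 0.5         # "might or might not" -> 1 bit
p_supports_unwed = 1 / 16  # "very unlikely" -> 4 bits

# Absurdities add; they do not average.
total_bits = absurdity_bits(p_cuts_local) + absurdity_bits(p_supports_unwed)
p_conjunction = 2 ** -total_bits  # 5 bits -> probability 1/32

# The fallacious "averaging" intuition would report something near
# (0.5 + 1/16) / 2, about 0.28 -- *more* probable than the weaker
# conjunct alone, which no conjunction can ever be.
assert p_conjunction < p_supports_unwed
```

Working in log probability makes the burden of each added detail literally additive: every conjunct can only pile more bits on.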

Similarly, consider Tversky and Kahneman’s (1983) experiment based around a six-sided die with four green faces and two red faces.4 The subjects had to bet on the sequence (1) RGRRR, (2) GRGRRR, or (3) GRRRRR appearing anywhere in twenty rolls of the die. Sixty-five percent of the subjects chose GRGRRR, which is strictly dominated by RGRRR, since any sequence containing GRGRRR also pays off for RGRRR. How could the subjects have done better? By noticing the inclusion? Perhaps; but that is only a band-aid; it does not fix the fundamental problem. By explicitly calculating the probabilities? That would certainly fix the fundamental problem, but you can’t always calculate an exact probability.
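When an exact calculation is possible, it settles the matter. Here is one sketch (the die and sequences are from the experiment; the pattern-matching-automaton approach is my own choice, not the paper’s): it computes the exact probability that a pattern appears somewhere in twenty rolls, with P(green) = 2/3, by tracking how much of the pattern the roll sequence currently matches.

```python
def appearance_prob(pattern, n_rolls=20, p_green=2/3):
    """Exact P(pattern occurs as a contiguous run within n_rolls rolls),
    via a KMP-style matching automaton evolved as a Markov chain."""
    m = len(pattern)
    # Failure links: longest proper prefix of the pattern that is
    # also a suffix of pattern[:i+1].
    fail = [0] * m
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    def step(state, c):
        # Advance the automaton; state = number of pattern chars matched.
        while state and pattern[state] != c:
            state = fail[state - 1]
        return state + 1 if pattern[state] == c else state

    probs = {"G": p_green, "R": 1 - p_green}
    dist = [1.0] + [0.0] * m          # dist[s] = P(automaton in state s)
    for _ in range(n_rolls):
        new = [0.0] * (m + 1)
        new[m] = dist[m]              # state m (full match) is absorbing
        for s in range(m):
            for c, p in probs.items():
                new[step(s, c)] += dist[s] * p
        dist = new
    return dist[m]

p1 = appearance_prob("RGRRR")    # Sequence 1
p2 = appearance_prob("GRGRRR")   # Sequence 2 contains RGRRR, so p2 <= p1
assert p2 < p1
```

Running this shows Sequence 1 strictly more probable than Sequence 2, exactly as the inclusion argument guarantees: the extra leading G is one more roll that has to come out right.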

The subjects lost heuristically by thinking: “Aha! Sequence 2 has the highest proportion of green to red! I should bet on Sequence 2!” To win heuristically, the subjects would need to think: “Aha! Sequence 1 is short! I should go with Sequence 1!”

They would need to feel a stronger emotional impact from Occam’s Razor—feel every added detail as a burden, even a single extra roll of the die.

Once upon a time, I was speaking to someone who had been mesmerized by an incautious futurist (one who adds on lots of details that sound neat). I was trying to explain why I was not likewise mesmerized by these amazing, incredible theories. So I explained about the conjunction fallacy, specifically the “suspending relations ± invading Poland” experiment. And he said, “Okay, but what does this have to do with—” And I said, “It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it.” And he said, “Oh.”

Until then, he had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative. Someone presents you with a package of strange ideas, one of which is that universes replicate. Then they present support for the assertion that universes replicate. But this is not support for the package, though it is all told as one story.

You have to disentangle the details. You have to hold up every one independently, and ask, “How do we know this detail?” Someone sketches out a picture of humanity’s descent into nanotechnological warfare, where China refuses to abide by an international control agreement, followed by an arms race . . . Wait a minute—how do you know it will be China? Is that a crystal ball in your pocket or are you just happy to be a futurist? Where are all these details coming from? Where did that specific detail come from?

For it is written:

If you can lighten your burden you must do so.

There is no straw that lacks the power to break your back.

1Amos Tversky and Daniel Kahneman, “Judgments of and by Representativeness: Heuristics and Biases,” in Judgment Under Uncertainty, ed. Daniel Kahneman, Paul Slovic, and Amos Tversky (New York: Cambridge University Press, 1982), 84–98.

2See Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315, and Daniel Kahneman and Shane Frederick, “Representativeness Revisited: Attribute Substitution in Intuitive Judgment,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002) for more information.

3Tversky and Kahneman, “Extensional Versus Intuitive Reasoning.”

4Ibid.