Do We Believe Everything We’re Told?

Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively “busy” by asking them to keep a lookout for “5” in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors. Most of the experiments seemed to bear out the idea that being cognitively busy increased anchoring and, more generally, contamination.

Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging: Do we believe everything we’re told?

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y’know, logical and intuitive.1 But Gilbert saw a way of testing Descartes’s and Spinoza’s hypotheses experimentally.

If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements. If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.

Gilbert, Krull, and Malone bore out this prediction, showing that, among subjects presented with novel statements labeled true or false, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted), but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted).2

A much more dramatic illustration was produced in followup experiments by Gilbert, Tafarodi, and Malone.3 Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false. Some reports contained false statements that exacerbated the severity of the crime; other reports contained false statements that extenuated (excused) the crime. Some subjects also had to pay attention to strings of digits, looking for a “5,” while reading the crime reports—this being the distraction task to create cognitive busyness. Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.

Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the “exacerbating” condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime. Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime. This nearly twofold difference was, as you might suspect, statistically significant.

Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number “5.” Thus, they could devote more attention to “unbelieving” statements labeled false. These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused the crime.

Gilbert, Tafarodi, and Malone’s paper was entitled “You Can’t Not Believe Everything You Read.”

This suggests—to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.

PS: According to an unverified RUMOR I just made up, people will be less skeptical of this essay BECAUSE OF THE DISTRACTING FONT CHANGES.

1See Robin Hanson, “Policy Tug-O-War,” Overcoming Bias (blog), 2007, http://www.overcomingbias.com/2007/05/policy_tugowar.html.

2Daniel T. Gilbert, Douglas S. Krull, and Patrick S. Malone, “Unbelieving the Unbelievable: Some Problems in the Rejection of False Information,” Journal of Personality and Social Psychology 59, no. 4 (1990): 601–613.

3Daniel T. Gilbert, Romin W. Tafarodi, and Patrick S. Malone, “You Can’t Not Believe Everything You Read,” Journal of Personality and Social Psychology 65, no. 2 (1993): 221–233.