Rationality: An Introduction

Introduction

by Rob Bensinger

In the autumn of 1951, a football game between Dartmouth and Princeton turned unusually rough. A pair of psychologists, Dartmouth’s Albert Hastorf and Princeton’s Hadley Cantril, decided to ask students from both schools which team had initiated the rough play. Nearly everyone agreed that Princeton hadn’t started it; but 86% of Princeton students believed that Dartmouth had started it, whereas only 36% of Dartmouth students blamed Dartmouth. (Most Dartmouth students believed “both started it.”)

When shown a film of the game later and asked to count the infractions they saw, Dartmouth students claimed to see a mean of 4.3 infractions by the Dartmouth team (and identified half as “mild”), whereas Princeton students claimed to see a mean of 9.8 Dartmouth infractions (and identified a third as “mild”).1

When something we value is threatened—our world-view, our in-group, our social standing, or something else we care about—our thoughts and perceptions rally to their defense.2,3 Some psychologists go so far as to hypothesize that the human ability to come up with explicit justifications for our conclusions evolved specifically to help us win arguments.4

One of the basic insights of 20th-century psychology is that human behavior is often driven by sophisticated unconscious processes, and the stories we tell ourselves about our motives and reasons are much more biased and confabulated than we realize. We often fail, in fact, to realize that we’re doing any story-telling. When we seem to “directly perceive” things about ourselves in introspection, it often turns out to rest on tenuous implicit causal models.5,6 When we try to argue for our beliefs, we can come up with shaky reasoning bearing no relation to how we first arrived at the belief.7 Rather than trusting explanations in proportion to their predictive power, we tend to trust stories in proportion to their psychological appeal.

How can we do better? How can we arrive at a realistic view of the world, when we’re so prone to rationalization? How can we come to a realistic view of our mental lives, when our thoughts about thinking are also suspect?

What’s the least shaky place we could put our weight down?

At the turn of the 20th century, coming up with simple (e.g., set-theoretic) axioms for arithmetic gave mathematicians a clearer standard by which to judge the correctness of their conclusions. If a human or calculator outputs “2 + 2 = 4,” we can now do more than just say “that seems intuitively right.” We can explain why it’s right, and we can prove that its rightness is tied in systematic ways to the rightness of the rest of arithmetic.

But mathematics lets us model the behaviors of physical systems that are a lot more interesting than a pocket calculator. We can also formalize rational belief in general, using probability theory to pick out features held in common by all successful forms of inference. We can even formalize rational behavior in general by drawing upon decision theory.

Probability theory defines how we would ideally reason in the face of uncertainty, if we had the requisite time, computing power, and mental control. Given some background knowledge (priors) and a new piece of evidence, probability theory uniquely and precisely defines the best set of new beliefs (posterior) I could adopt. Likewise, decision theory defines what action I should take based on my beliefs. For any consistent set of beliefs and preferences I could have, there is a decision-theoretic answer to how I should then act in order to satisfy my preferences.
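
In symbols: writing H for a hypothesis and E for the new evidence, Bayes’s theorem fixes that posterior. (This is the standard textbook statement, not anything special to this book; the odds form below is the version used in the worked example that follows.)

    P(H | E) = P(E | H) × P(H) / P(E)

or, equivalently, in odds form:

    posterior odds for H = prior odds for H × [P(E | H) / P(E | not-H)]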

Suppose you find out that one of your six classmates has a crush on you—perhaps you get a letter from a secret admirer, and you’re sure it’s from one of those six—but you have no idea which of the six it is. Bob happens to be one of those six classmates. If you have no special reason to think Bob’s any likelier (or any less likely) than the other five candidates, then what are the odds that Bob is the one with the crush?

Answer: The odds are 1:5. There are six possibilities, so a wild guess would result in you getting it right once for every five times you got it wrong, on average.

We can’t say, “Well, I have no idea who has a crush on me; maybe it’s Bob, or maybe it’s not. So I’ll just say the odds are fifty-fifty.” Even if we would rather say “I don’t know” or “Maybe” and stop there, the right answer is still 1:5. This follows from the assumption that there are six possibilities and you have no reason to favor one of them over any of the others.8

Suppose that you’ve also noticed you get winked at by people ten times as often when they have a crush on you. If Bob then winks at you, that’s a new piece of evidence. In that case, it would be a mistake to stay skeptical about whether Bob is your secret admirer; the 10:1 odds in favor of “a random person who winks at me has a crush on me” outweigh the 1:5 odds against “Bob has a crush on me.”

It would also be a mistake to say, “That evidence is so strong, it’s a sure bet that he’s the one who has the crush on me! I’ll just assume from now on that Bob is into me.” Overconfidence is just as bad as underconfidence.

In fact, there’s only one viable answer to this question too. To change our mind from the 1:5 prior odds in response to the evidence’s 10:1 likelihood ratio, we multiply the left sides together and the right sides together, getting 10:5 posterior odds, or 2:1 odds in favor of “Bob has a crush on me.” Given our assumptions and the available evidence, guessing that Bob has a crush on you will turn out to be correct 2 times for every 1 time it turns out to be wrong. Equivalently: the probability that he’s attracted to you is 2/3. Any other confidence level would be inconsistent.
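
For readers who want the arithmetic spelled out, here is a minimal sketch of that update in Python. The function and variable names are purely illustrative; the calculation is just the odds-form multiplication described above.

    # Bayes's rule in odds form: posterior odds = prior odds * likelihood ratio.
    def update_odds(prior, likelihood_ratio):
        """Multiply the left sides together and the right sides together."""
        return (prior[0] * likelihood_ratio[0], prior[1] * likelihood_ratio[1])

    prior = (1, 5)        # one admirer among six equally likely classmates: 1:5 that it's Bob
    likelihood = (10, 1)  # winks are ten times as common from someone with a crush

    post = update_odds(prior, likelihood)        # (10, 5), i.e., 2:1 in favor of Bob
    probability = post[0] / (post[0] + post[1])  # 10 / 15 = 2/3

    print(f"Posterior odds {post[0]}:{post[1]}, probability {probability:.2f}")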

It turns out that given very modest constraints, the question “What should I believe?” has an objectively right answer. It has a right answer when you’re wracked with uncertainty, not just when you have a conclusive proof. There is always a correct amount of confidence to have in a statement, even when it looks more like a “personal belief” than an expert-verified “fact.”

Yet we often talk as though the existence of uncertainty and disagreement makes beliefs a mere matter of taste. We say “that’s just my opinion” or “you’re entitled to your opinion,” as though the assertions of science and math existed on a different and higher plane than beliefs that are merely “private” or “subjective.” To which economist Robin Hanson has responded:9

You are never entitled to your opinion. Ever! You are not even entitled to “I don’t know.” You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie. [ . . . ]

It is true that some topics give experts stronger mechanisms for resolving disputes. On other topics our biases and the complexity of the world make it harder to draw strong conclusions. [ . . . ]

But never forget that on any question about the way things are (or should be), and in any information situation, there is always a best estimate. You are only entitled to your best honest effort to find that best estimate; anything else is a lie.

Our culture hasn’t internalized the lessons of probability theory—that the correct answer to questions like “How sure can I be that Bob has a crush on me?” is just as logically constrained as the correct answer to a question on an algebra quiz or in a geology textbook.

Our brains are kludges slapped together by natural selection. Humans aren’t perfect reasoners or perfect decision-makers, any more than we’re perfect calculators. Even at our best, we don’t compute the exact right answer to “what should I think?” and “what should I do?”10

And yet, knowing we can’t become fully consistent, we can certainly still get better. Knowing that there’s an ideal standard we can compare ourselves to—what researchers call Bayesian rationality—can guide us as we improve our thoughts and actions. Though we’ll never be perfect Bayesians, the mathematics of rationality can help us understand why a certain answer is correct, and help us spot exactly where we messed up.

Imagine trying to learn math through rote memorization alone. You might be told that “10 + 3 = 13,” “31 + 108 = 139,” and so on, but it won’t do you a lot of good unless you understand the pattern behind the squiggles. It can be a lot harder to seek out methods for improving your rationality when you don’t have a general framework for judging a method’s success. The purpose of this book is to help people build for themselves such frameworks.

The tightly linked essays in How to Actually Change Your Mind were originally written by Eliezer Yudkowsky for the blog Overcoming Bias. Published in the late 2000s, these posts helped inspire the growth of a vibrant community interested in rationality and self-improvement.

Map and Territory was the first such collection. How to Actually Change Your Mind is the second. The full six-book set, titled Rationality: From AI to Zombies, can be found on Less Wrong at http://lesswrong.com/rationality.

One of the rationality community’s most popular writers, Scott Alexander, has previously observed:11

[O]bviously it’s useful to have as much evidence as possible, in the same way it’s useful to have as much money as possible. But equally obviously it’s useful to be able to use a limited amount of evidence wisely, in the same way it’s useful to be able to use a limited amount of money wisely.

Rationality techniques help us get more mileage out of the evidence we have, in cases where the evidence is inconclusive or our biases are distorting how we interpret the evidence.

This applies to our personal lives, as in the tale of Bob. It applies to disagreements between political factions and sports fans. And it applies to philosophical puzzles and debates about the future trajectory of technology and society. Recognizing that the same mathematical rules apply to each of these domains (and that in many cases the same cognitive biases crop up), How to Actually Change Your Mind freely moves between a wide range of topics.

The first sequence of essays in this book, Overly Convenient Excuses, focuses on probabilistically “easy” questions—ones where the odds are extreme, and systematic errors seem like they should be particularly easy to spot.…

From there, we move into murkier waters with Politics and Rationality. Politics—or rather, mainstream national politics of the sort debated by TV pundits—is famous for its angry, unproductive discussions. On the face of it, there’s something surprising about that. Why do we take political disagreements so personally, even though the machinery and effects of national politics are often so distant from us in space or in time? For that matter, why do we not become more careful and rigorous with the evidence when we’re dealing with issues we deem important?

The Dartmouth-Princeton game hints at an answer. Much of our reasoning process is really rationalization—story-telling that makes our current beliefs feel more coherent and justified, without necessarily improving their accuracy. Against Rationalization speaks to this problem, followed by Seeing with Fresh Eyes, on the challenge of recognizing evidence that doesn’t fit our expectations and assumptions.

In practice, leveling up in rationality often means encountering interesting and powerful new ideas and colliding more with the in-person rationality community. Death Spirals discusses some important hazards that can afflict groups united around common interests and amazing shiny ideas, which rationalists will need to overcome if they’re to translate their high-minded ideas into real-world effectiveness. How to Actually Change Your Mind then concludes with a sequence on Letting Go.

Our natural state isn’t to change our minds like a Bayesian would. Getting the Dartmouth and Princeton students to notice what they’re actually seeing won’t be as easy as reciting the axioms of probability theory to them. As philanthropic research analyst Luke Muehlhauser writes in “The Power of Agency”:12

You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases.

You just are cognitive biases.

Confirmation bias, status quo bias, correspondence bias, and the like are not tacked on to our reasoning; they are its very substance.

That doesn’t mean that debiasing is impossible. We aren’t perfect calculators underneath all our arithmetic errors, either. Many of our mathematical limitations result from very deep facts about how the human brain works. Yet we can train our mathematical abilities; we can learn when to trust and distrust our mathematical intuitions; we can shape our environments to make things easier on us. And if we’re wrong today, we can be less so tomorrow.

1. Albert Hastorf and Hadley Cantril, “They Saw a Game: A Case Study,” Journal of Abnormal and Social Psychology 49 (1954): 129–134, http://www2.psych.ubc.ca/~schaller/Psyc590Readings/Hastorf1954.pdf.

2. Emily Pronin, “How We See Ourselves and How We See Others,” Science 320 (2008): 1177–1180.

3. Robert P. Vallone, Lee Ross, and Mark R. Lepper, “The Hostile Media Phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the Beirut Massacre,” Journal of Personality and Social Psychology 49 (1985): 577–585, http://ssc.wisc.edu/~jpiliavi/965/hwang.pdf.

4. Hugo Mercier and Dan Sperber, “Why Do Humans Reason? Arguments for an Argumentative Theory,” Behavioral and Brain Sciences 34 (2011): 57–74, http://hal.archives-ouvertes.fr/file/index/docid/904097/filename/MercierSperberWhydohumansreason.pdf.

5. Richard E. Nisbett and Timothy D. Wilson, “Telling More than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84 (1977): 231–259, http://people.virginia.edu/~tdw/nisbett&wilson.pdf.

6. Eric Schwitzgebel, Perplexities of Consciousness (MIT Press, 2011).

7. Jonathan Haidt, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review 108, no. 4 (2001): 814–834, doi:10.1037/0033-295X.108.4.814.

8. We’re also assuming, unrealistically, that you can really be certain the admirer is one of those six people, and that you aren’t neglecting other possibilities. (What if more than one of your classmates has a crush on you?)

9. Robin Hanson, “You Are Never Entitled to Your Opinion,” Overcoming Bias (Blog), 2006, http://www.overcomingbias.com/2006/12/you_are_never_e.html.

10. We lack the computational resources (and evolution lacked the engineering expertise and foresight) to iron out all our bugs. Indeed, even a maximally efficient reasoner in the real world would still need to rely on heuristics and approximations. The best possible computationally tractable algorithms for changing beliefs would still fall short of probability theory’s consistency.

11. Scott Alexander, “Why I Am Not Rene Descartes,” Slate Star Codex (Blog), 2014, http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/.

12. Luke Muehlhauser, “The Power of Agency,” Less Wrong (Blog), 2011, http://lesswrong.com/lw/5i8/the_power_of_agency/.
