Rationality: An Introduction


What should I believe?

As it turns out, that question has a right answer.

It has a right answer when you’re wracked with uncertainty, not just when you have a conclusive proof. There is always a correct amount of confidence to have in a statement, even when it looks like a “personal belief” and not like an expert-verified “fact.”

Yet we often talk as though the existence of uncertainty and disagreement makes beliefs a mere matter of taste. We say “that’s just my opinion” or “you’re entitled to your opinion,” as though the assertions of science and math existed on a different and higher plane than beliefs that are merely “private” or “subjective.” But, writes Robin Hanson:[1]

You are never entitled to your opinion. Ever! You are not even entitled to “I don’t know.” You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie. [...]

It is true that some topics give experts stronger mechanisms for resolving disputes. On other topics our biases and the complexity of the world make it harder to draw strong conclusions. [...]

But never forget that on any question about the way things are (or should be), and in any information situation, there is always a best estimate. You are only entitled to your best honest effort to find that best estimate; anything else is a lie.

Suppose you find out that one of six people has a crush on you—perhaps you get a letter from a secret admirer and you’re sure it’s from one of those six—but you have no idea which of those six it is. Your classmate Bob is one of the six candidates, but you have no special evidence for or against him being the one with the crush. In that case, the odds that Bob is the one with the crush are 1:5.

Because there are six possibilities, a wild guess would result in you getting it right once for every five times you got it wrong, on average. This is what we mean by “the odds are 1:5.” You can’t say, “Well, I have no idea who has a crush on me; maybe it’s Bob, or maybe it’s not. So I’ll just say the odds are fifty-fifty.” Even if you’d rather say “I don’t know” or “Maybe” and stop there, the answer is still 1:5.[2]
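The claim that a wild guess is right once for every five wrong guesses can be checked with a short simulation. This sketch only encodes the six-candidates setup above; the choice of which candidate is “the admirer” is arbitrary:

```python
import random

# Six equally likely candidates; arbitrarily, candidate 0 is the admirer.
# With no evidence favoring anyone, a guess is right 1 time in 6 on
# average -- one right guess for every five wrong ones, i.e. odds of 1:5.
trials = 100_000
right = sum(random.randrange(6) == 0 for _ in range(trials))
wrong = trials - right

print(f"right:wrong is about 1:{wrong / right:.2f}")  # hovers near 1:5
```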

Suppose also that you’ve noticed you get winked at by people ten times as often when they have a crush on you. If Bob then winks at you, that’s a new piece of evidence. In that case, it would be a mistake to stay skeptical about whether Bob is your secret admirer; the 10:1 odds in favor of “a random person who winks at me has a crush on me” outweigh the 1:5 odds against “Bob has a crush on me.”

It would also be a mistake to say, “That evidence is so strong, it’s a sure bet that he’s the one who has the crush on me! I’ll just assume from now on that Bob is into me.” Overconfidence is just as bad as underconfidence.

In fact, there’s only one possible answer to this question that’s mathematically consistent. To change our mind from the 1:5 prior odds based on the evidence’s 10:1 likelihood ratio, we multiply the left sides together and the right sides together, getting 10:5 posterior odds, or 2:1 odds in favor of “Bob has a crush on me.” Given our assumptions and the available evidence, guessing that Bob has a crush on you will turn out to be correct 2 times for every 1 time it turns out to be wrong. Equivalently: the probability that he’s attracted to you is 2/3. Any other confidence level would be inconsistent.
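The multiply-the-sides rule is easy to mechanize. Here is a minimal sketch of the odds-form update (the function name is my own, not standard terminology):

```python
from fractions import Fraction

def update_odds(prior, likelihood_ratio):
    """Multiply prior odds by the likelihood ratio, side by side."""
    (prior_for, prior_against), (lr_for, lr_against) = prior, likelihood_ratio
    return (prior_for * lr_for, prior_against * lr_against)

# 1:5 prior that Bob is the admirer; the wink carries a 10:1 likelihood ratio.
posterior = update_odds((1, 5), (10, 1))  # (10, 5), i.e. 2:1 in favor

# Odds of f:a correspond to a probability of f / (f + a).
probability = Fraction(posterior[0], posterior[0] + posterior[1])
print(posterior, probability)  # (10, 5) 2/3
```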

Our culture hasn’t internalized the lessons of probability theory—that the correct answer to questions like “How sure can I be that Bob has a crush on me?” is just as logically constrained as the correct answer to a question on an algebra quiz or in a geology textbook. Our clichés are out of step with the discovery that “what beliefs should I hold?” has an objectively right answer, whether your question is “does my classmate have a crush on me?” or “do I have an immortal soul?” There really is a right way to change your mind. And it’s a precise way.

How to Not Actually Change Your Mind

Revising our beliefs in anything remotely like this idealized way is a tricky task, however.

In the first volume of Rationality: From AI to Zombies, we discussed the value of “proper” beliefs. There’s nothing intrinsically wrong with expressing your support for something you care about—like a group you identify with, or a spiritual experience you find exalting. When we conflate cheers with factual beliefs, however, those misunderstood cheers can help shield an entire ideology from contamination by the evidence.

Even beliefs that seem to elegantly explain our observations aren’t immune to this problem. It’s all too easy for us to see a vaguely scientific-sounding (or otherwise authoritative) phrase and conclude that it has “explained” something, even when it doesn’t affect the odds we implicitly assign to our possible future experiences.

Worst of all, prosaic beliefs—beliefs that are in principle falsifiable, beliefs that do constrain what we expect to see—can still get stuck in our heads, reinforced by a network of illusions and biases.

In 1951, a football game between Dartmouth and Princeton turned unusually rough. Psychologists Hastorf and Cantril asked students from each school who had started the rough play. Nearly all agreed that Princeton hadn’t started it; but 86% of Princeton students believed that Dartmouth had started it, whereas only 36% of Dartmouth students blamed Dartmouth. (Most Dartmouth students believed “both started it.”)

There’s no reason to think this was a cheer, as opposed to a real belief. The students were probably led by their different beliefs to make different predictions about the behavior of players in future games. And yet somehow the perfectly ordinary factual beliefs at Dartmouth were wildly different from the perfectly ordinary factual beliefs at Princeton.

Can we blame this on the different sources Dartmouth and Princeton students had access to? On its own, bias in the different news sources that groups rely on is a pretty serious problem.

However, there is more than that at work in this case. When actually shown a film of the game later and asked to count the infractions they saw, Dartmouth students claimed to see a mean of 4.3 infractions by the Dartmouth team (and identified half as “mild”), whereas Princeton students claimed to see a mean of 9.8 Dartmouth infractions (and identified a third as “mild”).

Never mind getting rival factions to agree about complicated propositions in national politics or moral philosophy; students with different group loyalties couldn’t even agree on what they were seeing.[3]

When something we care about is threatened—our worldview, our ingroup, our social standing, or anything else—our thoughts and perceptions rally to their defense.[4,5] Some psychologists these days go so far as to hypothesize that our ability to come up with explicit justifications for our conclusions evolved specifically to help us win arguments.[6]

One of the defining insights of 20th-century psychology, animating everyone from the disciples of Freud to present-day cognitive psychologists, is that human behavior is often driven by sophisticated unconscious processes, and the stories we tell ourselves about our motives and reasons are much more biased and confabulated than we realize.

We often fail, in fact, to realize that we’re doing any storytelling. When we seem to “directly perceive” things about ourselves in introspection, it often turns out to rest on tenuous implicit causal models.[7,8] When we try to argue for our beliefs, we can come up with shaky reasoning bearing no relation to how we first arrived at the belief.[9] Rather than judging our explanations by their predictive power, we tell stories to make sense of what we think we know.

How can we do better? How can we arrive at a realistic view of the world, when our minds are so prone to rationalization? How can we come to a realistic view of our mental lives, when our thoughts about thinking are also suspect? How can we become less biased, when our efforts to debias ourselves can turn out to have biases of their own?

What’s the least shaky place we could put our weight down?

The Mathematics of Rationality

At the turn of the 20th century, coming up with simple (e.g., set-theoretic) axioms for arithmetic gave mathematicians a clearer standard by which to judge the correctness of their conclusions. If a human or calculator outputs “2 + 2 = 4,” we can now do more than just say “that seems intuitively right.” We can explain why it’s right, and we can prove that its rightness is tied in systematic ways to the rightness of the rest of arithmetic.

But mathematics and logic let us model the behaviors of physical systems that are a lot more interesting than a pocket calculator. We can also formalize rational belief in general, using probability theory to pick out features held in common by all successful forms of inference. We can even formalize rational behavior in general by drawing upon decision theory.

Probability theory defines how we would ideally reason in the face of uncertainty, if we had the time, the computing power, and the self-control. Given some background knowledge (priors) and a new piece of evidence, probability theory uniquely defines the best set of new beliefs (posterior) I could adopt. Likewise, decision theory defines what action I should take based on my beliefs. For any consistent set of beliefs and preferences I could have about Bob, there is a decision-theoretic answer to how I should then act in order to satisfy my preferences.
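To make that concrete, here is a toy decision-theoretic calculation about Bob. The payoff numbers and action labels are purely illustrative assumptions of mine; the point is only the shape of the rule: weight each action’s payoffs by your beliefs, then take the action with the highest expected utility.

```python
from fractions import Fraction

p_crush = Fraction(2, 3)  # posterior from the wink example

# Hypothetical utilities for each (action, Bob-has-a-crush?) outcome.
utility = {
    ("ask him out", True): 10,
    ("ask him out", False): -2,
    ("do nothing", True): 0,
    ("do nothing", False): 0,
}

def expected_utility(action):
    """Average the action's payoffs, weighted by your degree of belief."""
    return (p_crush * utility[(action, True)]
            + (1 - p_crush) * utility[(action, False)])

best = max(("ask him out", "do nothing"), key=expected_utility)
print(best, expected_utility(best))  # ask him out 6
```

With these numbers, asking has expected utility 2/3 · 10 + 1/3 · (−2) = 6, versus 0 for doing nothing; different preferences (different utilities) could flip the answer, but for any fixed beliefs and preferences the arithmetic picks out one best action.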

Humans aren’t perfect reasoners or perfect decision-makers, any more than we’re perfect calculators. Our brains are kludges slapped together by natural selection. Even at our best, we don’t compute the exact right answer to “what should I think?” and “what should I do?” We lack the time and computing power, and evolution lacked the engineering expertise and foresight, to iron out all our bugs.

A maximally efficient bug-free reasoner in the real world, in fact, would still need to rely on heuristics and approximations. The optimal computationally tractable algorithms for changing beliefs fall short of probability theory’s consistency.

And yet, knowing we can’t become fully consistent, we can certainly still get better. Knowing that there’s an ideal standard we can compare ourselves to—what researchers call “Bayesian rationality”—can guide us as we improve our thoughts and actions. Though we’ll never be perfect Bayesians, the mathematics of rationality can help us understand why a certain answer is correct, and help us spot exactly where we messed up.

Imagine trying to learn math through rote memorization alone. You might be told that “10 + 3 = 13,” “31 + 108 = 139,” and so on, but it won’t do you a lot of good unless you understand the pattern behind the squiggles. It can be a lot harder to seek out methods for improving your rationality when you don’t have a general framework for judging a method’s success. The purpose of this book is to help people build such frameworks for themselves.

Rationality Applied

In a blog post discussing how rationality-enthusiast “rationalists” differ from anti-empiricist “rationalists,” Scott Alexander observed:[10]

[O]bviously it’s useful to have as much evidence as possible, in the same way it’s useful to have as much money as possible. But equally obviously it’s useful to be able to use a limited amount of evidence wisely, in the same way it’s useful to be able to use a limited amount of money wisely.

Rationality techniques help us get more mileage out of the evidence we have, in cases where the evidence is inconclusive or our biases and attachments are distorting how we interpret the evidence. This applies to our personal lives, as in the tale of Bob. It applies to disagreements between political factions (and between sports fans). And it applies to technological and philosophical puzzles, as in debates over transhumanism, the position that we should use technology to radically refurbish the human condition. Recognizing that the same mathematical rules apply to each of these domains—and that the same cognitive biases in many cases hold sway—How to Actually Change Your Mind draws on a wide range of example problems.

The first sequence of essays in How to Actually Change Your Mind, “Overly Convenient Excuses,” focuses on questions that are as probabilistically clear-cut as questions get. The Bayes-optimal answer is often infeasible to compute, but errors like confirmation bias can take root even in cases where the available evidence is overwhelming and we have plenty of time to think things over.

From there, we move into murkier waters with a sequence on “Politics and Rationality.” Mainstream national politics, as debated by TV pundits, is famous for its angry, unproductive discussions. On the face of it, there’s something surprising about that. Why do we take political disagreements so personally, even when the machinery and effects of national politics are so distant from us in space or in time? For that matter, why do we not become more careful and rigorous with the evidence when we’re dealing with issues we deem important?

The Dartmouth-Princeton game hints at an answer. Much of our reasoning process is really rationalization—storytelling that makes our current beliefs feel more coherent and justified, without necessarily improving their accuracy. “Against Rationalization” speaks to this problem, followed by “Against Doublethink” (on self-deception) and “Seeing with Fresh Eyes” (on the challenge of recognizing evidence that doesn’t fit our expectations and assumptions).

Leveling up in rationality means encountering a lot of interesting and powerful new ideas. In many cases, it also means making friends who you can bounce ideas off of and finding communities that encourage you to better yourself. “Death Spirals” discusses some important hazards that can afflict groups united around common interests and amazing shiny ideas, which will need to be overcome if we’re to get the full benefits out of rationalist communities. How to Actually Change Your Mind then concludes with a sequence on “Letting Go.”

Our natural state isn’t to change our minds like a Bayesian would. Getting the Dartmouth and Princeton students to notice what they’re really seeing won’t be as easy as reciting the axioms of probability theory to them. As Luke Muehlhauser writes, in The Power of Agency:[11]

You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases.

You just are cognitive biases.

Confirmation bias, status quo bias, correspondence bias, and the like are not tacked on to our reasoning; they are its very substance.

That doesn’t mean that debiasing is impossible. We aren’t perfect calculators underneath all our arithmetic errors, either. Many of our mathematical limitations result from very deep facts about how the human brain works. Yet we can train our mathematical abilities; we can learn when to trust and distrust our mathematical intuitions, and share our knowledge, and help one another; we can shape our environments to make things easier on us, and build tools to offload much of the work.

Our biases are part of us. But there is a shadow of Bayesianism present in us as well, a flawed apparatus that really can bring us closer to truth. No homunculus—but still, some truth. Enough, perhaps, to get started.

1. Robin Hanson, “You Are Never Entitled to Your Opinion,” Overcoming Bias (blog) (2006), http://www.overcomingbias.com/2006/12/you_are_never_e.html.

2. This follows from the assumption that there are six possibilities and you have no reason to favor one of them over any of the others. We’re also assuming, unrealistically, that you can really be certain the admirer is one of those six people, and that you aren’t neglecting other possibilities. (What if more than one of the six people has a crush on you?)

3. Albert Hastorf and Hadley Cantril, “They Saw a Game: A Case Study,” Journal of Abnormal and Social Psychology 49 (1954): 129–134, http://www2.psych.ubc.ca/~schaller/Psyc590Readings/Hastorf1954.pdf.

4. Emily Pronin, “How We See Ourselves and How We See Others,” Science 320 (2008): 1177–1180, http://psych.princeton.edu/psychology/research/pronin/pubs/2008%20Self%20and%20Other.pdf.

5. Robert P. Vallone, Lee Ross, and Mark R. Lepper, “The Hostile Media Phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the Beirut Massacre,” Journal of Personality and Social Psychology 49 (1985): 577–585, http://ssc.wisc.edu/~jpiliavi/965/hwang.pdf.

6. Hugo Mercier and Dan Sperber, “Why Do Humans Reason? Arguments for an Argumentative Theory,” Behavioral and Brain Sciences 34 (2011): 57–74, https://hal.archives-ouvertes.fr/file/index/docid/904097/filename/MercierSperberWhydohumansreason.pdf.

7. Richard E. Nisbett and Timothy D. Wilson, “Telling More than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84 (1977): 231–259, http://people.virginia.edu/~tdw/nisbett&wilson.pdf.

8. Eric Schwitzgebel, Perplexities of Consciousness (MIT Press, 2011).

9. Jonathan Haidt, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review 108, no. 4 (2001): 814–834, doi:10.1037/0033-295X.108.4.814.

10. Scott Alexander, “Why I Am Not Rene Descartes,” Slate Star Codex (blog) (2014), http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/.

11. Luke Muehlhauser, “The Power of Agency,” Less Wrong (blog) (2011), http://lesswrong.com/lw/5i8/the_power_of_agency/.