What Do We Mean By “Rationality”?

I mean two things:

1. Epistemic rationality: systematically improving the accuracy of your beliefs.
2. Instrumental rationality: systematically achieving your values.

The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.

This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.1

Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

So rationality is about forming true beliefs and making decisions that help you win.

(Where truth doesn’t mean “certainty,” since we can do plenty to increase the probability that our beliefs are accurate even though we’re uncertain; and winning doesn’t mean “winning at others’ expense,” since our values include everything we care about, including other people.)

When people say “X is rational!” it’s usually just a more strident way of saying “I think X is true” or “I think X is good.” So why have an additional word for “rational” as well as “true” and “good”?

An analogous argument can be given against using “true.” There is no need to say “it is true that snow is white” when you could just say “snow is white.” What makes the idea of truth useful is that it allows us to talk about the general features of map-territory correspondence. “True models usually produce better experimental predictions than false models” is a useful generalization, and it’s not one you can make without using a concept like “true” or “accurate.”

Similarly, “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” is the kind of thought that depends on a concept of (instrumental) rationality, whereas “It’s rational to eat vegetables” can probably be replaced with “It’s useful to eat vegetables” or “It’s in your interest to eat vegetables.” We need a concept like “rational” in order to note general facts about those ways of thinking that systematically produce truth or value—and the systematic ways in which we fall short of those standards.
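The phrase “maximize the probabilistic expectation of a coherent utility function” can be made concrete with a toy calculation. All of the numbers, outcomes, and action names below are invented for illustration; the essay specifies none of them.

```python
# Expected utility of an action = sum over outcomes of
#   P(outcome | action) * utility(outcome).
def expected_utility(outcome_probs, utilities):
    """Return the probability-weighted average utility of one action."""
    return sum(outcome_probs[o] * utilities[o] for o in outcome_probs)

# Invented utilities and conditional probabilities:
utilities = {"healthy": 10, "sick": -5}
actions = {
    "eat vegetables":  {"healthy": 0.9, "sick": 0.1},  # EU = 8.5
    "skip vegetables": {"healthy": 0.6, "sick": 0.4},  # EU = 4.0
}

# The instrumentally rational choice is the action with highest
# expected utility -- here, eating vegetables.
best = max(actions, key=lambda a: expected_utility(actions[a], utilities))
```

The point of the sketch is only that “rational” here names a standard (pick the expectation-maximizing action), not any particular menu choice.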

As we’ve observed in the previous essays, experimental psychologists sometimes uncover human reasoning that seems very strange. For example, someone rates the probability “Bill plays jazz” as less than the probability “Bill is an accountant who plays jazz.” This seems like an odd judgment, since any particular jazz-playing accountant is obviously a jazz player. But to what higher vantage point do we appeal in saying that the judgment is wrong?

Experimental psychologists use two gold standards: probability theory, and decision theory.

Probability theory is the set of laws underlying rational belief. The mathematics of probability applies equally to “figuring out where your bookcase is” and “estimating how many hairs were on Julius Caesar’s head,” even though our evidence for the claim “Julius Caesar was bald” is likely to be more complicated and indirect than our evidence for the claim “there’s a bookcase in my room.” It’s all the same problem of how to process the evidence and observations to update one’s beliefs. Similarly, decision theory is the set of laws underlying rational action, and is equally applicable regardless of what one’s goals and available options are.
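Because the update rule is the same whatever the subject matter, a single Bayesian update can serve as a minimal sketch. The priors, likelihoods, and false-positive rates below are made-up numbers, chosen only to contrast strong direct evidence (seeing a bookcase) with weak indirect evidence (historical claims about Caesar).

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# "There's a bookcase in my room": decent prior, very reliable eyes.
p_bookcase = bayes_update(prior=0.9, likelihood=0.99, false_positive_rate=0.05)

# "Julius Caesar was bald": lower prior, much weaker indirect evidence.
# The machinery is identical; only the inputs differ.
p_caesar_bald = bayes_update(prior=0.2, likelihood=0.6, false_positive_rate=0.4)
```

Either way it is the same computation; “more complicated and indirect” evidence shows up only as likelihoods closer to the false-positive rate, which move the posterior less.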

Let “P(such-and-such)” stand for “the probability that such-and-such happens,” and “P(A,B)” for “the probability that both A and B happen.” Since it is a universal law of probability theory that P(A) ≥ P(A,B), the judgment that P(Bill plays jazz) is less than P(Bill plays jazz, Bill is an accountant) is labeled incorrect.
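To see why P(A) ≥ P(A,B) must hold, one can enumerate a toy joint distribution over the two traits (the probabilities here are invented): the marginal probability of “plays jazz” necessarily includes the jazz-playing-accountant cases as a subset.

```python
# Invented joint distribution over (is accountant, plays jazz) for "Bill".
# The four probabilities sum to 1, as any coherent distribution must.
joint = {
    (True,  True):  0.03,   # accountant who plays jazz
    (True,  False): 0.27,   # accountant, no jazz
    (False, True):  0.05,   # jazz player, not an accountant
    (False, False): 0.65,   # neither
}

# P(A): marginalize out the "accountant" trait.
p_jazz = sum(p for (acct, jazz), p in joint.items() if jazz)

# P(A, B): just the conjunction cell.
p_jazz_and_acct = joint[(True, True)]

# The marginal always contains the conjunction, so for ANY coherent
# distribution P(A) >= P(A, B) -- judging otherwise is the error.
assert p_jazz >= p_jazz_and_acct
```

No choice of numbers can reverse the inequality, which is exactly why the experimental subjects’ judgment can be labeled incorrect rather than merely unusual.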

To keep it technical, you would say that this probability judgment is non-Bayesian. Beliefs that conform to a coherent probability distribution, and decisions that maximize the probabilistic expectation of a coherent utility function, are called “Bayesian.”

I should emphasize that this isn’t the notion of rationality that’s common in popular culture. People may use the same string of sounds, “ra-tio-nal,” to refer to “acting like Mr. Spock of Star Trek” and “acting like a Bayesian”; but this doesn’t mean that acting Spock-like helps one whit with epistemic or instrumental rationality.2

All of this does not quite exhaust the problem of what is meant in practice by “rationality,” for two major reasons:

First, the Bayesian formalisms in their full form are computationally intractable on most real-world problems. No one can actually calculate and obey the math, any more than you can predict the stock market by calculating the movements of quarks.

This is why there is a whole site called “Less Wrong,” rather than a single page that simply states the formal axioms and calls it a day. There’s a whole further art to finding the truth and accomplishing value from inside a human mind: we have to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, get ourselves into good emotional shape to confront the truth and do what needs doing, et cetera, et cetera.

Second, sometimes the meaning of the math itself is called into question. The exact rules of probability theory are called into question by, e.g., anthropic problems in which the number of observers is uncertain. The exact rules of decision theory are called into question by, e.g., Newcomblike problems in which other agents may predict your decision before it happens.3

In cases where our best formalizations still come up short, we can return to simpler ideas like “truth” and “winning.” If you are a scientist just beginning to investigate fire, it might be a lot wiser to point to a campfire and say “Fire is that orangey-bright hot stuff over there,” rather than saying “I define fire as an alchemical transmutation of substances which releases phlogiston.” You certainly shouldn’t ignore something just because you can’t define it. I can’t quote the equations of General Relativity from memory, but nonetheless if I walk off a cliff, I’ll fall. And we can say the same of cognitive biases and other obstacles to truth—they won’t hit any less hard if it turns out we can’t define compactly what “irrationality” is.

In cases like these, it is futile to try to settle the problem by coming up with some new definition of the word “rational” and saying, “Therefore my preferred answer, by definition, is what is meant by the word ‘rational.’ ” This simply raises the question of why anyone should pay attention to your definition. I’m not interested in probability theory because it is the holy word handed down from Laplace. I’m interested in Bayesian-style belief-updating (with Occam priors) because I expect that this style of thinking gets us systematically closer to, you know, accuracy, the map that reflects the territory.

And then there are questions of how to think that seem not quite answered by either probability theory or decision theory—like the question of how to feel about the truth once you have it. Here, again, trying to define “rationality” a particular way doesn’t support an answer, but merely presumes one.

I am not here to argue the meaning of a word, not even if that word is “rationality.” The point of attaching sequences of letters to particular concepts is to let two people communicate—to help transport thoughts from one mind to another. You cannot change reality, or prove the thought, by manipulating which meanings go with which words.

So if you understand what concept I am generally getting at with this word “rationality,” and with the sub-terms “epistemic rationality” and “instrumental rationality,” we have communicated: we have accomplished everything there is to accomplish by talking about how to define “rationality.” What’s left to discuss is not what meaning to attach to the syllables “ra-tio-na-li-ty”; what’s left to discuss is what is a good way to think.

If you say, “It’s (epistemically) rational for me to believe X, but the truth is Y,” then you are probably using the word “rational” to mean something other than what I have in mind. (E.g., “rationality” should be consistent under reflection—“rationally” looking at the evidence, and “rationally” considering how your mind processes the evidence, shouldn’t lead to two different conclusions.)

Similarly, if you find yourself saying, “The (instrumentally) rational thing for me to do is X, but the right thing for me to do is Y,” then you are almost certainly using some other meaning for the word “rational” or the word “right.” I use the term “rationality” normatively, to pick out desirable patterns of thought.

In this case—or in any other case where people disagree about word meanings—you should substitute more specific language in place of “rational”: “The self-benefiting thing to do is to run away, but I hope I would at least try to drag the child off the railroad tracks,” or “Causal decision theory as usually formulated says you should two-box on Newcomb’s Problem, but I’d rather have a million dollars.”

In fact, I recommend reading back through this essay, replacing every instance of “rational” with “foozal,” and seeing if that changes the connotations of what I’m saying any. If so, I say: strive not for rationality, but for foozality.

The word “rational” has potential pitfalls, but there are plenty of non-borderline cases where “rational” works fine to communicate what I’m getting at. Likewise “irrational.” In these cases I’m not afraid to use it.

Yet one should be careful not to overuse that word. One receives no points merely for pronouncing it loudly. If you speak overmuch of the Way, you will not attain it.

1For a longer discussion of truth, see “The Simple Truth” at the very end of this volume.

2The idea that rationality is about strictly privileging verbal reasoning over feelings is a case in point. Bayesian rationality applies to urges, hunches, perceptions, and wordless intuitions, not just to assertions.

I gave the example of opening your eyes, looking around you, and building a mental model of a room containing a bookcase against the wall. The modern idea of rationality is general enough to include your eyes and your brain’s visual areas as things-that-map, and to include instincts and emotions in the belief-and-goal calculus.

3For an informal statement of Newcomb’s Problem, see Jim Holt, “Thinking Inside the Boxes,” Slate, 2002, http://www.slate.com/articles/arts/egghead/2002/02/thinkinginside_the_boxes.single.html.