What Do We Mean By “Rationality”?

We mean:

  1. Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

  2. Instrumental rationality: achieving your values. Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.

If that seems like a perfectly good definition, you can stop reading here; otherwise continue.

Sometimes experimental psychologists uncover human reasoning that seems very strange—for example, someone rates the probability “Bill plays jazz” as less than the probability “Bill is an accountant who plays jazz”. This seems like an odd judgment, since any particular jazz-playing accountant is obviously a jazz player. But to what higher vantage point do we appeal in saying that the judgment is wrong?

Experimental psychologists use two gold standards: probability theory, and decision theory. Since it is a universal law of probability theory that P(A) ≥ P(A & B), the judgment P(“Bill plays jazz”) < P(“Bill plays jazz” & “Bill is an accountant”) is labeled incorrect.
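
To see why, here is a minimal sketch of the conjunction rule with made-up numbers for Bill; any assignment that obeys the axioms gives the same ordering, because the product rule forces P(A & B) = P(A) · P(B | A) ≤ P(A):

    # A sketch of the conjunction rule, using made-up probabilities for Bill.
    p_accountant = 0.3               # hypothetical P(Bill is an accountant)
    p_jazz_given_accountant = 0.1    # hypothetical P(plays jazz | accountant)
    p_jazz_given_other = 0.05        # hypothetical P(plays jazz | not an accountant)

    # Product rule: P(jazz & accountant) = P(accountant) * P(jazz | accountant).
    p_jazz_and_accountant = p_accountant * p_jazz_given_accountant

    # Total probability: every jazz player is either an accountant or not.
    p_jazz = p_jazz_and_accountant + (1 - p_accountant) * p_jazz_given_other

    assert p_jazz_and_accountant <= p_jazz    # P(A & B) can never exceed P(A)
    print(round(p_jazz_and_accountant, 3), round(p_jazz, 3))   # 0.03 0.065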

To keep it technical, you would say that this probability judgment is non-Bayesian. Beliefs that conform to a coherent probability distribution, and decisions that maximize the probabilistic expectation of a coherent utility function, are called “Bayesian”.
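
As a minimal sketch of what those two standards look like together (every number below, the prior, the likelihoods, and the utilities, is an illustrative assumption rather than anything the theory prescribes), beliefs are updated by Bayes’ rule and the chosen action is the one that maximizes expected utility under the updated belief:

    # A sketch of a Bayesian belief update followed by an expected-utility choice.
    # Every number here (prior, likelihoods, utilities) is an illustrative assumption.
    prior_rain = 0.3                  # P(rain)
    p_clouds_given_rain = 0.9         # P(clouds | rain)
    p_clouds_given_dry = 0.2          # P(clouds | no rain)

    # Bayes' rule: P(rain | clouds) = P(clouds | rain) * P(rain) / P(clouds).
    p_clouds = (p_clouds_given_rain * prior_rain
                + p_clouds_given_dry * (1 - prior_rain))
    posterior_rain = p_clouds_given_rain * prior_rain / p_clouds

    # Utilities for each (action, outcome) pair, again made up for illustration.
    utility = {("umbrella", "rain"): 0,     ("umbrella", "dry"): -1,
               ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 1}

    def expected_utility(action):
        return (posterior_rain * utility[(action, "rain")]
                + (1 - posterior_rain) * utility[(action, "dry")])

    # The "Bayesian" choice is whichever action maximizes expected utility.
    best = max(["umbrella", "no umbrella"], key=expected_utility)
    print(round(posterior_rain, 3), best)   # 0.659 umbrella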

This does not quite exhaust the problem of what is meant in practice by “rationality”, for two major reasons:

First, the Bayesian formalisms in their full form are computationally intractable on most real-world problems. No one can actually calculate and obey the math, any more than you can predict the stock market by calculating the movements of quarks.

This is why we have a whole site called “Less Wrong”, rather than simply stating the formal axioms and being done. There’s a whole further art to finding the truth and accomplishing value from inside a human mind: we have to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, get ourselves into good emotional shape to confront the truth and do what needs doing, etcetera etcetera and so on.

Second, sometimes the meaning of the math itself is called into question. The exact rules of probability theory are called into question by e.g. anthropic problems in which the number of observers is uncertain. The exact rules of decision theory are called into question by e.g. Newcomblike problems in which other agents may predict your decision before it happens.

In cases like these, it is futile to try to settle the problem by coming up with some new definition of the word “rational”, and saying, “Therefore my preferred answer, by definition, is what is meant by the word ‘rational’.” This simply begs the question of why anyone should pay attention to your definition. We aren’t interested in probability theory because it is the holy word handed down from Laplace. We’re interested in Bayesian-style belief-updating (with Occam priors) because we expect that this style of thinking gets us systematically closer to, you know, accuracy, the map that reflects the territory. (More on the futility of arguing “by definition” here and here.)

And then there are questions of “How to think” that seem not quite answered by either probability theory or decision theory—like the question of how to feel about the truth once we have it. Here again, trying to define “rationality” a particular way doesn’t support an answer, merely presumes one.

From the Twelve Virtues of Rationality:

How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.

Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.

You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.

We are not here to argue the meaning of a word, not even if that word is “rationality”. The point of attaching sequences of letters to particular concepts is to let two people communicate—to help transport thoughts from one mind to another. You cannot change reality, or prove the thought, by manipulating which meanings go with which words.

So if you understand what concept we are generally getting at with this word “rationality”, and with the sub-terms “epistemic rationality” and “instrumental rationality”, we have communicated: we have accomplished everything there is to accomplish by talking about how to define “rationality”. What’s left to discuss is not what meaning to attach to the syllables “ra-tio-na-li-ty”; what’s left to discuss is what is a good way to think.

With that said, you should be aware that many of us will regard as controversial—at the very least—any construal of “rationality” that makes it non-normative:

For example, if you say, “The rational belief is X, but the true belief is Y” then you are probably using the word “rational” in a way that means something other than what most of us have in mind. (E.g. some of us expect “rationality” to be consistent under reflection—“rationally” looking at the evidence, and “rationally” considering how your mind processes the evidence, shouldn’t lead to two different conclusions.) Similarly, if you find yourself saying “The rational thing to do is X, but the right thing to do is Y” then you are almost certainly using one of the words “rational” or “right” in a way that a huge chunk of readers won’t agree with.

In this case—or in any other case where controversy threatens—you should substitute more specific language: “The self-benefiting thing to do is to run away, but I hope I would at least try to drag the girl off the railroad tracks” or “Causal decision theory as usually formulated says you should two-box on Newcomb’s Problem, but I’d rather have a million dollars.”
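
The Newcomb half of that sentence can be made concrete with a back-of-the-envelope expected-value comparison that conditions on your own choice (which is precisely the step causal decision theorists dispute); the 90% predictor accuracy below is an illustrative assumption, while the $1,000,000 and $1,000 payoffs are the standard statement of the problem:

    # A sketch of expected payoffs in Newcomb's Problem, conditioning on your choice.
    # The 90% predictor accuracy is an assumed figure; the payoffs are the standard ones.
    accuracy = 0.9
    million, thousand = 1_000_000, 1_000

    # One-boxing: the opaque box holds the million iff the predictor foresaw one-boxing.
    ev_one_box = accuracy * million

    # Two-boxing: you always get the visible thousand; the million is there only
    # if the predictor wrongly expected you to one-box.
    ev_two_box = accuracy * thousand + (1 - accuracy) * (million + thousand)

    print(round(ev_one_box), round(ev_two_box))   # 900000 101000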

“X is rational!” is usually just a more strident way of saying “I think X is true” or “I think X is good”. So why have an additional word for “rational” as well as “true” and “good”? Because we want to talk about systematic methods for obtaining truth and winning.

The word “rational” has potential pitfalls, but there are plenty of non-borderline cases where “rational” works fine to communicate what one is getting at, likewise “irrational”. In these cases we’re not afraid to use it.

Yet one should also be careful not to overuse that word. One receives no points merely for pronouncing it loudly. If you speak overmuch of the Way you will not attain it.