Rationality: Appreciating Cognitive Algorithms

Followup to: The Useful Idea of Truth

It is an error mode, and indeed an annoyance mode, to go about preaching the importance of the “Truth”, especially if the Truth is supposed to be something incredibly lofty instead of some boring, mundane truth about gravity or rainbows or what your coworker said about your manager.

Thus it is a worthwhile exercise to practice deflating the word ‘true’ out of any sentence in which it appears. (Note that this is a special case of rationalist taboo.) For example, instead of saying, “I believe that the sky is blue, and that’s true!” you can just say, “The sky is blue”, which conveys essentially the same information about what color you think the sky is. Or if it feels different to say “I believe the Democrats will win the election!” than to say, “The Democrats will win the election”, this is an important warning of belief-alief divergence.

Try it with these:

  • I believe Jess just wants to win arguments.

  • It’s true that you weren’t paying attention.

  • I believe I will get better.

  • In reality, teachers care a lot about students.

If ‘truth’ is defined by an infinite family of sentences like ‘The sentence “the sky is blue” is true if and only if the sky is blue’, then why would we ever need to talk about ‘truth’ at all?

Well, you can’t deflate ‘truth’ out of the sentence “True beliefs are more likely to make successful experimental predictions” because it states a property of map-territory correspondences in general. You could say ‘accurate maps’ instead of ‘true beliefs’, but you would still be invoking the same concept.

It’s only because most sentences containing the word ‘true’ are not talking about map-territory correspondences in general that most such sentences can be deflated.

Now consider—when are you forced to use the word ‘rational’?

As with the word ‘true’, there are very few sentences that truly need to contain the word ‘rational’ in them. Consider the following deflations, all of which convey essentially the same information about your own opinions:

  • “It’s rational to believe the sky is blue”
    → “I think the sky is blue”
    → “The sky is blue”

  • “Rational Dieting: Why To Choose Paleo”
    → “Why you should think the paleo diet has the best consequences for health”
    → “I like the paleo diet”

Generally, when people bless something as ‘rational’, you could directly substitute the word ‘optimal’ with no loss of content—or in some cases the words ‘true’ or ‘believed-by-me’, if we’re talking about a belief rather than a strategy.

Try it with these:

  • “It’s rational to teach your children calculus.”

  • “I think this is the most rational book ever.”

  • “It’s rational to believe in gravity.”

Meditation: Under what rare circumstances can you not deflate the word ‘rational’ out of a sentence?


Reply: We need the word ‘rational’ in order to talk about cognitive algorithms or mental processes with the property “systematically increases map-territory correspondence” (epistemic rationality) or “systematically finds a better path to goals” (instrumental rationality).


“It’s (epistemically) rational to believe more in hypotheses that make successful experimental predictions.”


“Chasing sunk costs is (instrumentally) irrational.”

You can’t deflate the concept of rationality out of the intended meaning of those sentences. You could find some way to rephrase them without the word ‘rational’; but then you’d have to use other words describing the same concept, e.g.:

“If you believe more in hypotheses that make successful predictions, your map will better correspond to reality over time.”


“If you chase sunk costs, you won’t achieve your goals as well as you could otherwise.”

The word ‘rational’ is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.

Similarly, a rationalist isn’t just somebody who respects the Truth.

All too many people respect the Truth.

They respect the Truth that the U.S. government planted explosives in the World Trade Center, the Truth that the stars control human destiny (ironically, the exact reverse will be true if everything goes right), the Truth that global warming is a lie… and so it goes.

A rationalist is somebody who respects the processes of finding truth. They respect somebody who seems to be showing genuine curiosity, even if that curiosity is about a should-already-be-settled issue like whether the World Trade Center was brought down by explosives, because genuine curiosity is part of a lovable algorithm and respectable process. They respect Stuart Hameroff for trying to test whether neurons have properties conducive to quantum computing, even if this idea seems exceedingly unlikely a priori and was suggested by awful Gödelian arguments about why brains can’t be mechanisms, because Hameroff was trying to test his wacky beliefs experimentally, and humanity would still be living on the savanna if ‘wacky’ beliefs never got tested experimentally.

Or consider the controversy over the way CSICOP (Committee for Skeptical Investigation of Claims of the Paranormal) handled the so-called Mars effect, the controversy which led founder Dennis Rawlins to leave CSICOP. Does the position of the planet Mars in the sky during your hour of birth actually have an effect on whether you’ll become a famous athlete? I’ll go out on a limb and say no. And if you only respect the Truth, then it doesn’t matter very much whether CSICOP raised the goalposts on the astrologer Gauquelin—i.e., stated a test and then made up new reasons to reject the results after Gauquelin’s result came out positive. The astrological conclusion is almost certainly un-true… and that conclusion was indeed derogated, the Truth upheld.

But a rationalist is disturbed by the claim that there were rational process violations. As a Bayesian, in a case like this you do update to a very small degree in favor of astrology, just not enough to overcome the prior odds; and you update to a larger degree that Gauquelin has inadvertently uncovered some other phenomenon that might be worth tracking down. One definitely shouldn’t state a test and then ignore the results, or find new reasons the test is invalid, when the results don’t come out your way. That process has bad systematic properties for finding truth—and a rationalist doesn’t just appreciate the beauty of the Truth, but the beauty of the processes and cognitive algorithms that get us there.[1]
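The shape of that “small update” can be made concrete with a quick Bayesian sketch. All the numbers below are made up purely for illustration; nothing here reflects the actual evidential strength of the Mars-effect data:

```python
# A minimal Bayesian update: a positive result shifts a tiny prior
# upward by roughly the likelihood ratio, but nowhere near enough
# to overcome the prior odds. Numbers are illustrative assumptions.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) by Bayes' theorem."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / denominator

prior = 1e-6  # hypothetical: astrology starts out extremely improbable
post = posterior(prior,
                 p_e_given_h=0.5,     # positive result fairly likely if true
                 p_e_given_not_h=0.05)  # or a fluke / other phenomenon

# The posterior rises about tenfold (the likelihood ratio),
# yet stays near 1e-5 -- a real update, and still a near-certain "no".
print(prior, post)
```

The point of the sketch is the asymmetry the paragraph describes: the evidence genuinely moves you, just not very far, while the same evidence assigns much more weight to mundane alternative explanations with higher priors.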

The reason why rationalists can have unusually productive and friendly conversations, at least when everything goes right, is not that everyone involved has a great and abiding respect for whatever they think is the True or the Optimal in any given moment. Under most everyday conditions, people who argue heatedly aren’t doing so because they know the truth but disrespect it. Rationalist conversations are (potentially) more productive to the degree that everyone respects the process, and is on mostly the same page about what the process should be, thanks to all that explicit study of things like cognitive psychology and probability theory. When Anna tells me, “I’m worried that you don’t seem very curious about this,” there’s this state of mind called ‘curiosity’ that we both agree is important—as a matter of rational process, on a meta-level above the particular issue at hand—and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.

Is rationality-use necessarily tied to rationality-appreciation? I can imagine a world filled with hordes of rationality-users who were taught in school to use the Art competently, even though only very few people love the Art enough to try to advance it further; and everyone else has no particular love or interest in the Art apart from the practical results it brings. Similarly, I can imagine a competent applied mathematician who only worked at a hedge fund for the money, and had never loved math or programming or optimization in the first place—who’d been in it for the money from day one. I can imagine a competent musician who had no particular love of composition or joy in music, and who only cared for the album sales and groupies. Just because something is imaginable doesn’t make it probable in real life… but then there are many children who learn to play the piano despite having no love for it; “musicians” are those who are unusually good at it, not the adequately-competent.

But for now, in this world where the Art is not yet forcibly impressed on schoolchildren nor yet explicitly rewarded in a standard way on standard career tracks, almost everyone who has any skill at rationality is the sort of person who finds the Art intriguing for its own sake. Which—perhaps unfortunately—explains quite a bit, both about rationalist communities and about the world.

[1] RationalWiki really needs to rename itself to SkepticWiki. They’re very interested in kicking hell out of homeopathy, but not as a group interested in the abstract beauty of questions like “What trials should a strange new hypothesis undergo, which it will not fail if the hypothesis is true?” You can go to them and be like, “You’re criticizing theory X because some people who believe in it are stupid; but many true theories have stupid believers, like how Deepak Chopra claims to be talking about quantum physics; so this is not a useful method in general for discriminating true and false theories” and they’ll be like, “Ha! So what? Who cares? X is crazy!” I think it was actually RationalWiki which first observed that it and Less Wrong ought to swap names.

(Mainstream status here.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: “Firewalling the Optimal from the Rational”

Previous post: “Skill: The Map is Not the Territory”