Firewalling the Optimal from the Rational

Followup to: Rationality: Appreciating Cognitive Algorithms (minor post)

There’s an old anecdote about Ayn Rand, recounted by Michael Shermer in “The Unlikeliest Cult in History” (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:

Branden recalled an evening when a friend of Rand’s remarked that he enjoyed the music of Richard Strauss. “When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, ‘Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.’ Often she did not wait until a friend had left to make such remarks.”

Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword ‘rational’. And one of the ways we do that is by trying to deflate the word ‘rational’ out of sentences, especially in post titles or critical comments, which can live without the word. As you hopefully recall from the previous post, we’re only forced to use the word ‘rational’ when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences. Otherwise the word can be deflated out of the sentence; e.g. “It’s rational to believe in anthropogenic global warming” goes to “Human activities are causing global temperatures to rise”; or “It’s rational to vote for Party X” deflates to “It’s optimal to vote for Party X” or just “I think you should vote for Party X”.

If you’re writing a post comparing the experimental evidence for four different diets, that’s not “Rational Dieting”, that’s “Optimal Dieting”. A post about rational dieting would be one about how the sunk cost fallacy causes people to eat food they’ve already purchased even if they’re not hungry, or about how the typical mind fallacy or the law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is ‘Dieting and the Sunk Cost Fallacy’, unless it’s an overview of four different cognitive biases affecting dieting. In which case a better title would be ‘Four Biases Screwing Up Your Diet’, since ‘Rational Dieting’ carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing factors to keep in mind.

By the same token, a post about GiveWell’s top charities and how they compare to existential-risk mitigation is a post about optimal philanthropy, while a post about scope insensitivity and hedonic returns vs. marginal returns is a post about rational philanthropy, because the first is discussing object-level outcomes while the second is discussing cognitive algorithms. And either way, if you can have a post title that doesn’t include the word “rational”, it’s probably a good idea, because the word gets a little less powerful every time it’s used.

Of course, it’s still a good idea to include concrete examples when talking about general cognitive algorithms. A good writer won’t discuss rational philanthropy without including some discussion of particular charities to illustrate the point. In general, the concrete-abstract writing pattern says that your opening paragraph should be a concrete example of a nonoptimal charity, and only afterward should you generalize to make the abstract point. (That’s why the main post opened with the Ayn Rand anecdote.)

And I’m not saying that we should never have posts about Optimal Dieting on LessWrong. What good is all that rationality if it never leads us to anything optimal?

Nonetheless, the second Go stone placed to block the Objectivist Failure Mode is trying to define ourselves as a community around the cognitive algorithms; and trying to avoid membership tests (especially implicit de facto tests) that aren’t about rational process, but just about some particular thing that a lot of us think is optimal.

Like, say, paleo-inspired diets.

Or having to love particular classical music composers, or hate dubstep, or something. (Does anyone know any good dubstep mixes of classical music, by the way?)

Admittedly, a lot of the utility in practice from any community like this one can and should come from sharing lifehacks. If you go around teaching people methods that they can allegedly use to distinguish good strange ideas from bad strange ideas, and there’s some combination of successfully teaching Cognitive Art: Resist Conformity with the less lofty enhancer We Now Have Enough People Physically Present That You Don’t Feel Nonconformist, that community will inevitably propagate what they believe to be good new ideas that haven’t been mass-adopted by the general population.

When I saw that Patri Friedman was wearing Vibrams (five-toed shoes) and that William Eden (then Will Ryan) was also wearing Vibrams, I got a pair myself to see if they’d work. They didn’t work for me, which thanks to Cognitive Art: Say Oops I was able to admit without much fuss; and so I put my athletic shoes back on again. Paleo-inspired diets haven’t done anything discernible for me, but have helped many other people in the community. Supplementing potassium (citrate) hasn’t helped me much, but works dramatically for Anna, Kevin, and Vassar. Seth Roberts’s “Shangri-La diet”, which was propagating through econblogs, led me to lose twenty pounds that I’ve mostly kept off, and then it mysteriously stopped working…

De facto, I have gotten a noticeable amount of mileage out of imitating things I’ve seen other rationalists do. In principle, this will work better than reading a lifehacking blog to whatever extent rationalist opinion leaders are better able to filter lifehacks—discern better and worse experimental evidence, avoid affective death spirals around things that sound cool, and give up faster when things don’t work. In practice, I myself haven’t gone particularly far into the mainstream lifehacking community, so I don’t know how much of an advantage, if any, we’ve got (so far). My suspicion is that on average lifehackers should know more cool things than we do (by virtue of having invested more time and practice), and have more obviously bad things mixed in (due to only average levels of Cognitive Art: Resist Nonsense).

But strange-to-the-mainstream yet oddly-effective ideas propagating through the community is something that happens if everything goes right. The danger of these things looking weird… is one that I think we just have to bite the bullet on, though opinions on this subject vary between myself and other community leaders.

So a lot of real-world mileage in practice is likely to come out of us imitating each other…

And yet nonetheless, I think it worth naming and resisting that dark temptation to think that somebody can’t be a real community member if they aren’t eating beef livers and supplementing potassium, or if they believe in a collapse interpretation of QM, etcetera. If a newcomer also doesn’t show any particular, noticeable interest in the algorithms and the process, then sure, don’t feed the trolls. It should be another matter if someone seems interested in the process, better yet the math, and has some non-zero grasp of it, and is just coming to different conclusions than the local consensus.

Applied rationality counts for something, indeed; rationality that isn’t applied might as well not exist. And if somebody believes in something really wacky, like Mormonism or that personal identity follows individual particles, you’d expect to eventually find some flaw in reasoning—a departure from the rules—if you trace back their reasoning far enough. But there’s a genuine and open question as to how much you should really assume—how much would be actually true to assume—about the general reasoning deficits of somebody who says they’re Mormon, but who can solve Bayesian problems on a blackboard and explain what Governor Earl Warren was doing wrong and analyze the Amanda Knox case correctly. Robert Aumann (Nobel laureate Bayesian guy) is a believing Orthodox Jew, after all.

But the deeper danger isn’t that of mistakenly excluding someone who’s fairly good at a bunch of cognitive algorithms and still has some blind spots.

The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.

And then a purely metaphorical Ayn Rand starts kicking people out because they like suboptimal music. A sense of you-must-do-X-to-belong is also a kind of Authority.

Not all Authority is bad—probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage. But good Authority should generally be modular; having a sweeping cultural sense of lots and lots of mandatory things is also a failure mode. This is what I think of as the core Objectivist Failure Mode—why the heck is Ayn Rand talking about music?

So let’s all please be conservative about invoking the word ‘rational’, and try not to use it except when we’re talking about cognitive algorithms and thinking techniques. And in general and as a reminder, let’s continue exerting some pressure to adjust our intuitions about belonging-to-LW-ness in the direction of (a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they’re disagreeing with the general consensus.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: “The Fabric of Real Things”

Previous post: “Rationality: Appreciating Cognitive Algorithms”