Epistle to the New York Less Wrongians

(At the suggestion and request of Tom McCabe, I’m posting the essay that I sent to the New York LW group after my first visit there, and before the second visit:)

Having some kind of global rationalist community come into existence seems like a quite extremely good idea. The NYLW group is the forerunner of that, the first group of LW-style rationalists to form a real community, and to confront the challenges involved in staying on track while growing as a community.

“Stay on track toward what?” you ask, and my best shot at describing the vision is as follows:

“Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us.”

(That last part is something I only realized was Really Important after visiting New York.)

Michael Vassar says he’s worried that you might be losing track of the “rationality” and “world optimization” parts of this—that people might be wondering what sort of benefit “rationality” delivers as opposed to, say, paleo dieting. (Note—less worried about this now that I’ve met the group in person. -EY.)

I admit that the original Less Wrong sequences did not heavily emphasize the benefits for everyday life (as opposed to solving ridiculously hard scientific problems). This is something I plan to fix with my forthcoming book—along with the problem where the key info is scattered over six hundred blog posts that only truly dedicated people and/or serious procrastinators can find the time to read.

But I really don’t think the whole rationality/fun association you’ve got going—my congratulations on pulling that off, by the way, it’s damned impressive—is something that can (let alone should) be untangled. Most groups of people capable of becoming enthusiastic about strange new nonconformist ways of living their lives would have started trying to read each other’s auras by now. Rationality is the master lifehack which distinguishes which other lifehacks to use.

The way an LW-rationality meetup usually gets started is that there is a joy of being around reasonable people—a joy that comes, in a very direct way, from those people caring about what’s true and what’s effective, and being able to reflect on more than their first impulse to see whether it makes sense. You wouldn’t want to lose that either.

But the thing about effective rationality is that you can also use it to distinguish truth from falsehood, and realize that the best methods aren’t always the ones everyone else is using; and you can start assembling a pool of lifehacks that doesn’t include homeopathy. You become stronger, and that makes you start thinking that you can also help other people become stronger. Through the systematic accumulation of good ideas and the rejection of bad ideas, you can get so awesome that even other people notice, and this means that you can start attracting a new sort of person, one who starts out wanting to become awesome instead of being attracted specifically to the rationality thing. This is fine in theory, since indeed the Art must have a purpose higher than itself or it collapses into infinite recursion. But some of these new recruits may be a bit skeptical, at first, that all this “rationality” stuff is really contributing all that much to the awesome.

Real life is not a morality tale, and I don’t know if I’d prophesy that the instant you get too much awesome and not enough rationality, the group will be punished for that sin by going off and trying to read auras. But I think I would prophesy that if you got too large and insufficiently reasonable, and if you lost sight of your higher purposes and your dreams of world optimization, the first major speedbump you hit would splinter the group. (There will be some speedbump, though I don’t know what it will be.)

Rationality isn’t just about knowing about things like Bayes’s Theorem. It’s also about:

  • Saying oops and changing your mind occasionally.

  • Knowing that clever arguing isn’t the same as looking for truth.

  • Actually paying attention to what succeeds and what fails, instead of just being driven by your internal theories.

  • Reserving your self-congratulations for the occasions when you actually change a policy or belief, because while not every change is an improvement, every improvement is a change.

  • Self-awareness—a core rational skill, but at the same time, a caterpillar that spent all day obsessing about being a caterpillar would never become a butterfly.

  • Having enough grasp of evolutionary psychology to realize that this is no longer an eighty-person hunter-gatherer band and that getting into huge shouting matches about Republicans versus Democrats does not actually change very much.

  • Asking whether the beliefs you most cherish shouting about actually control your anticipations, whether they mean anything, never mind whether their predictions are actually correct.

  • Understanding that correspondence bias means that most of your enemies are not inherently evil mutants but rather people who live in a different perceived world than you do. (Albeit of course that some people are selfish bastards and a very few of them are psychopaths.)

  • Being able to accept and consider advice from other people who think you’re doing something stupid, without lashing out at them; and the more you show them this is true, and the more they can trust you not to be offended if they’re frank with you, the better the advice you can get. (Yes, this has a failure mode where insulting other people becomes a status display. But you can also have too much politeness, and it is a traditional strength of rationalists that they sometimes tell each other the truth. Now and then I’ve told college students that they are emitting terrible body odors, and the reply I usually get is that they had no idea and I am the first person ever to suggest this to them.)

  • Comprehending the nontechnical arguments for Aumann’s Agreement Theorem well enough to realize that when two people have common knowledge of a persistent disagreement, something is wrong somewhere—not that you can necessarily do better by automatically agreeing with everyone who persistently disagrees with you; but still, knowing that ideal rational agents wouldn’t just go around yelling at each other all the time.

  • Knowing about scope insensitivity and diminishing marginal returns doesn’t just mean that you donate charitable dollars to “existential risks that few other people are working on”, instead of “The Society For Curing Rare Diseases In Cute Puppies”. It means you know that eating half a chocolate brownie appears as essentially the same pleasurable memory in retrospect as eating a whole brownie, so long as the other half isn’t in front of you and you don’t have the unpleasant memory of exerting willpower not to eat it. (Seriously, I didn’t emphasize all the practical applications of every cognitive bias in the Less Wrong sequences but there are a lot of things like that.)

  • The ability to dissent from conformity; realizing the difficulty and importance of being the first to dissent.

  • Knowing that to avoid pluralistic ignorance everyone should write down their opinion on a sheet of paper before hearing what everyone else thinks.

But then one of the chief surprising lessons I learned, after writing the original Less Wrong sequences, was that if you succeed in teaching people a bunch of amazing stuff about epistemic rationality, this reveals...

(drum roll)

...that, having repaired some of people’s flaws, you can now see more clearly all the other qualities required to be awesome. The most important and notable of these other qualities, needless to say, is Getting Crap Done.

(Those of you reading Methods of Rationality will note that it emphasizes a lot of things that aren’t in the original Less Wrong, such as the virtues of hard work and practice. This is because I have Learned From Experience.)

Similarly, courage isn’t something I emphasized enough in the original Less Wrong (as opposed to MoR), but the thought has since occurred to me that most people can’t do things which require even small amounts of courage. (Leaving NYC, I had two Metrocards with small amounts of remaining value to give away. I felt reluctant to call out anything, or approach anyone and offer them a free Metrocard, and I thought to myself, well, of course I’m reluctant; this task requires a small amount of courage. And then I asked three times before I found someone who wanted them. Not, mind you, that this was an important task in the grand scheme of things—just a little bit of rejection therapy, a little bit of practice in doing things which require small amounts of courage.)

Or there’s Munchkinism, the quality that lets people try out lifehacks that sound a bit weird. A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells. Magic: The Gathering is a Munchkin game, and MoR is a Munchkin story.

It would be really awesome if the New York Less Wrong group figures out how to teach its members hard work and courage and Munchkinism and so on.

It would be even more awesome if you could muster up the energy to track the results in any sort of systematic way so that you can do small-N Science (based on Bayesian likelihoods, thank you, not the usual statistical significance bullhockey) and find out how effective different teaching methods are, or track the effectiveness of other lifehacks as well—the Quantified Self road. This, of course, would require Getting Crap Done; but I do think that in the long run, whether we end up with really effective rationalists is going to depend a lot on whether we can come up with evidence-based metrics for how well a teaching method works, or if we’re stuck in the failure mode of psychoanalysis, where we just go around trying things that sound like good ideas.
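As a concrete illustration of what “small-N Science based on Bayesian likelihoods” could look like (this sketch and its numbers are my own construction, not anything from the essay): compare two teaching methods on binary improved/didn’t-improve outcomes by computing a Bayes factor between “both methods share one success rate” and “the methods have different success rates,” using uniform Beta(1,1) priors and nothing beyond the standard library.

```python
from math import lgamma, exp

def log_marglik(k, n):
    # log marginal likelihood of k successes in n trials under a
    # uniform Beta(1,1) prior on the success rate:
    # log ∫ p^k (1-p)^(n-k) dp = log B(k+1, n-k+1)
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

def bayes_factor(k1, n1, k2, n2):
    # BF > 1 favors "the two methods have different success rates";
    # BF < 1 favors "one shared success rate explains both groups".
    log_different = log_marglik(k1, n1) + log_marglik(k2, n2)
    log_shared = log_marglik(k1 + k2, n1 + n2)
    return exp(log_different - log_shared)

# Hypothetical data: method A helped 7 of 8 members, method B 2 of 8.
print(bayes_factor(7, 8, 2, 8))  # ≈ 10.7, roughly 10:1 evidence of a real difference
```

With samples this small a significance test would be nearly powerless, but the likelihood ratio still says plainly how strongly the data favor one model over the other, and evidence from repeated trials multiplies together. The uniform prior is the simplest defensible choice, not the only one.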

And of course it would be really truly amazingly awesome if some of you became energetic gung-ho intelligent people who can see the world full of low-hanging fruit in front of them, who would go on to form multiple startups which would make millions and billions of dollars. That would also be cool.

But not everyone has to start a startup, not everyone has to be there to Get Stuff Done, it is okay to have Fun. The more of you there are, the more likely it is that any given five of you will want to form a new band, or like the same sort of dancing, or fall in love, or decide to try learning meditation and reporting back to the group on how it went. Growth in general is good. Every added person who’s above the absolute threshold of competence is one more person who can try out new lifehacks, recruit new people, or just be there putting the whole thing on a larger scale and making the group more Fun. On the other hand there is a world out there to optimize, and also the scaling of the group is limited by the number of people who can be organizers (more on this below). There’s a narrow path to walk between “recruit everyone above the absolute threshold who seems like fun” and “recruit people with visibly unusually high potential to do interesting things”. I would suggest making an extra effort to recruit people who seem to have high potential, but not making anything like a rule of it. But if someone not only seems to like explicit rationality and want to learn more, but also seems like a smart executive type who gets things done, perhaps their invitation to a meetup should be prioritized?

So that was the main thing I had to say, but now onward to some other points.

A sensitive issue is what happens when someone can’t reach the absolute threshold of competence. I think the main relevant Less Wrong post on this subject is “Well-Kept Gardens Die By Pacifism.” There are people who cannot be saved—or at least people who cannot be saved by any means currently known to you. And there is a whole world out there to be optimized; sometimes even if a person can be saved, it takes a ridiculous amount of effort that you could better use to save four other people instead. We’ve had similar problems on the West Coast—I would hear about someone who wasn’t Getting Stuff Done, but who seemed to be making amazing strides on self-improvement, and then a month later I would hear the same thing again, and isn’t it remarkable how we keep hearing about so much progress but never about amazing things the person gets done -

(I will parenthetically emphasize that every single useful mental technique I have ever developed over the course of my entire life has been developed in the course of trying to accomplish some particular real task, and none of it is the result of me sitting around and thinking, “Hm, however shall I Improve Myself today?” I should advise a mindset in which making tremendous progress on fixing yourself doesn’t merit much congratulation and only particular deeds actually accomplished are praised; and also that you always have some thing you’re trying to do in the course of any particular project of self-improvement—a target real-world accomplishment to which your self-improvements are a means, not definable in terms of any personality quality unless it is weight loss or words output on a writing project or something else visible and measurable.)

- and the other thing is that trying to save people who cannot be saved can drag down a whole community, because it becomes less Fun, and that means new people don’t want to join.

I would suggest having a known and fixed period of time, like four months, that you are allowed to spend on trying to fix anyone who seems fixable, and if after that their outputs do not exceed their inputs and they are dragging down the Fun level relative to the average group member, fire them. You could maybe have a Special Committee with three people who would decide this—one of the things I pushed for on the West Coast was to have the Board deciding whether to retain people, with nobody else authorized to make promises. There should be no one person who can be appealed to, who can be moved by pity and impulsively say “Yes, you can stay.” Short of having Voldemort do it, the best you can do to reduce pity and mercy is to have the decision made by committee.

And if anyone is making the group less Fun or scaring off new members, and yes this includes being a creep who offends potential heroine recruits, give them an instant ultimatum or just fire them on the spot.

You have to be able to do this. This is not the ancestral environment where there’s only eighty people in your tribe and exiling any one of them is a huge decision that can never be undone. It’s a large world out there and there are literally hundreds of millions of people whom you do not want in your community, at least relative to your current ability to improve them. I’m sorry but it has to be done.

Finally, if you grow much further it may no longer be possible for everyone to meet all the time as a group. I’m not quite sure what to advise about this—splitting up into meetings on particular interests, maybe, but it seems more like the sort of thing where you ought to discuss the problem as thoroughly as possible before proposing any policy solutions. My main advice is that if there’s any separatish group that forms, I am skeptical about its ability to stay on track if there isn’t at least one high-level epistemic rationalist executive type to organize it, someone who not only knows Bayes’s Theorem but who can also Get Things Done. Retired successful startup entrepreneurs would be great for this if you could get them, but smart driven young people might be more mentally flexible and a lot more recruitable, if far less experienced. In any case, I suspect that your ability to grow is going to be ultimately limited by the percentage of members who have the ability to be organizers, and the time to spend organizing, and who’ve also leveled up into good enough rationalists to keep things on track. Implication: make an extra effort to recruit people who can become organizers.

And whenever someone does start doing something interesting with their life, or successfully recruits someone who seems unusually promising, or spends time organizing things, don’t forget to give them a well-deserved cookie.

Finally, remember that the trouble with the exact phrasing of “become awesome”—though it does nicely for a gloss—is that Awesome isn’t a static quality of a person. Awesome is as awesome does.