Should rationality be a movement?

This post is a quick write-up of a discussion that I recently had with two members of the rationality community. For simplicity, I'll present them as holding a single viewpoint that merges both their arguments. All parties seemed to be in agreement about the long-term future being an overwhelming consideration, so apologies in advance to anyone with a different opinion.

In a recent discussion, I noted that the rationality community didn't have an organisation like CEA engaging in movement building, and suggested this might be at least partially why EA seemed to be much more successful than the rationality community. While the rationality community has founded MIRI and CFAR, I pointed out that there were now so many EA-aligned organisations that it's impossible to keep track. EA runs conferences where hundreds of people attend, with more on the waitlist, while LW doesn't even have a conference in its hometown. EA has groups at the most prominent universities, while LW has almost none. Further, EA now has its own university department at Oxford and the support of OpenPhil, a multi-billion-dollar organisation. Admittedly, Scott Alexander grew out of the rationality community, but EA has 80,000 Hours. I also noted that EA had created a large number of people who wanted to become AI safety researchers; indeed, at some EA conferences it felt like half the people there were interested in pursuing that path.

Based on this comparison, EA seems to have been far more successful. However, the other two suggested that appearances could be misleading and that it therefore wasn't so obvious that rationality should be a movement at all. In particular, they argued that most of the progress made so far in terms of AI safety didn't come from anything "mass-movement-y".

For example, they claimed:

  • Slatestarcodex has been given enthusiastic praise by many leading intellectuals who may go on to influence how others think. This is the work of just one man, who has intentionally tried to limit the growth of the community around it.

  • Eliezer Yudkowsky was more influential than EA on Nick Bostrom's Superintelligence. This book seems to have played a large role in convincing more academic types to take this viewpoint more seriously. Neither Yudkowsky's work on Less Wrong nor Superintelligence is designed for a casual audience.

  • They argued that CFAR played a crucial role in developing an individual who helped found the Future of Life Institute. This institute ran the Asilomar Conference, which kicked off a wave of AI safety research.

  • They claimed that even though 80,000 Hours had access to a large pool of EAs, they hadn't provided any researchers to OpenPhil, only people filling other roles like operations. In contrast, they argued that CFAR mentors and alumni made up around 50% of OpenPhil's recent hires, and that CFAR likely deserved some level of credit for this.

Part of their argument was that quality is more important than quantity for research problems like safe AI. In particular, they asked whether a small team of the most elite researchers was more likely to succeed in revolutionising science or building a nuclear bomb than a much larger group of science enthusiasts.

My (partially articulated) position was that it was too early to expect too much. I argued that even though most EAs interested in AI were just enthusiasts, some percentage of this very large number of EAs would go on to become successful researchers. Further, I argued that we should expect this impact to be significantly positive unless there was a good reason to believe that a large proportion of EAs would act in strongly net-negative ways.

The counterargument given was that I had underestimated the difficulty of usefully contributing to AI safety research, and that the percentage who could usefully contribute would be much smaller than I anticipated. If this were the case, then engaging in more targeted outreach would be more useful than building up a mass movement.

I argued that more EAs had a chance of becoming highly skilled researchers than they thought. I said that this was not just because EAs tended to be reasonably intelligent, but also because they tended to be much better than average at engaging in good-faith discussion, were more exposed to content around strategy/prioritisation, and benefited from network effects.

The first part of their response was to argue that by being a movement, EA had ended up compromising on its commitment to truth, as follows:

i) EA's focus on having an impact entails growing the movement, which entails protecting EA's reputation and attempting to gain social status.

ii) This causes EA to prioritise building relationships with high-status people, such as offering them major speaking slots at EA conferences, even when they aren't particularly rigorous thinkers.

iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to. In particular, they argued that this had a chilling effect on EA and caused people to speak in a much more guarded way.

iv) By acquiring resources and status, EA had drawn the attention of people who were interested in these resources rather than in the mission of EA. These people would damage EA's epistemic norms by attempting to shift the outcomes of truth-finding processes towards outcomes that would benefit them.

They then argued that, despite the reasons I had pointed out for believing that EAs could be successful AI safety researchers, most were lacking a crucial component: a deep commitment to actually fixing the issue, as opposed to merely seeming to be attempting to fix it. They believed that EA wasn't the right kind of environment for developing people like this, and that without this attribute most of the work people engaged in would end up being essentially pointless.

Originally I listed another point here, but I've removed it since it wasn't relevant to this particular debate, belonging instead to a second, simultaneous debate about whether CEA was an effective organisation. I believe that the discussion of this topic ended here. I hope that I have represented the position of the people I was talking to fairly, and I apologise in advance if I've made any mistakes.