Should rationality be a movement?

This post is a quick write-up of a discussion I recently had with two members of the rationality community. For simplicity, I'll present them as holding a single viewpoint that merges both of their arguments. All parties seemed to agree that the long-term future is an overwhelming consideration, so apologies in advance to anyone with a different opinion.

In a recent discussion, I noted that the rationality community didn't have an organisation like CEA engaging in movement building, and suggested this might be at least partially why EA seemed to be much more successful than the rationality community. While the rationality community founded MIRI and CFAR, I pointed out that there were now so many EA-aligned organisations that it's impossible to keep track of them all. EA runs conferences that hundreds of people attend, with more on the waitlist, while LW doesn't even have a conference in its hometown. EA has groups at the most prominent universities, while LW has almost none. Further, EA now has its own university department at Oxford and the support of OpenPhil, a multi-billion dollar organisation. Admittedly, Scott Alexander grew out of the rationality community, but EA has 80,000 Hours. I also noted that EA had created a large number of people who wanted to become AI safety researchers; indeed, at some EA conferences it felt like half the attendees were interested in pursuing that path.

Based on this comparison, EA seems to have been far more successful. However, the other two suggested that appearances could be misleading and that it therefore wasn’t so obvious that rationality should be a movement at all. In particular, they argued that most of the progress made so far in terms of AI safety didn’t come from anything “mass-movement-y”.

For example, they claimed:

  • Slatestarcodex has been given enthusiastic praise by many leading intellectuals who may go on to influence how others think. This is the work of just one man, who has intentionally tried to limit the growth of the community around it.

  • Eliezer Yudkowsky was more influential than EA on Nick Bostrom’s Superintelligence. This book seems to have played a large role in convincing more academic types to take this viewpoint more seriously. Neither Yudkowsky’s work on Less Wrong nor Superintelligence is designed for a casual audience.

  • They argued that CFAR played a crucial role in developing an individual who helped found the Future of Life Institute. This institute ran the Asilomar Conference, which kicked off a wave of AI safety research.

  • They claimed that even though 80,000 Hours had access to a large pool of EAs, it hadn’t provided any researchers to OpenPhil, only people filling other roles like operations. In contrast, they argued that CFAR mentors and alumni made up around 50% of OpenPhil’s recent hires and likely deserved some credit for this.

Part of their argument was that quality matters more than quantity for research problems like safe AI. In particular, they asked whether a small team of the most elite researchers would be more likely to succeed at revolutionising science or building a nuclear bomb than a much larger group of science enthusiasts.

My (partially articulated) position was that it was too early to expect too much. I argued that even though most EAs interested in AI were just enthusiasts, some percentage of this very large number would go on to become successful researchers. Further, I argued that we should expect this impact to be significantly positive unless there was a good reason to believe that a large proportion of EAs would act in strongly net-negative ways.

The counterargument given was that I had underestimated the difficulty of usefully contributing to AI safety research, and that the percentage of people who could contribute would be much smaller than I anticipated. If this were the case, then engaging in more targeted outreach would be more useful than building up a mass movement.

I argued that more EAs had a chance of becoming highly skilled researchers than they thought. This was not just because EAs tended to be reasonably intelligent, but also because they tended to be much better than average at engaging in good-faith discussion, to be more exposed to content around strategy/prioritisation, and to benefit from network effects.

The first part of their response was to argue that, by being a movement, EA had ended up compromising on its commitment to truth, as follows:

i) EA’s focus on having an impact entails growing the movement, which entails protecting EA’s reputation and attempting to gain social status.

ii) This causes EA to prioritise building relationships with high-status people, such as offering them major speaking slots at EA conferences, even when they aren’t particularly rigorous thinkers.

iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to. In particular, they argued that this had a chilling effect on EA and caused people to speak in a much more guarded way.

iv) By acquiring resources and status, EA had drawn the attention of people who were interested in those resources rather than in EA’s mission. These people would damage its epistemic norms by attempting to shift truth-finding processes towards outcomes that would benefit them.

They then argued that, despite the reasons I had given for believing EAs could become successful AI safety researchers, most were lacking a crucial component: a deep commitment to actually fixing the problem, as opposed to merely seeming to be attempting to fix it. They believed that EA wasn’t the right kind of environment for developing people like this, and that without this attribute most of the work people engaged in would end up being essentially pointless.

Originally I listed another point here, but I’ve removed it since it wasn’t relevant to this particular debate, but rather to a second, simultaneous debate about whether CEA was an effective organisation. I believe the discussion of this topic ended here. I hope that I have represented the position of the people I was talking to fairly, and I apologise in advance if I’ve made any mistakes.