We Should Prepare for a Larger Representation of Academia in AI Safety

Epistemic Status: I had the idea for the post a few days ago and quickly wrote it down while on a train. I’m very curious about other perspectives.

TL;DR: The recent increased public interest in AI Safety will likely lead to more funding for and more researchers from academia. I expect this increase to be larger than that of non-academic AI Safety work. We should prepare for that by thinking about how we “onboard” new researchers and how to marginally allocate resources (time and money) in the future.

Why I think academia’s share in AI safety will increase

With the recent public interest in AI (existential) safety, many people will think about how they can help. Among people who think "I might want to do research on AI Safety", most will come from academia, because that's where most research happens. Among people who think "I should fund AI Safety research", most will fund academic-style research, because that's where most research talent sits and because it's the "normal" thing to do. I expect this increase to be larger than the increase in AI Safety researchers at companies (though I'm less certain about this), at AI Safety orgs, or among independent researchers of, e.g., the "LessWrong / Alignment Forum" style.

Weak evidence that this is already happening

At the University of Amsterdam, where I'm a PhD student, there has been increased interest in AI Safety recently. In particular, one faculty member has actively started thinking about AI existential safety and wants to design a course that will include scalable oversight, and four other faculty members are at least starting to get informed about AI existential safety with an "open mind".

What might one do to prepare?

Needless to say, I haven't thought about this a lot, so take the following with a grain of salt and add your own ideas.

  • Academics will mostly read papers that are at least on arXiv. So to "onboard" them, it seems more important than in the past to make the most important insights from LessWrong and the Alignment Forum accessible to academics.

  • Doing a PhD might become more worthwhile because it’s easier now to have an alignment career in academia.

  • Doing a PhD might also become less worthwhile because “academic-style” research into AI safety will be less neglected going forward. Whether you buy this argument depends on your views on how open-minded academia is to the most important types of AI Safety research.

  • In general, it seems worthwhile to anticipate which types of research will be “covered” by academia, and how to prioritize research in this landscape.

  • Grantmakers should think about how to react to a potentially changing funding landscape, with many more "traditional" grantmakers funding research in academia and more talented academics being open to working on AI existential safety. This could also mean prioritizing work that is substantially different from what will be researched in academia.


I find it plausible that the number of AI Safety researchers at companies like OpenAI and DeepMind will also grow very fast, though I think the increase will be smaller than in academia.