An Update on Academia vs. Industry (one year into my faculty job)

I’ve been an assistant professor (equivalent) at Cambridge for ~1 year now. Shortly after accepting the position, I wrote “AI x-risk reduction: why I chose academia over industry”.

Since then, I’ve had a lot of conversations about academia vs. industry with people getting into AI x-safety (e.g. people considering applying for PhDs). This post summarizes those conversations and describes a few other updates from my experience over the last 1.5 years.

Summary of recent conversations:

  • Most people haven’t read my previous post and I point them to it.

  • A common attitude is: “Industry seems better, WTF would you do academia right now?”

  • A perhaps equally common attitude is: “I am new and have no strong opinions, just trying to figure out what people think.”

  • The main reasons I hear for industry over academia are:

    • Short timelines

    • Need to access foundation models

    • Academic incentives to work on topics less relevant to x-safety

  • I think these are all valid points, but there are countervailing considerations. My response to these three points has generally been something like:

    • Academia still seems like the best option ATM for rapidly training and credentialing people. Even under fairly short timelines, this seems likely to be more valuable than direct work. Furthermore, it is a mistake to focus one’s efforts solely on whatever timelines seem most likely; one should also consider the tractability and neglectedness of strategies that target different timelines. It seems plausible that we are simply screwed on short timelines and that somewhat longer timelines are more tractable. Also, people seem to be making this mistake a lot, so short timelines seem potentially less neglected.

    • There are and will be open-source foundation models. Organizations like Anthropic seem keen on collaborating and providing access (although we haven’t yet pitched them anything concrete). It’s also not clear that you have much better access when you are working at one of these orgs (I’d be curious what people at the orgs think about this!): my impression is that it is still clunky to do a lot of things with a large model even when you are at an org, and things like retraining are obviously very expensive. These orgs also seem to favor large group projects, which I assume are directed by leadership, so in practice there might be less difference between being entry-level at an org and being in academia.

    • The incentives are real, and may be worth playing into somewhat if you want to get a faculty job. However, it is becoming easier and easier to work on safety in academia; safety topics are going mainstream and being published at top conferences. Work that is still outside the academic Overton window can be brought into academia if it is approached with academia’s technical rigor, and work that meets academic standards is much more valuable than work that doesn’t: it can be picked up by the ML community, and it’s much harder to tell whether you are making meaningful progress if your work doesn’t meet these standards of rigor. There’s also a good chance that within ~5 years safety topics will be mainstream enough that focusing on safety is simply a good career move for an academic. Also, depending on your advisor, you can have essentially total freedom in academia (and funding tends to bring freedom). Finally, the incentives outside of academia are not great either: for-profit orgs are incentivized to build stuff, whether or not it’s a good idea; and because LW/AF do not have established standards of rigor like ML does, they end up operating more like a less-functional social science field, where (I’ve heard) trends, personality, and celebrity play an outsized role in determining which research is valorized.

Other updates:

  • Overall, I’m enjoying being a professor tremendously!

  • In particular, it’s been way better than being a grad student, and I’ve been reflecting a bit on how tough it was doing my graduate studies somewhere people didn’t really understand or care about AI x-safety. I think this is a very important consideration for anyone thinking about starting a grad program, or going to work somewhere they won’t have colleagues interested in AI x-safety. I’ve suggested to a few people planning on starting grad school that they try to coordinate so they end up in the same place(s).

  • Teaching has been less work than expected; other duties (especially grading/marking) have been more work than expected. Overall, the amount of non-research work I have to do is about what I expected so far, but there have been a few more admin headaches than anticipated. I’m intending to get more help with that from a PA or lab manager.

  • I’ve enjoyed having students to do research with about as much as I thought I would. It’s great!

  • There’s a policy I wasn’t aware of that I can’t take more than 8 PhD students at once. There are ways around it, but this is perhaps my main complaint so far.

  • I have not been funding-bottlenecked, and don’t expect to be anytime soon.

  • As mentioned above, orgs seem keen on granting access to foundation models to academic collaborators.

  • Several relatively senior people from the broader ML community have approached me with their concerns about AI x-safety. Overall, I increasingly have the impression that the broader ML community is on the cusp of starting to take this seriously, but doesn’t know what to do about it. I’m of the opinion that nobody really knows what to do about it; I think most things people in the AI x-safety community do are reasonable, but none of them look that promising. I would characterize these ML researchers as rightfully skeptical of solutions proposed by the AI x-safety community (while coming up with some similar ideas, e.g. things along the lines of scalable oversight), confused about why the community focuses on the particular set of technical problems it has, skeptical that technical work will solve the problem, and ignorant of the AI x-safety literature. Any scalable approach to the following would be extremely valuable: i) creating common knowledge that ML researchers are increasingly worried, ii) creating good ways for them to catch up on the AI x-safety literature, and/or iii) soliciting novel ideas from them.

  • ETA: I’ll add more stuff below as I think of it...

  • One thing that has made me reconsider academia is the large amount of funding currently available; it seems worth thinking about how to spend on the order of $10m+/year, whereas I’m estimating I’ll be spending more like $1m/year as faculty.

  • I’ve been increasingly keen on working with foundation models, and this hasn’t happened as much as I would like. Some possible reasons, along with limitations of the OpenAI API, are listed here: https://docs.google.com/document/d/18eqLciwWTnuxbNZ28eLEle34OoKCcqqF0OfZjy3DlFs/edit?usp=sharing

  • I didn’t originally consider non-tenure-track (TT) jobs, but they have significant appeal: at least at Cambridge, you can be non-TT and still be a principal investigator (PI), meaning you can supervise students and manage a research group, but you don’t have to teach and may have fewer admin duties as well. The only downsides I can think of are less prestige and less job security. I think having reliable external funding probably helps a lot with job security.