You should consider applying to PhDs (soon!)
TLDR: In this post, I argue that if you are a junior AI safety researcher, you should consider applying to PhD programs in ML soon, especially if you have recently participated in an AI safety upskilling or research program like MATS or ARENA, might be interested in working on AI safety long term, but don’t have immediate career plans. Applying is relatively cheap and provides good future option value. I don’t argue that you should necessarily do a PhD, but some other posts do. I also point out that starting a PhD does not lock you in if better opportunities arise. Application deadlines for a Fall 2025 start are coming up soon; many are December 15th, though some are as early as next week. For the uninitiated, I provide a step-by-step guide to applying at the end of this post.
Applying to PhD programs might, in expectation, be worth your time.
This might be true even if you are not sure you:
Want to do a PhD.
Think being a researcher is the best way for you to contribute to AI safety.
Think you even want to work on AI safety.
This is provided that you assign sufficient probability to all of the above, and you don’t currently have better plans. PhD applications can be annoying, but in general applying is cheap, and can often teach you things. A PhD might be something you want to start in a year. Applying now gives you the optionality to do this, which future you might thank you for.
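To make the option-value argument concrete, here is a toy back-of-the-envelope sketch. Every number in it is an illustrative assumption you should replace with your own (except the ~$80 fee, which comes from the guide later in this post):

```python
# Toy expected-value sketch for applying to PhD programs.
# Every number here is an illustrative assumption; plug in your own.

hours_to_apply = 40        # assumed time cost of a reasonable application round
value_per_hour = 50        # assumed value of your time, in $
application_fees = 6 * 80  # assume ~6 schools at ~$80 each

cost = hours_to_apply * value_per_hour + application_fees  # $2,480

p_offer = 0.5             # assumed chance of at least one good offer
p_best_option = 0.3       # assumed chance a PhD looks like your best option next fall
value_of_option = 50_000  # assumed value of having that option available

expected_benefit = p_offer * p_best_option * value_of_option  # $7,500

print(f"cost: ${cost:,}; expected benefit: ${expected_benefit:,.0f}")
```

Under these made-up numbers the expected benefit comfortably exceeds the cost; the point is not the specific figures, but that a modest one-off cost buys a real option.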
What is the basic case for a PhD?
A PhD is the world’s default program for training researchers.
“Making AI go well” is a hard problem, which we do not know how to solve. It therefore requires research. A PhD lets you work on, and get empirical feedback on, your own research ideas, which seems important for building “research taste”: the important skill of developing and evaluating research ideas.
More ambitiously, AI safety lacks “research leads”—people who are capable of coming up with and leading new research agendas; such people often have PhDs (though note this is not the only or even most common path for PhD graduates).
Being a PhD student might put you in a better environment to do productive research.
Compared to being an independent researcher, you have access to more resources: an advisor (who may be anywhere from very helpful to net negative, but whom you get to choose), funding (do not do a PhD if you don’t get funding), compute, a default set of collaborators, structure, etc. Importantly, it is a much more stable career option than short-term independent research grants (or indeed short-term programs), while offering approximately the same amount of research freedom. Getting funding for independent research is significantly harder than it used to be, and the state of AI safety funding for individuals is often unpredictable over the long term. Security and stability are often good.
Compared to working at an organisation, a PhD offers significantly more intellectual freedom. You often have near-complete freedom to work on your own ideas, rather than a direction dictated by someone else. If you constantly feel that all the researchers you talk to are wrong and misguided, then a PhD could be for you!
A PhD does not lock you in for 6 years.
If it’s going badly, or some better, higher-impact opportunity comes up, you can just leave and do that. If you think timelines might be short and you want to move somewhere with more influence, you can do that too. If your particular institution is a bad environment, you can often visit other labs or work from an AI safety hub (e.g. Berkeley, Boston, or London).
Doing a PhD does not close doors, but might have an opportunity cost.
I argue you should consider applying to a PhD, but take no strong position here on whether you should do one if you have other competitive options. This post is mostly targeted at those who don’t currently have such options, which is not to say that a PhD can’t be the right path even if you do!
The world is credentialist.
Many institutions relevant to AI safety are too – especially larger organisations that cannot assess technical ability without the academic signal a PhD provides. For example, government agencies tend to be substantially more credentialist than small start-ups.
More people should potentially consider pursuing an academic career in general, i.e., trying to set up an AI safety lab as a professor at a prestigious university. A PhD is a prerequisite for this.
Why might I want to apply even if I’m confident a PhD is not my current favourite choice?
Many people I talk to have just finished an AI safety research program like MATS, and don’t have great concrete plans for what to do next. Some pursue independent research, others apply for various AI safety jobs.
I argue above that a PhD is often better than doing research independently.
The AI safety job market is very competitive, so you might not immediately find a job, which can be disheartening. Having a backup plan is important. There’s some sense in which PhDs aren’t a real backup plan; they’re instead a good place to develop a plan.
Academic timelines are rigid: if you apply now, you would not start until ~September 2025 (almost a year away!). Similarly, if you don’t apply now, you wouldn’t be able to start until September 2026 at the earliest. The world, and your views about where you are best placed to contribute to AI safety, may evolve significantly over the year before you start. Even if you are currently not sure whether a PhD is right for you, nothing stops you from waiting until September 2025 to decide whether to start (though I do recommend being open with your advisor about this prospect if you get an offer), so applying now gives you significant option value.
In what cases might applying not be a good idea?
After doing some cheap tests (e.g. a research program), you decide that technical AI safety research is not for you. In this case, you might want to consider other options.
There are many ways to contribute to AI safety that do not require a PhD, or even research ability. Some of these paths might be better options for you. If you are a strong engineer already, you might be better placed to be a research helper. I might still weakly recommend applying, as applying is cheap, and the job market remains competitive.
PhD programs in ML are now very competitive. You might struggle to get an offer from a good program if you don’t have legible evidence of research competency and strong letters of recommendation from established researchers who have mentored you on research projects. Program choice matters: your experience in a weaker program might be closer to “freedom and funding” than to a structured program with lots of support. I still think being a PhD student in a non-top program might be better than doing independent research, for most people.
Applying might not be as cheap as you think. I would guess it might take a few days of full time work at minimum, and up to a few weeks if you are putting in a high effort application to many places.
Convinced? Here is a step-by-step guide to applying.
Read an applications guide; there are several good ones. Some of the application process involves arbitrary customs and conventions, and getting these wrong may signal that you are an inexperienced outsider.
Reach out to three people who might be able to write you a letter of reference ASAP. Note that it’s already kind of late to be asking for this application round, so be prepared with backup letter writers.
Figure out where you might want to apply, and when the application deadlines are.
Fill out the application forms up until the “request references” point, so your referees have as much time as possible to submit references. They are busy people!
(Optionally) research and email the professors you want to apply to ASAP. Be non-generic.
Write a Statement of Purpose that summarises who you are, what you’re interested in, what cool work you have done before, and who you might want to work with.
Use an LLM to point out all the obvious flaws in your application, and fix them.
Pay and apply! Application fees are generally around $80.
A personal anecdote.
I applied to PhD programs last year, after having done an AI safety research program and having worked on technical AI safety for ~one year. I probably spent too long on the application process, but found it informative. It forced me to write out what exactly my interests were, and I had many good chats with professors who were working in areas I was interested in. I was pretty unsure about whether doing a PhD was right for me at all when I applied, and remained unsure for much of the subsequent year. I ended up getting a few good offers. As the year went by, my views changed quite substantially, and I became more convinced that a PhD was a good option for me and am now quite excited about the prospect. I may still not end up doing my PhD, but I’m pretty confident past me made the correct expected-value calculation when deciding to apply, and appreciate the optionality he afforded me.
Resources
Finally, some existing resources on PhDs in AI safety that both make the case for/against a PhD better than this post does, and paint a clearer picture of what doing one might be like.
Adam Gleave, More people getting into AI safety should do a PhD.
Benjamin Hilton, AI Safety Technical Research (via 80,000 Hours).
Find a PhD.
Thanks to Andy Arditi, Rudolf Laine, Joseph Miller, Sören Mindermann, Neel Nanda, Jake Mendel, Alejandro Ortega and Francis Rhys Ward for helpful feedback on this post.
I totally agree, you should apply to PhD programs. (In stem cell biology.)
I decided to apply, and now I’m wondering what the best schools are for AI safety.
After some preliminary research, I’m thinking these are the most likely schools to be worth applying to, in approximate order of priority:
UC Berkeley (top choice)
CMU
Georgia Tech
University of Washington
University of Toronto
Cornell
University of Illinois Urbana-Champaign
University of Oxford
University of Cambridge
Imperial College London
UT Austin
UC San Diego
I’ll probably cut this list down significantly after researching the schools’ faculty and their relevance to AI safety, especially for schools lower on this list.
I might also consider the CDTs in the UK mentioned in Stephen McAleese’s comment. But I live in the U.S. and am hesitant about moving abroad—maybe this would involve some big logistical tradeoffs even if the school itself is good.
Anything big I missed? (Unfortunately, the Stanford deadline is tomorrow and the MIT deadline was yesterday, so those aren’t gonna happen.) Or, any schools that seem obviously worse than continuing to work as a SWE at a Big Tech company in the Bay Area? (I think the fact that I live near Berkeley is a nontrivial advantage for me, career-wise.)
Do you know what topics within AI Safety you’re interested in? Or are you unsure and so looking for something that lets you keep your options open?
Yeah, I’m particularly interested in scalable oversight over long-horizon tasks and chain-of-thought faithfulness. I’d probably be pretty open to a wide range of safety-relevant topics though.
In general, what gets me most excited about AI research is trying to come up with the perfect training scheme to incentivize the AI to learn what you want it to—things like HCH, Debate, and the ELK contest were really cool to me. So I’m a bit less interested in areas like mechanistic interpretability or very theoretical math.
UC Berkeley has historically had the largest concentration of people thinking about AI existential safety. It’s also closely coupled to the Bay Area safety community. I think you’re possibly underrating Boston universities (i.e. Harvard and Northeastern, as you say the MIT deadline has passed). There is a decent safety community there, in part due to excellent safety-focussed student groups. Toronto is also especially strong on safety imo.
Generally, I would advise prioritising advisors with aligned interests over universities (this relates to Neel’s comment about interests), though the intellectual environment does of course matter. When you apply, you’ll want to name some advisors you might want to work with in your statement of purpose.
Strong upvote!
One thing I’d emphasise is that there’s a pretty big overhead to submitting a single application (getting recommendation letters, writing a generic statement of purpose), but it doesn’t take much effort to apply to more after that (you can rejig your SOP quite easily to fit different universities). Given the application process is noisy and competitive, if you’re submitting one application you should probably submit loads if you can afford the application costs. Good luck to everyone applying! :))
I’ll have to push back on this. I think if there’s one specific program that you’d like to go to, especially if there’s an advisor you have in mind, it’s good to tailor your application to that program. However, this might not apply to the typical reader of this post.
I followed a K-strategy with my PhD statements of purpose (and recommendations) rather than an r-strategy: I tailored my applications to the specific schools, and it seemed to work decently well. I know of more qualified people, who spent much less time on each application, who were rejected from a much higher proportion of schools.
(Disclaimer: this is all anecdotal. Also, I was applying for chemistry programs, not AI)
It’s very field-dependent. In ecology & evolution, advisor-student fit is very influential and most programmes are direct admit to a certain professor. The weighting seems different for CS programs, many of which make you choose an advisor after admission (my knowledge is weaker here).
In the UK it’s more funding dependent—grant-funded PhDs are almost entirely dependent on the advisor’s opinion, whereas DTPs/CDTs have different selection criteria and are (imo) more grades-focused.
If you’re interested in doing a PhD in AI in the UK, I recommend applying for the Centres for Doctoral Training (CDTs) in AI such as:
CDT in Responsible and Trustworthy in-the-world NLP (University of Edinburgh)
CDT in Practice-Oriented Intelligence (University of Bristol)
CDT in Fundamentals of AI (University of Oxford)
CDT in Safe and Trusted AI (King’s College London)
CDT in Statistics and Machine Learning (University of Oxford)
Note that these programs are competitive: the acceptance rate is ~10%.
It is worth noting that UKRI is in the process of renaming these to Doctoral Landscape Awards (replacing DTPs) and Doctoral Focal Awards (replacing CDTs). The announcements for BBSRC and NERC have already been made, but I can’t find what EPSRC is doing.
Thanks, this post made me seriously consider applying to a PhD, and I strong-upvoted. I had vaguely assumed that PhDs take way too long and don’t allow enough access to compute compared to industry AI labs. But considering the long lead time required for the application process and the reminder that you can always take new opportunities as they come up, I now think applying is worth it.
However, looking into it, putting together a high-quality application starting now and finishing by the deadline seems approximately impossible? If the deadline were December 15, that would give you two weeks; other schools like Berkeley have even earlier deadlines. I asked ChatGPT how long it would take to apply to just a single school, and it said 43–59 hours of work, or ~4–6 weeks in real time. Claude said 37–55 hours / 4–6 weeks.
Not to discourage anyone from starting their application now if they think they can do it—I guess if you’re sufficiently productive and agentic and maybe take some amphetamines, you can do anything. But this seems like a pretty crazy timeline. Just the thought of asking someone to write me a recommendation letter in a two-week timeframe makes me feel bad.
Your post does make me think “if I were going to be applying to a PhD next December, what would I want to do now?” That seems pretty clarifying, and would probably be a helpful frame even if it turns out that a better opportunity comes along and I never apply to a PhD.
I think it’d be a good idea for you to repost this in August or early September of next year!
The AI time estimates are wildly high IMO, across basically every category. Some parts are also clearly optional (e.g. spending 2 hours reviewing). If you know what you want to research, writing a statement can be much shorter. I have previously applied to ML PhDs in two weeks and gotten an offer. The recommendation letters are the longest and most awkward to request at such notice, but two weeks isn’t obviously insane, especially if you have a good relationship with your reference letter writers (many students do things later than is recommended, no reference letter writer in academia will be shocked by this).
If you apply in December 2025, you would start in fall 2026. That is a very, very long time from now. I think the stupidly long application cycle is pure dysfunction from academia, but you still need to take it into account.
(Also fyi, some UK programs have deadlines in spring if you can get your own funding)
I agree that it’s not impossible, but it’s definitely very late in the cycle to start thinking about PhD applications, and the claim that it would be more helpful to make the case for a PhD earlier in the cycle seems totally reasonable to me.
+1 to the other comments, I think this is totally doable, especially if you can take time off work.
The hard part imo is letters of recommendation, especially if you don’t have many people who’ve worked with you on research before. If you feel awkward about asking for letters on short notice (multiple people have asked me in the past week, if it helps, so this is pretty normal), one thing that makes it lower effort for the letter writer is giving them notes on specific things you did while working with them and what traits of yours this demonstrates. Even better, offer to write a rough first draft for them to edit (though try not to give very similar letters to all your recommenders!).
Thanks Neel, this comment pushed me over the edge into deciding to apply to PhDs! Offering to write a draft and taking days off work are both great ideas. I just emailed my prospective letter writers and got 2/3 yeses so far.
I just wrote another top-level comment on this post asking about the best schools to apply to, feel free to reply if you have opinions :)
I started working on PhD applications about 12 days ago. I expect to have fairly polished applications for the first deadline on December 1, despite not working on this full time. So I think it’s quite possible to do applications for the December 15 deadlines. You would need to contact your referees (and potential supervisors for UK universities) in the next couple of days.
Great post! I especially agree that for most independent researchers, applying to PhDs before you necessarily want one gives you a helpful backstop in case your near-term career plans don’t work out—and people should apply early because there’s such a long lag between application and starting.
I think it’s also worth emphasising that if you have a non-standard work history (or are a bit junior) but might want to work in the United States, pursuing higher education in the US is one of the easiest ways to secure long-term work authorisation (and, if someone funds your PhD, it is radically cheaper than almost every alternative).
Great post, thanks for writing it up!
In addition to the 80K list, I can recommend the Arkose database of professors with open positions (you can filter specifically for PhD openings at the top).
thanks! added to post