You should consider applying to PhDs (soon!)
TLDR: In this post, I argue that if you are a junior AI safety researcher, you should consider applying to PhD programs in ML soon, especially if you have recently participated in an AI safety upskilling or research program like MATS or ARENA and might be interested in working on AI safety long term, but don’t have immediate career plans. It is relatively cheap to apply and provides good future option value. I don’t argue that you should necessarily do a PhD, but some other posts do. I also point out that starting a PhD does not lock you in if better opportunities arise. PhD application deadlines for Fall 2025 start are coming up soon; many application deadlines are December 15th, though some are as early as next week. For the uninitiated, I provide a step-by-step guide to applying at the end of this post.
Applying to PhD programs might, in expectation, be worth your time.
This might be true even if you are not sure you:
Want to do a PhD.
Think being a researcher is the best way for you to contribute to AI safety.
Think you even want to work on AI safety.
This is provided that you assign sufficient probability to all of the above, and you don’t currently have better plans. PhD applications can be annoying, but applying is generally cheap and can often teach you things. A PhD might be something you want to start in a year. Applying now gives you the optionality to do this, which future you might thank you for.
What is the basic case for a PhD?
A PhD is the world’s default program for training researchers.
“Making AI go well” is a hard problem, which we do not know how to solve. It therefore requires research. A PhD lets you work on, and get empirical feedback on, your own research ideas, which seems important for building “research taste”: the important skill of developing and evaluating research ideas.
More ambitiously, AI safety lacks “research leads”—people who are capable of coming up with and leading new research agendas; such people often have PhDs (though note this is not the only or even most common path for PhD graduates).
Being a PhD student might put you in a better environment to do productive research.
Compared to being an independent researcher, you have access to more resources: an advisor (who will fall somewhere on the spectrum from very helpful to net negative, but whom you get to choose), funding (do not do a PhD if you don’t get funding), compute, a default set of collaborators, structure, etc. Importantly, it is a much more stable career option than short-term independent research grants (or indeed short-term programs), while offering approximately the same amount of research freedom. Getting funding for independent research is significantly harder than it used to be, and the state of AI safety funding for individuals is often unpredictable over the long term. Security and stability are often good.
Compared to working at an organisation, a PhD offers significantly more intellectual freedom. You often have near-complete freedom to work on your own ideas, rather than on a direction dictated by someone else. If you constantly feel that all the researchers you talk to are wrong and misguided, then a PhD could be for you!
A PhD does not lock you in for 6 years.
If it’s going badly, or some better/higher impact opportunity comes up, you can just leave and go and do that. If you think timelines might be short, and you want to leave and go somewhere with higher influence, you can do that. If your particular institution is a bad environment, you can often visit other labs or work from an AI safety hub (e.g. Berkeley, Boston, or London).
Doing a PhD does not close doors, but might have an opportunity cost.
I argue you should consider applying to a PhD, but do not take a strong position here on whether you should do it if you have other competitive options. This post is mostly targeted towards those who don’t currently have such options, which is not to say that a PhD might not be the right path even if you do have other options!
The world is credentialist.
Many institutions relevant to AI safety are too – especially larger organisations that cannot assess technical ability without the academic signal a PhD provides. For example, government agencies tend to be substantially more credentialist than small start-ups.
More people should consider pursuing an academic career in general, i.e., trying to set up an AI safety lab as a professor at a prestigious university. A PhD is a necessary prerequisite for this.
Why might I want to apply even if I’m confident a PhD is not my current favourite choice?
Many people I talk to have just finished an AI safety research program like MATS, and don’t have great concrete plans for what to do next. Some pursue independent research, others apply for various AI safety jobs.
I argue above that a PhD is often better than doing research independently.
The AI safety job market is very competitive, so you might not immediately find a job, which can be disheartening. Having a backup plan is important. There’s some sense in which PhDs aren’t a real backup plan; they’re instead a good place to develop a plan.
Academic timelines are rigid and mean that if you apply now, you would not start until ~September 2025 (almost a year!). Similarly if you don’t apply now, you wouldn’t be able to start until September 2026, at the earliest. It’s possible that the world and your views about where you are best placed to contribute to AI safety may significantly evolve over the next year before you start. Even if you are currently not sure whether a PhD is right for you, nothing stops you from waiting until September 2025 to decide whether to start or not (though I do recommend being open with your advisor about this prospect if you do get an offer), so applying now gives you significant option value.
In what cases might applying not be a good idea?
After doing some cheap tests (e.g. a research program), you decide that technical AI safety research is not for you. In this case, you might want to consider other options.
There are many ways to contribute to AI safety that do not require a PhD, or even research ability. Some of these paths might be better options for you. If you are a strong engineer already, you might be better placed to be a research helper. I might still weakly recommend applying, as applying is cheap, and the job market remains competitive.
PhD programs in ML are now very competitive. You might struggle to get an offer from a good program if you don’t have legible evidence of research competency, and strong letters of recommendation from established researchers who have mentored you in research projects. Program choice matters; in a weaker program, your experience might be closer to “freedom and funding” than to structured training with lots of support. I still think being a PhD student in a non-top program might be better than doing independent research, for most people.
Applying might not be as cheap as you think. I would guess it might take a few days of full time work at minimum, and up to a few weeks if you are putting in a high effort application to many places.
Convinced? Here is a step-by-step guide to applying.
Read a guide (there are likely other good ones). Some of the application process involves arbitrary customs and conventions. If you get these wrong you may signal that you are an inexperienced outsider.
Reach out to three people who might be able to write you a letter of reference ASAP. Note that it’s already kind of late to be asking for this application round, so be prepared with backup letter writers.
Figure out where you might want to apply, and when the application deadlines are.
Fill out the application forms up until the “request references” point, so your referees have as much time as possible to submit references. They are busy people!
(Optionally) research and email the professors you want to apply to ASAP. Be non-generic.
Write a Statement of Purpose that summarises who you are, what you’re interested in, what cool work you have done before, and who you might want to work with.
Use an LLM to point out all the obvious flaws in your application, and fix them.
Pay and apply! Application fees are generally around $80.
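With multiple programs, deadlines, and busy referees in play, it can help to keep the steps above in one place. Here is a minimal, illustrative sketch of tracking applications in a short script; the program names, dates, and fields are placeholders, not real deadlines.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Application:
    """One PhD application being tracked (all fields are illustrative)."""
    program: str
    deadline: date
    fee_usd: int = 80            # fees are generally around $80
    referees_requested: bool = False


def upcoming(apps, today):
    """Return applications whose deadlines haven't passed, soonest first."""
    return sorted(
        (a for a in apps if a.deadline >= today),
        key=lambda a: a.deadline,
    )


# Hypothetical example programs and deadlines.
apps = [
    Application("Program A", date(2024, 12, 15)),
    Application("Program B", date(2024, 12, 1), referees_requested=True),
]

for a in upcoming(apps, today=date(2024, 11, 20)):
    status = "references requested" if a.referees_requested else "NEED referees"
    print(f"{a.deadline} {a.program}: {status}")
```

A spreadsheet does the same job; the point is just to surface the nearest deadline and whether each set of references has been requested yet.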
A personal anecdote.
I applied to PhD programs last year, after having done an AI safety research program and having worked on technical AI safety for ~one year. I probably spent too long on the application process, but found it informative. It forced me to write out what exactly my interests were, and I had many good chats with professors who were working in areas I was interested in. I was pretty unsure about whether doing a PhD was right for me at all when I applied, and remained unsure for much of the subsequent year. I ended up getting a few good offers. As the year went by, my views changed quite substantially, and I became more convinced that a PhD was a good option for me and am now quite excited about the prospect. I may still not end up doing my PhD, but I’m pretty confident past me made the correct expected-value calculation when deciding to apply, and appreciate the optionality he afforded me.
Resources
Finally, some existing resources on PhDs in AI safety, that both do a better job making the case for/against PhDs than I do in this post, and paint a clearer picture of what doing a PhD might be like.
Adam Gleave, More people getting into AI safety should do a PhD.
Rohin Shah, FAQ: Advice for AI Alignment Researchers.
Adam Gleave, Careers in Beneficial AI Research.
Benjamin Hilton, AI Safety Technical Research (via 80000 hours).
Eca, How to PhD.
Andrew Critch, Deliberate Grad School.
Andrew Critch, Leveraging Academia.
Find a PhD.
80000 hours, A (very incomplete) list of potential PhD supervisors working on AI safety.
Arkose, A list of AI safety-interested professors.
Thanks to Andy Arditi, Rudolf Laine, Joseph Miller, Sören Mindermann, Neel Nanda, Jake Mendel, Alejandro Ortega and Francis Rhys Ward for helpful feedback on this post.