Levelling Up in AI Safety Research Engineering
Summary: A level-based guide for independently up-skilling in AI Safety Research Engineering that aims to give concrete objectives, goals, and resources to help anyone go from zero to hero.
Cross-posted to the EA Forum. View a pretty Google Docs version here.
Introduction
I think great career guides are really useful for guiding and structuring the learning journey of people new to a technical field like AI Safety. I also like role-playing games. Here's my attempt to use a levelling framework to break one possible path from zero to hero in Research Engineering for AI Safety (e.g. jobs with the "Research Engineer" title) into objectives, concrete goals, and resources. I hope this kind of framework makes it easier to see where one is on this journey, how far they have to go, and some options for getting there.
I’m mostly making this to sort out my own thoughts about my career development and how I’ll support other students through Stanford AI Alignment, but hopefully, this is also useful to others! Note that I assume some interest in AI Safety Research Engineering—this guide is about how to up-skill in Research Engineering, not why (though working through it should be a great way to test your fit). Also note that there isn’t much abstract advice in this guide (see the end for links to guides with advice), and the goal is more to lay out concrete steps you can take to improve.
For each level, I describe the general capabilities of someone at the end of that level, some object-level goals to measure that capability, and some resources to choose from that would help get there. The categories of resources within a level are listed in the order you should progress, and resources within a category are roughly ordered by quality. There’s some redundancy, so I would recommend picking and choosing between the resources rather than doing all of them. Also, if you are a student and your university has a good class on one of the below topics, consider taking that instead of one of the online courses I listed.
As a very rough estimate, I think each level should take at least 100-200 hours of focused work, for a total of 700-1400 hours. At 10 hours/week (quarter-time), that comes to around 16-32 months of study but can definitely be shorter (e.g. if you already have some experience) or longer (if you dive more deeply into some topics)! I think each level is about evenly split between time spent reading/watching and time spent building/testing, with more reading earlier on and more building later.
Confidence: mid-to-high. I am not yet an AI Safety Research Engineer (but I plan to be)—this is mostly a distillation of what I’ve read from other career guides (linked at the end) and talked about with people working on AI Safety. I definitely haven’t done all these things, just seen them recommended. I don’t expect this to be the “perfect” way to prepare for a career in AI Safety Research Engineering, but I do think it’s a very solid way.
Level 1: AI Safety Fundamentals
Objective
You are familiar with the basic arguments for existential risks due to advanced AI, models for forecasting AI advancements, and some of the past and current research directions within AI alignment/safety. You have an opinion on how much you buy these arguments and whether you want to keep exploring AI Safety Research Engineering.
Why this first? Exposing yourself to these fundamental arguments and ideas is useful for testing your fit for AI Safety generally, but that isn’t to say you should “finish” this Level first and move on. Rather, you should be coming back to these readings and keeping up to date with the latest work in AI Safety throughout your learning journey. It’s okay if you don’t understand everything on your first try—Level 1 kind of happens all the time.
Goals
Complete an AI Safety introductory reading group fellowship.
Write a reflection distilling, recontextualizing, or expanding upon some AI Safety topic and share it with someone for feedback.
Figure out how convinced you are of the arguments for AI risk.
Decide if you want to continue learning about AI Safety Research Engineering, Theoretical AI Alignment, AI Policy and Strategy, or another field.
Resources
AI Safety Reading Curriculum (Choose 1)
Additional Resources
Level 2: Software Engineering
Objective
You can program in Python at the level of an introductory university course. You also know some other general software engineering tools/skills like the command line, Git/GitHub, documentation, and unit testing.
Why Python? Modern Machine Learning work, and thus AI Safety work, is almost entirely written in Python. Python is also an easier language for beginners to pick up, and there are plenty of resources for learning it.
Goals
Solve basic algorithmic programming problems with Python.
Know the basics of scientific computing with Python, including NumPy and Jupyter/Colab/IPython notebooks.
Create a new Git repository on GitHub, clone it, and add/commit/push changes to it for a personal project.
Know other software engineering skills like how to use the command line, write documentation, or make unit tests (the sketch after this list shows simple documentation and a unit test).
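To make the documentation and unit-testing goals concrete, here is a minimal sketch of what a small, documented, tested Python function might look like. The function and file names are illustrative inventions, not taken from any of the listed courses:

```python
# running_mean.py: a toy example combining basic Python, NumPy,
# docstrings, and a pytest-style unit test. All names are illustrative.
import numpy as np


def running_mean(values: list[float], window: int = 3) -> np.ndarray:
    """Return the mean of `values` over each sliding window.

    >>> running_mean([1.0, 2.0, 3.0, 4.0], window=2)
    array([1.5, 2.5, 3.5])
    """
    arr = np.asarray(values, dtype=float)
    if window < 1 or window > arr.size:
        raise ValueError("window must be between 1 and len(values)")
    # Cumulative-sum trick: compute every window's sum in O(n) total time.
    cumsum = np.cumsum(np.insert(arr, 0, 0.0))
    return (cumsum[window:] - cumsum[:-window]) / window


def test_running_mean():
    """A pytest-style test (run with `pytest running_mean.py`)."""
    result = running_mean([1.0, 2.0, 3.0, 4.0], window=2)
    assert np.allclose(result, [1.5, 2.5, 3.5])
```

If you can write, document, version-control, and test something this small without looking much up, you're in good shape for this level.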
Resources
Python Programming (Choose 1-2)
Scientific Python (Choose 1-2)
Command Line (Choose 1-3)
Git/GitHub (Choose 2+)
Documentation (Choose 1-2)
Unit Testing (Choose 1-3)
Additional Resources
Level 3: Machine Learning
Objective
You have the mathematical context necessary for understanding Machine Learning (ML). You know the differences between supervised and unsupervised learning and between classification and regression. You understand common models like linear regression, logistic regression, neural networks, decision trees, and clustering, and you can code some of them in a library like PyTorch or JAX. You grasp core ML concepts like loss functions, regularization, bias/variance, optimizers, metrics, and error analysis.
Why so much math? Machine learning at its core is basically applied statistics and multivariable calculus. It used to be that you needed to know this kind of math really well, but now with techniques like automatic differentiation, you can train neural networks without knowing much of what's happening under the hood. These foundational resources are included for completeness, but you can probably spend a lot less time on math (e.g. the first few sections of each course) depending on what kind of engineering work you intend to do. You might want to come back and improve your math skills to understand certain work in Levels 6-7, though, and if you find this math really interesting, you might be a good fit for theoretical AI alignment research.
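To make that point concrete, here is a tiny sketch (assuming PyTorch, though any autodiff library behaves the same way) of automatic differentiation doing the calculus for you:

```python
import torch

# f(x) = x^3 + 2x, so by hand f'(x) = 3x^2 + 2 and f'(2) = 14.
x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 2 * x
y.backward()   # autograd applies the chain rule for us
print(x.grad)  # tensor(14.)
```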
Goals
Understand the mathematical basis of Machine Learning, especially linear algebra and multivariable calculus.
Write out the differences between supervised and unsupervised learning and between classification and regression.
Train and evaluate a simple neural network on a standard classification task like MNIST or a standard regression task like a Housing Dataset (a minimal sketch follows this list).
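As a sketch of what the third goal might look like, here is a minimal PyTorch training loop for MNIST. The architecture and hyperparameters are arbitrary illustrative choices, not a recommended recipe:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load MNIST (downloads the dataset on first run).
to_tensor = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST("data", train=False, download=True, transform=to_tensor)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=256)

# A small multilayer perceptron: 784 pixels -> 128 hidden units -> 10 digit classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):  # a couple of passes over the training set
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate accuracy on the held-out test set.
correct = 0
with torch.no_grad():
    for images, labels in test_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
print(f"test accuracy: {correct / len(test_set):.3f}")
```

A couple of epochs of a small MLP like this should comfortably exceed 95% test accuracy, which is a decent sanity check that your loop is correct.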
Resources
Basic Calculus (Choose 1)
Probability (Choose 1)
Linear Algebra (Choose 1)
Multivariable Calculus (Choose 1)
Introductory Machine Learning (Choose 1-2)
Additional Resources
Level 4: Deep Learning
Objective
You’ve dived deeper into Deep Learning (DL) through the lens of at least one subfield such as Natural Language Processing (NLP), Computer Vision (CV), or Reinforcement Learning (RL). You now have a better understanding of ML fundamentals, and you’ve reimplemented some core ML algorithms “from scratch.” You’ve started to build a portfolio of DL projects you can show others.
Goals
Be able to describe in moderate detail a wide range of modern deep learning architectures, techniques, and applications such as long short-term memory networks (LSTM) or convolutional neural networks (CNN).
Gain a more advanced understanding of machine learning by implementing autograd, backpropagation, and stochastic gradient descent "from scratch" (a simplified sketch follows this list).
Complete 1-3 deep learning projects, taking 10–20 hours each, in 1 or more sub-fields like NLP, CV, or RL.
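To give a flavor of the "from scratch" goal, here is a heavily simplified sketch of reverse-mode automatic differentiation on scalars, in the spirit of Andrej Karpathy's micrograd. It supports only addition and multiplication, and all names are my own:

```python
class Value:
    """A scalar that records the operations producing it, so gradients
    can be computed by backpropagation (reverse-mode autodiff)."""

    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None  # how to route gradient to parents

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, visited = [], set()
        def visit(v):
            if v not in visited:
                visited.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()


# Gradient check: y = x*x + x, so dy/dx = 2x + 1 = 7 at x = 3.
x = Value(3.0)
y = x * x + x
y.backward()
print(y.data, x.grad)  # 12.0 7.0
```

Adding more operations (tanh, exp, matrix products) plus a loop that nudges each parameter's `data` against its `grad` takes you the rest of the way to stochastic gradient descent on tiny networks.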
Resources
General Deep Learning (Choose 1)
Advanced Machine Learning
Studying (Choose 1-2)
Implementing (Choose 1)
MiniTorch (reimplement the core of PyTorch, self-study tips here)
Natural Language Processing (Choose 1 Or Another Sub-Field)
Computer Vision (Choose 1 Or Another Sub-Field)
Reinforcement Learning (Choose 1 Or Another Sub-Field)
Additional Resources
Level 5: Understanding Transformers
Objective
You have a principled understanding of self-attention, cross-attention, and the general transformer architecture along with some of its variants. You are able to write a transformer like BERT or GPT-2 “from scratch” in PyTorch or JAX (a skill I believe Redwood Research looks for), and you can use resources like 🤗 Transformers to work with pre-trained transformer models. Through experimenting with deployed transformer models, you have a decent sense of what transformer-based language and vision models can and cannot do.
Why transformers? The transformer architecture is currently the foundation for State of the Art (SOTA) results on most deep learning benchmarks, and it doesn't look like it's going away soon. Much of the newest ML research involves transformers, so AI Safety organizations working on prosaic AI alignment or studying current models practically all center their research on transformers.
Goals
Play around with deployed transformer models and write up some things you notice about what they can and cannot do. See if you can get them to do unexpected or interesting behaviors.
Read and take notes about how transformers work.
Use 🤗 Transformers to import, load the pre-trained weights of, and fine-tune a transformer model on a standard NLP or CV task.
Implement basic transformer models like BERT or GPT-2 from scratch (a minimal self-attention sketch follows this list) and test that they work by loading pre-trained weights and checking that they produce the same results as the reference model or generate interesting outputs.
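As a hedged starting point for the from-scratch goal, here is a sketch of single-head causal self-attention in PyTorch, the core operation of a GPT-style model. Real implementations split the computation across multiple heads and wrap it in residual connections, LayerNorm, and MLP blocks; all names here are my own:

```python
import math
import torch
from torch import nn

class CausalSelfAttention(nn.Module):
    """Single-head causal self-attention, the core of a GPT-style block."""

    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)  # project to queries/keys/values
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, d_model = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Scaled dot-product attention scores: (batch, seq_len, seq_len).
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_model)
        # Causal mask: each position may only attend to itself and the past.
        mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
        scores = scores.masked_fill(mask, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return self.out(weights @ v)


attn = CausalSelfAttention(d_model=64)
print(attn(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

The acceptance test in the goal above is a good one for a full model: load GPT-2's published weights into your module tree and check that your logits match the 🤗 Transformers reference.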
Resources
Experiment With Deployed Transformers (Choose 1-3)
Study The Transformer Architecture (Choose 2-3)
Attention Is All You Need—Vaswani et al. (Sections 1-3)
Lectures 8, 9, and optionally 10 from CS224N—Stanford University
Using 🤗 Transformers (Choose 1-2)
CS224U: Natural Language Understanding—Stanford University (Supervised Sentiment Analysis unit only)
Implement Transformers From Scratch (Choose 1-2)
Compare Your Code With Other Implementations
BERT (Choose 1-3)
pytorchic-bert/models.py—dhlee347 (PyTorch)
BERT—Google Research (TensorFlow)
nlp-tutorial/BERT.py—graykode (PyTorch)
Transformer-Architectures-From-Scratch/BERT.py—ShivamRajSharma (PyTorch)
GPT-2 (Choose 1-3)
Transformer-Architectures-From-Scratch/GPT_2.py—ShivamRajSharma (PyTorch)
gpt-2/model.py—openai (TensorFlow)
minGPT/model.py—Andrej Karpathy (PyTorch)
The Annotated GPT-2—Aman Arora (PyTorch)
Additional Resources
Study Transformers More
Other Transformer Models You Could Implement
Level 6: Reimplementing Papers
Objective
You can read a recently published AI research paper and efficiently implement the core technique they present to validate their results or build upon their research. You also have a good sense of the latest ML/DL/AI Safety research. You’re pretty damn employable now—if you haven’t started applying for Research Engineering jobs/internships, consider getting on that!
Why papers? I talked with research scientists or engineers from most of the empirical AI Safety organizations (i.e. Redwood Research, Anthropic, Conjecture, Ought, CAIS, Encultured AI, DeepMind), and they all said that being able to read a recent ML/AI research paper and efficiently implement it is both a signal of a strong engineering candidate and a good way to build useful skills for actual AI Safety work.
Goals
Learn how to efficiently read Computer Science research papers.
Learn tips on how to implement papers and learn efficiently by doing so.
Reimplement the key contribution and evaluate the key results of 5+ AI research papers in topics of your choosing.
Resources
How to Read Computer Science Papers (Choose 1-3)
How to Implement Papers (Choose 2-4)
Implement Papers (Choose 5+, look beyond these)
General Lists
Interpretability
Robustness/Anomaly Detection
Value/Preference Learning
Reinforcement Learning
Level 7: Original Experiments
Objective
You can now efficiently grasp the results of AI research papers and come up with novel research questions to ask as well as empirical ways to answer them. You might already have a job at an AI Safety organization and have picked up these skills as you got more Research Engineering experience. If you can generate and test these original experiments particularly well, you might consider Research Scientist roles, too. You might also want to apply for AI residencies or Ph.D. programs to explore some research directions further in a more structured academic setting.
Goals
Write an explanation of what research directions fit your tastes.
Create 5+ concrete research questions you might want to explore. These can be from lists like those below, the future research sections at the ends of ML papers, or your own brainstorming.
Conduct AI Safety research and publish or share your results.
Resources
Research Advice
General Lists of Open Questions to Start Researching
Open Questions in Interpretability
Open Questions in Robustness/Anomaly Detection
Open Questions in Adversarial Training
Epilogue: Risks
Embarking on this quest brings a few risks. Keeping them in mind may make you less likely to fail in these ways:
Capabilities
There is a real risk of novel AI Safety research advancing AI capabilities, reducing AI timelines, and giving us less time to solve alignment. If you go through this or other career guides, you may produce work that inadvertently leads to capabilities externalities.
That said, I expect most of the work from following this guide to be directly harmless, at least until you get to Level 6. Before then, it's probably fine (and helpful motivation) to share what you are working on with others! Beyond that point, if you think you might have produced research that could be dangerous, err on the side of secrecy and ask a trusted and willing AI Safety researcher for advice.
As a general guideline, consider building the habit of marking how hazardous your own work is, evaluating the hazards of other researchers' work, and reading AI Safety researchers' evaluations of AI work (e.g. in the comments of the AI Alignment Forum) in order to grow your sense of what various forms of dangerous work "smell" like.
Difficulty
Following this career guide on your own will be extremely difficult, and there's a high chance you could fail not because you lack skill, but because you don't have the right support structures behind you.
To stay supported, find a mentor. This could be a willing AI Safety researcher or anyone else you trust who is skilled in machine learning with whom you could meet every so often to ask conceptual questions, get feedback, or discover new learning resources.
To stay supported, find coworkers. These could be friends who are also up-skilling in AI Safety with whom you could meet once or a few times per week to discuss things you are learning, collaborate on projects, or set accountable goals (e.g. “If I don’t read and take notes on this paper by next Monday, I owe you $20.”).
Also, keep in mind that this guide is kind of fake: breaking up the challenge of developing Research Engineering skills into a series of levels is especially reductive of the reality of learning. You probably shouldn’t do every suggested thing in a level once, move on to the next level, and never look back—rather, I think a better approach involves frequently revisiting previous levels, exploring other resources not listed here, and moving on from things if the marginal benefit you’d get from immediately spending extra time on those things doesn’t outweigh the opportunity costs of learning new things.
With all that said, this guide can be a way to test your fit. If you give programming or machine learning an honest effort and find you really don’t like any of it, then maybe Research Engineering just isn’t for you, and that’s okay! If you still think AI Safety is important, consider exploring AI Policy and Strategy.
Early Over-Specialization
Don't be afraid to try new things! One failure mode I imagine here is diving deep into a small, niche sub-field of AI Safety early in your journey and then either never making any useful contributions or getting bored and quitting, when you might have been better at, or enjoyed, a different sub-field more.
This could be important for getting jobs in AI Safety: unless you get really lucky and happen to choose right, you might over-specialize in a narrow band of skills and lack knowledge in other domains that are important for the jobs you're interested in.
AI in general and even AI Safety in particular are very diverse fields with too many different things for any one person to specialize in, but you can still aim for a breadth of knowledge early on. When choosing projects in Levels 3-5, consider trying things you haven't tried yet. It might feel scary, but breaking out of your comfort zone in this way is probably better for efficient learning and exploration.
Somewhat relatedly, AI moves super fast, and soon many of the resources here might become outdated or new promising research areas might emerge. I intend to keep this guide updated, but I encourage you to look beyond what’s here to explore new and interesting things in the future!
On a meta-level, it’s also possible to over-specialize in Research Engineering. Consider exploring other impactful career paths—many of the skills here could be really useful for Theoretical AI Alignment, AI Policy and Strategy, Information Security, and even Operations Management.
Late Over-Generalization
That said, you should eventually find some areas you particularly like and dig deeper into them (between Levels 6-7). I just think it's worth exploring broadly before then so you can get a good taste of the areas to choose from. Specialization is good; early over-specialization is bad.
This is also important if you intend to work in an AI Safety research lab: a common failure mode here is skimming across many different areas at a surface level but never diving into any of them deeply enough to gain real insights and produce tangible results.
This failure mode seems to be pretty common with Ph.D. students, many of whom repeatedly hear from their advisors to make their projects even narrower in scope.
Sources
Here are some of the other great career guides and resources I used in the making of this. Most of the guides here also have good general advice that would be useful to read even if you don’t do the other things they suggest. Consider checking them out!
How to pursue a career in technical AI alignment—Charlie Rogers-Smith
ML for Alignment Bootcamp (MLAB 2)—Redwood Research (and the public GitHub repo)
Talking with various AI Safety Research Engineers and Scientists
Many thanks to Jakub Nowak, Peter Chatain, Thomas Woodside, Erik Jenner, Jacy Reese Anthis, and Konstantin Pilz for review and suggestions!
Comments
This looks like a guide for [working in a company that already has a research agenda, and doing engineering work for them based on what they ask for] and not for [trying to come up with a new research direction that is better than what everyone else is doing], right?
Mostly, yes, that’s right. The exception is in Level 7: Original Experiments which suggests several resources for forming an inside view and coming up with new research directions, but I think many people could get hired as research engineers before doing that stuff (though maybe they do that stuff while working as a research engineer and that leads them to come up with new better research directions).
This is a great guide—thank you. However, in my experience as someone completely new to the field, 100-200 hours per level is very optimistic. I've easily spent double or triple that on the first two levels and still haven't gotten to a comfortable level.
Thanks, yeah that's a pretty fair sentiment. I've changed the wording to "at least 100-200 hours," but I guess the idea was more to present a very efficient way of learning things that maybe 80/20's some of the material. This does mean there will be more to learn—rather than these being strictly linear progression levels, I imagine someone continuously coming back to AI safety readings and software/ML engineering skills often throughout their journey, as it sounds like you have.
I’d have Level 1 (AI Safety Fundamentals) be Level 4 or 5, probably. I’m pretty happy to hire engineers who have good ML skills but are rusty on “AI safety fundamentals”; I think they can pick that up on the job much more easily than the coding / ML skills.
Interesting, that is the level that feels most like it doesn’t have a solid place in a linear progression of skills. I wrote “Level 1 kind of happens all the time” to try to reflect this, but I ultimately decided to put it at the start because I feel that for people just starting out it can be a good way to test their fit for AI safety broadly (do they buy the arguments?) and decide whether they want to go down a more theoretical or empirical path. I just added some language to Level 1 to clarify this.
My understanding is that Level 1 is supposed to happen in parallel with the others, but this might be clearer by separating it outside the numbered system entirely, like Level 0 or Level −1 or something.
As for why it’s included as the first step, I think the reasoning is that if someone knows nothing about AI safety, their first question is going to be “Should I actually care about this problem”, and to answer that question they do need to do a little bit of AI safety specific reading.
I agree this could be made clearer—one of the first bits of advice I got when I started asking around about this stuff was “Technical ML ability is harder and takes longer to learn than AI safety knowledge does, so spend most of your time on this as opposed to AI safety” and I remember this being a very unintuitive insight.
I recognize many of the institutions you mentioned such as Nvidia and MIT. How confident are you that the more obscure ones like Hugging Face are trustworthy?
I’m not sure what you particularly mean by trustworthy. If you mean a place with good attitudes and practices towards existential AI safety, then I’m not sure HF has demonstrated that.
If you mean a company I can instrumentally trust to build and host tools that make it easy to work with large transformer models, then yes, it seems like HF pretty much has a monopoly on that for the moment, and it’s worth using their tools for a lot of empirical AI safety research.