80% of MATS alumni who completed the program before 2025 are still working on AI safety today, based on a survey of all alumni with a public LinkedIn or personal website (242/292 ≈ 83% coverage). 10% are working on AI capabilities, but only ~6 at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated). 2% are still studying, but not in a research degree focused on AI safety. The remaining 8% are doing miscellaneous things, including software engineering unrelated to AI safety or capabilities, teaching, data science, consulting, and quantitative trading.
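As a sanity check, the headline figures can be reproduced from the raw counts (the 193 safety-researcher count comes from the breakdown below; treating the extrapolation as a simple scaling of the observed rate to all 292 alumni is my assumption, not something the post states):

```python
# Raw counts from the alumni survey.
total_alumni = 292   # MATS alumni who completed the program before 2025
surveyed = 242       # alumni with a public LinkedIn or personal website
safety = 193         # surveyed alumni still working on AI safety

coverage = surveyed / total_alumni  # survey coverage
safety_rate = safety / surveyed     # share of surveyed alumni in AI safety

# Naive extrapolation: assume unsurveyed alumni match the observed rate.
extrapolated_safety = round(safety_rate * total_alumni)

print(f"coverage: {coverage:.0%}")        # coverage: 83%
print(f"safety rate: {safety_rate:.0%}")  # safety rate: 80%
print(f"extrapolated: {extrapolated_safety}")  # extrapolated: 233
```

The naive scaling lands at 233 rather than the post's 234, so the post's extrapolation presumably works at a finer grain (e.g. per-category rounding) than a single aggregate rate.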
Of the 193+ surveyed MATS alumni working on AI safety (extrapolated: 234):
34% are working at a non-profit org (Apollo, Redwood, MATS, EleutherAI, FAR.AI, MIRI, ARC, Timaeus, LawZero, RAND, METR, etc.);
27% are working at a for-profit org (Anthropic, Google DeepMind, OpenAI, Goodfire, Meta, etc.);
18% are working as independent researchers, probably with grant funding from Open Philanthropy, LTFF, etc.;
15% are working as academic researchers, including PhDs/Postdocs at Oxford, Cambridge, MIT, ETH Zurich, UC Berkeley, etc.;
6% are working in government agencies, including in the US, UK, EU, and Singapore.
10% of MATS alumni co-founded an active AI safety start-up or team during or after the program, including Apollo Research, Timaeus, Simplex, ARENA, etc.
Errata: I mistakenly included UK AISI in the “non-profit AI safety organization” category instead of “government agency”. I also mistakenly said that the ~6 alumni working on AI capabilities at frontier AI companies were all working on pre-training.
10% are working on AI capabilities, but only ~6 on pre-training at a frontier AI company (2 at Anthropic, 2 at Google DeepMind, 1 at Mistral AI, 1 extrapolated)
What about RL?
Why did you single out pre-training specifically?
The number I’d be interested in is the % that went on to work on capabilities at a frontier AI company.
Sorry, I should have said “~6 on capabilities at a frontier AI company”.
What are some representative examples of the rest? I’m wondering if it’s:
AI wrappers like Cursor
Model training for entirely mundane stuff, like image generation at Stability AI
Narrow AI like AlphaFold at Isomorphic
An AGI-ish project but not LLMs, e.g. a company that just made AlphaGo type stuff
General-purpose LLMs but not at a frontier lab (I would honestly count Mistral here)
Here are the AI capabilities organizations where MATS alumni are working (1 at each, except Anthropic and Google DeepMind with 2 each):
Anthropic
Barcelona Supercomputing Center
Conduit Intelligence
Decart
EliseAI
Fractional AI
General Agents
Google DeepMind
iGent AI
Imbue
Integuide
Kayrros
Mecha Health
Mistral AI
MultiOn
Norm AI
NVIDIA
Palantir
Phonic
RunRL
Salesforce
Sandbar
Secondmind
Yantran
Alumni also work at these organizations, which might be classified as capabilities or safety-adjacent:
Freestyle Research
Leap Labs
UK AISI is a government agency, so the pie chart is probably misleading on that segment!
Oh, shoot, my mistake.
I’m curious to see if I’m in this data, so I can help make it more accurate by providing info.
Hi Nicholas! You are not in the data as you were not a MATS scholar, to my knowledge. Were you a participant in one of the MATS training programs instead? Or did I make a mistake?