AI Safety Field Growth Analysis 2025

Summary

The goal of this post is to analyze the growth of the technical and non-technical AI safety fields in terms of the number of organizations and the number of FTEs working in them.
In 2022, I estimated that there were about 300 FTEs (full-time equivalents) working in the field of technical AI safety research and 100 on non-technical AI safety work (400 in total).
Based on updated data and estimates from 2025, I estimate that there are now approximately 600 FTEs working on technical AI safety and 500 FTEs working on non-technical AI safety (1100 in total).
Note that this post is an updated version of my old 2022 post Estimating the Current and Future Number of AI Safety Researchers.

Technical AI safety field growth analysis

The first step in analyzing the growth of the technical AI safety field was to create a spreadsheet listing the names of known technical AI safety organizations, the year each was founded, and an estimated number of FTEs for each organization. The technical AI safety dataset contains 70 organizations working on technical AI safety with a total of 629 FTEs (67 active organizations and 604 active FTEs in 2025).
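The per-year series behind the analysis can be derived from such a spreadsheet by counting, for each year, the organizations that had been founded and had not yet closed, and summing their FTEs. A minimal sketch of this in Python with pandas (the rows below are placeholders, not the post's actual data):

```python
import pandas as pd

# Placeholder rows in the shape of the post's spreadsheet;
# names, years, and FTE counts are illustrative only.
orgs = pd.DataFrame({
    "name": ["Org A", "Org B", "Org C"],
    "founded": [2012, 2017, 2021],
    "closed": [None, 2024, None],  # None = still active
    "ftes": [10, 5, 20],
})

# For each year, keep organizations founded by that year that
# had not yet closed, then count them and sum their FTEs.
rows = []
for year in range(2010, 2026):
    active = orgs[(orgs["founded"] <= year) &
                  (orgs["closed"].isna() | (orgs["closed"] > year))]
    rows.append({"year": year,
                 "organizations": len(active),
                 "ftes": int(active["ftes"].sum())})
growth = pd.DataFrame(rows)
```

The resulting `growth` table is what the scatter plots are drawn from, one point per year.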
Then I created two scatter plots showing, respectively, the number of active technical AI safety research organizations and the number of FTEs working at them. On each graph, the x-axis covers the years 2010 to 2025 and the y-axis is the number of active organizations or the estimated total number of FTEs working at those organizations. I also fit models to the scatter plots; for both the organizations and FTE graphs, an exponential model fit the data best.
Figure 1: Scatter plot showing estimates of the number of technical AI safety research organizations by year from 2010 to 2025, with an exponential curve fitted to the data.
Figure 2: Scatter plot showing the estimated number of technical AI safety FTEs by year from 2010 to 2025, with an exponential curve fitted to the data.
Both graphs show relatively slow growth from 2010 to 2020; around 2020, the number of technical AI safety organizations and FTEs begins to increase rapidly, and it has continued growing rapidly through today (2025).
The exponential models imply a 24% annual growth rate in the number of technical AI safety organizations and a 21% annual growth rate in the number of technical AI safety FTEs.
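The quoted growth rates come from the base of the fitted exponential. As a sketch of how such a rate can be estimated, the snippet below fits y = a·exp(b·t) by least squares on the log of a synthetic FTE series with a known 21% annual growth rate (not the post's actual data):

```python
import numpy as np

# Synthetic FTE counts with a known 21% annual growth rate,
# standing in for the post's real dataset (illustrative values only).
years = np.arange(2010, 2026)
ftes = 8.0 * 1.21 ** (years - 2010)

# Fit y = a * exp(b * t) via linear least squares on log(y);
# polyfit returns the slope (b) first, then the intercept (log a).
b, log_a = np.polyfit(years - 2010, np.log(ftes), 1)
annual_growth = np.exp(b) - 1  # convert continuous rate to annual growth

print(round(annual_growth, 2))  # → 0.21
```

On noisy real data a direct nonlinear fit (e.g. `scipy.optimize.curve_fit`) weights the points differently than this log-linear fit, but the recovered annual rate is read off the exponent the same way.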
I also created graphs showing the number of technical AI safety organizations and FTEs by category. The top three categories by number of organizations and FTEs are Misc technical AI safety research, LLM safety, and interpretability.
Misc technical AI safety research is a broad category consisting mostly of empirical AI safety research that is not purely focused on LLMs, such as work on scalable oversight, adversarial robustness, and jailbreaks, along with research that spans several areas and is difficult to place in a single category.
Figure 3: Number of technical AI safety organizations in each category for each year from 2010 to 2025.
Figure 4: Estimated number of technical AI safety FTEs in each category for each year from 2010 to 2025.
Non-technical AI safety field growth analysis
I also applied the same analysis to a dataset of non-technical AI safety organizations. The non-technical AI safety landscape, which includes fields like AI policy, governance, and advocacy, has also expanded significantly. The non-technical dataset contains 49 organizations with a total of 500 FTEs working at them.
The graphs plotting the growth of the non-technical AI safety field show an acceleration in the rate of growth around 2023, though a linear model fits the data well over the years 2010 to 2025.
The linear models correspond to an average annual growth rate of approximately 30% in the number of non-technical AI safety organizations and FTEs.
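One caveat when reading a percentage growth rate off a linear model: a line adds a constant number of organizations per year, so the implied year-over-year percentage growth declines as the base grows, and the roughly 30% figure is best understood as an average over the period. A small illustration with synthetic linear data (not the post's series):

```python
import numpy as np

# Synthetic organization counts growing linearly, standing in for
# the post's non-technical dataset (illustrative values only).
years = np.arange(2010, 2026)
orgs = 2.0 + 1.0 * (years - 2010)  # +1 organization per year

# Fit a line, then compute the year-over-year percentage growth it implies.
slope, intercept = np.polyfit(years - 2010, orgs, 1)
fitted = intercept + slope * (years - 2010)
yearly_pct = fitted[1:] / fitted[:-1] - 1  # declines as the base grows

print(round(yearly_pct[0], 2), round(yearly_pct[-1], 2))  # → 0.5 0.06
```

The fitted slope is constant, but the percentage growth falls from 50% early on to about 6% by the final year, which is why a single percentage figure for a linear trend is only an average.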
Figure 5: Scatter plot showing estimates of the number of non-technical AI safety organizations by year from 2010 to 2025, with a linear model fitted to the data.
Figure 6: Scatter plot showing the estimated number of non-technical AI safety FTEs by year from 2010 to 2025, with a linear model fitted to the data.
In my previous post from 2022, I counted 45 researchers on Google Scholar with the AI governance tag. There are now over 300 researchers with that tag, further evidence that the field has grown.
I also created graphs showing the number of non-technical AI safety organizations and FTEs by category.
Figure 7: Number of non-technical AI safety organizations in each category for each year from 2010 to 2025.
Figure 8: Estimated number of non-technical AI safety FTEs in each category for each year from 2010 to 2025.
Acknowledgements
Thanks to Ryan Kidd from MATS for sharing data on AI safety organizations, which was useful for writing this post.
Appendix
A Colab notebook for reproducing the graphs in this post can be found here.
Technical AI safety organizations spreadsheet in Google Sheets.
Non-Technical AI safety organizations spreadsheet in Google Sheets.

Old and new dataset and model comparison

The following graph shows the difference between the old dataset and model from the 2022 post Estimating the Current and Future Number of AI Safety Researchers and the updated dataset and model. The old model is the blue line and the new model is the orange line.
The old model predicts 484 active technical FTEs in 2025, while the true value is 604, a percentage error of about 20%.
Technical AI safety organizations table

Name | Founded | Year of Closure | Category | FTEs
Machine Intelligence Research Institute (MIRI) | 2000 | 2024 | Agent foundations | 10
Future of Humanity Institute (FHI) | 2005 | 2024 | Misc technical AI safety research | 10
Google DeepMind | 2010 | | Misc technical AI safety research | 30
GoodAI | 2014 | | Misc technical AI safety research | 5
Jacob Steinhardt research group | 2016 | | Misc technical AI safety research | 9
David Krueger (Cambridge) | 2016 | | RL safety | 15
Center for Human-Compatible AI | 2016 | | RL safety | 10
OpenAI | 2016 | | LLM safety | 15
Truthful AI (Owain Evans) | 2016 | | LLM safety | 3
CORAL | 2017 | | Agent foundations | 2
Scott Niekum (University of Massachusetts Amherst) | 2018 | | RL safety | 4
Eleuther AI | 2020 | | LLM safety | 5
NYU He He research group | 2021 | | LLM safety | 4
MIT Algorithmic Alignment Group (Dylan Hadfield-Menell) | 2021 | | LLM safety | 10
Anthropic | 2021 | | Interpretability | 40
Redwood Research | 2021 | | AI control | 10
Alignment Research Center (ARC) | 2021 | | Theoretical AI safety research | 4
Lakera | 2021 | | AI security | 3
MATS | 2021 | | Misc technical AI safety research | 20
Constellation | 2021 | | Misc technical AI safety research | 18
NYU Alignment Research Group (Sam Bowman) | 2022 | 2024 | LLM safety | 5
Center for AI Safety (CAIS) | 2022 | | Misc technical AI safety research | 5
Fund for Alignment Research (FAR) | 2022 | | Misc technical AI safety research | 15
Conjecture | 2022 | | Misc technical AI safety research | 10
Aligned AI | 2022 | | Misc technical AI safety research | 2
Epoch AI | 2022 | | AI forecasting | 5
AI Safety Student Team (Harvard) | 2022 | | LLM safety | 5
Tegmark Group | 2022 | | Interpretability | 5
David Bau Interpretability Group | 2022 | | Interpretability | 12
Apart Research | 2022 | | Misc technical AI safety research | 30
Dovetail Research | 2022 | | Agent foundations | 5
PIBBSS | 2022 | | Interdisciplinary | 5
METR | 2023 | | Evals | 31
Apollo Research | 2023 | | Evals | 19
Timaeus | 2023 | | Interpretability | 8
London Initiative for AI Safety (LISA) and related programs | 2023 | | Misc technical AI safety research | 10
Cadenza Labs | 2023 | | LLM safety | 3
Realm Labs | 2023 | | AI security | 6
ACS | 2023 | | Interdisciplinary | 5
Meaning Alignment Institute | 2023 | | Value learning | 3
Orthogonal | 2023 | | Agent foundations | 1
AI Security Institute (AISI) | 2023 | | Evals | 50
Shi Feng research group (George Washington University) | 2024 | | LLM safety | 3
Virtue AI | 2024 | | AI security | 3
Goodfire | 2024 | | Interpretability | 29
Gray Swan AI | 2024 | | AI security | 3
Transluce | 2024 | | Interpretability | 15
Guide Labs | 2024 | | Interpretability | 4
Aether research | 2024 | | LLM safety | 3
Simplex | 2024 | | Interpretability | 2
Contramont Research | 2024 | | LLM safety | 3
Tilde | 2024 | | Interpretability | 5
Palisade Research | 2024 | | AI security | 6
Luthien | 2024 | | AI control | 1
ARIA | 2024 | | Provably safe AI | 1
CaML | 2024 | | LLM safety | 3
Decode Research | 2024 | | Interpretability | 2
Meta superintelligence alignment and safety | 2025 | | LLM safety | 5
LawZero | 2025 | | Misc technical AI safety research | 10
Geodesic | 2025 | | CoT monitoring | 4
Sharon Li (University of Wisconsin Madison) | 2020 | | LLM safety | 10
Yaodong Yang (Peking University) | 2022 | | LLM safety | 10
Dawn Song | 2020 | | Misc technical AI safety research | 5
Vincent Conitzer | 2022 | | Multi-agent alignment | 8
Stanford Center for AI Safety | 2018 | | Misc technical AI safety research | 20
Formation Research | 2025 | | Lock-in risk research | 2
Stephen Byrnes | 2021 | | Brain-like AGI safety | 1
Roman Yampolskiy | 2011 | | Misc technical AI safety research | 1
Softmax | 2025 | | Multi-agent alignment | 3
Total: 70 organizations, 645 FTEs
Non-technical AI safety organizations table

Name | Founded | Category | FTEs
Centre for Security and Emerging Technology (CSET) | 2019 | research | 20
Epoch AI | 2022 | forecasting | 20
Centre for Governance of AI (GovAI) | 2018 | governance | 40
Leverhulme Centre for the Future of Intelligence | 2016 | research | 25
Center for the Study of Existential Risk (CSER) | 2012 | research | 3
OpenAI | 2016 | governance | 10
DeepMind | 2010 | governance | 10
Future of Life Institute | 2014 | advocacy | 10
Center on Long-Term Risk | 2013 | research | 5
Open Philanthropy | 2017 | research | 15
Rethink Priorities | 2018 | research | 5
UK AI Security Institute (AISI) | 2023 | governance | 25
European AI Office | 2024 | governance | 50
Ada Lovelace Institute | 2018 | governance | 15
AI Now Institute | 2017 | governance | 15
The Future Society (TFS) | 2014 | advocacy | 18
Centre for Long-Term Resilience (CLTR) | 2019 | governance | 5
Stanford Institute for Human-Centered AI (HAI) | 2019 | research | 5
Pause AI | 2023 | advocacy | 20
Simon Institute for Longterm Governance | 2021 | governance | 10
AI Policy Institute | 2023 | governance | 1
The AI Whistleblower Initiative | 2024 | whistleblower support | 5
Machine Intelligence Research Institute | 2024 | advocacy | 5
Beijing Institute of AI Safety and Governance | 2024 | governance | 5
ControlAI | 2023 | advocacy | 10
International Association for Safe and Ethical AI | 2024 | research | 3
International AI Governance Alliance | 2025 | advocacy | 1
Center for AI Standards and Innovation (U.S. AI Safety Institute) | | |