Based on updated data and estimates from 2025, I estimate that there are now approximately 600 FTEs working on technical AI safety and 500 FTEs working on non-technical AI safety (~1,100 in total).
I think it’s suggestive to compare with e.g. the number of FTEs related to addressing climate change, for a hint at how puny the numbers above are:
I think it’s hard to pick a reference class for the field of AI safety because the number of FTEs working in comparable fields or on comparable projects varies widely.
Two extreme examples:
- Apollo Program: ~400,000 FTEs
- Law of Universal Gravitation: 1 FTE (Newton)
Here are some historical projects which seem comparable to AI safety, since they are technical, focused on a specific challenge, and relatively recent [1]:
- Pfizer-BioNTech vaccine (2020): ~2,000 researchers and ~3,000 FTEs for manufacturing and logistics
- Human Genome Project (1990 - 2003): ~3,000 researchers across ~20 major centers
- ITER fusion experiment (2006 - present): ~2,000 engineers and scientists, ~5,000 FTEs in total
- CERN and LHC (1994 - present): ~3,000 researchers working onsite, ~15,000 collaborators around the world
I think these projects show that it’s possible to make progress on major technical problems with a few thousand talented and focused people.
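To make the gap concrete, here is a minimal, purely illustrative sketch in Python that compares the ~1,100 AI safety FTEs estimated above with the rough headcounts quoted for these projects (all figures are the approximate, order-of-magnitude numbers from the text, not new data):

```python
# Back-of-the-envelope comparison using the rough figures quoted above.
# All numbers are approximate, order-of-magnitude estimates.

AI_SAFETY_FTES = 600 + 500  # ~600 technical + ~500 non-technical

comparison_projects = {
    "Apollo Program": 400_000,
    "Pfizer-BioNTech vaccine (research + manufacturing/logistics)": 2_000 + 3_000,
    "Human Genome Project (researchers)": 3_000,
    "ITER fusion experiment (total FTEs)": 5_000,
    "CERN / LHC (collaborators worldwide)": 15_000,
}

for name, headcount in comparison_projects.items():
    ratio = AI_SAFETY_FTES / headcount
    print(f"{name}: ~{headcount:,} people -> AI safety FTEs are ~{ratio:.1%} of that")
```

On these rough numbers, the whole field sits at around a fifth of ITER’s total staffing and well under 1% of the Apollo Program’s.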
I don’t think it’s impossible that this would be enough, but it seems much worse to risk undershooting than overshooting, both in the resources allocated and in how quickly they are allocated; especially since, at least in principle, the field could be deploying even its currently available resources much faster than it is.
While I like the idea of the comparison, I don’t think the government definition of “green jobs” is the right comparison point (e.g. those are not research jobs).
[1] These estimates were produced using ChatGPT with web search.