How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.
Ryan Greenblatt seems to think we can get a 30x speed-up in AI R&D using near-term, plausibly safe AI systems; assume every AIS researcher can be 30x’d by Alignment MVPs
Tom Davidson thinks we have <3 years from 20%-AI to 100%-AI; assume we have ~3 years to align AGI with the aid of Alignment MVPs
Assume the hardness of aligning TAI is equivalent to the Apollo Program (90k engineer/scientist FTEs x 9 years = 810k FTE-years); therefore, we need ~9k AIS technical researchers (810k FTE-years ÷ 30x speed-up ÷ 3 years ≈ 9k)
The technical AIS field is currently ~500 people; at the current growth rate of 28% per year, it will take 12 years to grow to 9k people (Oct 2036)
Alternatively, if we bound by the Manhattan Project (25k FTEs x 5 years = 125k FTE-years), this will take ~6.5 years (Jul 2031)
Metaculus predicts weak AGI in 2026 and strong AGI in 2030; clearly, more talent development is needed if we want to make the Nov 2030 AGI deadline!
If we want to hit the 9k-researcher goal by the Nov 2030 AGI deadline, we need an annual growth rate of ~65%, about 2.3x the current growth rate of 28% (see the sketch below)
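For concreteness, here is a minimal sketch of the arithmetic behind these bullets; every input is one of the assumptions stated above, and the required growth rate comes out around 60-65% depending on exactly when you start the clock.

```python
import math

# Assumptions taken from the bullets above (all rough):
apollo_fte_years = 90_000 * 9      # 810k FTE-years of "alignment-equivalent" work
speedup = 30                       # Alignment MVPs speed each researcher up 30x
years_available = 3                # ~3 years from 20%-AI to 100%-AI
current_field_size = 500           # technical AIS researchers today
current_growth = 0.28              # 28% annual growth

# Researchers needed to finish the MVP-assisted work in ~3 years
researchers_needed = apollo_fte_years / speedup / years_available
print(f"researchers needed: {researchers_needed:,.0f}")          # ~9,000

# Years to reach that headcount at the current growth rate
years_at_current_rate = math.log(researchers_needed / current_field_size) / math.log(1 + current_growth)
print(f"years at 28% growth: {years_at_current_rate:.1f}")       # ~11.7, i.e. roughly Oct 2036

# Growth rate needed to hit ~9k by a Nov 2030 deadline (~6 years away)
years_to_deadline = 6
required_growth = (researchers_needed / current_field_size) ** (1 / years_to_deadline) - 1
print(f"required annual growth: {required_growth:.0%}")          # ~60-65% depending on start date
```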
I appreciate the spirit of this type of calculation, but I think it’s a bit too wacky to be that informative, and it’s a bit of a stretch to string these numbers together. E.g. I think Ryan’s and Tom’s predictions are inconsistent with each other, it’s weird to identify 100%-AI as the point where we need to have “solved the alignment problem”, and it’s weird to use the Apollo/Manhattan programs as an estimate of the work required. (I also don’t know what your Manhattan Project numbers mean: I thought there were more like 2.5k scientists/engineers at Los Alamos, and most of the people elsewhere were purifying nuclear material.)
There’s the standard software-engineer response of “you cannot make a baby in one month with nine pregnant women”. If this calculation has no term for how many research hours must be done serially versus how many can be done in parallel, it will always seem like we have too few people and should invest vastly more in growth, growth, growth! (A toy version of this is sketched below.)
If you find that your constraint is actually serial research output, you may still conclude you need a lot of people, but you will sacrifice a reasonable amount of growth speed to attract better serial researchers.
(Possibly this shakes out to mathematicians and physicists, but I don’t want to bring that conversation in here.)
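To make the serial-vs-parallel point concrete, here is a toy Amdahl’s-law-style model (my gloss, with illustrative numbers, not something from the original argument): once any fraction of the work is inherently serial, extra headcount stops buying calendar time.

```python
# Toy model with illustrative numbers: suppose ~27k FTE-years of MVP-assisted
# work remain (810k / 30x), and some fraction of it is inherently serial --
# each step needs the previous step's result, so extra headcount can't touch it.
def calendar_years(total_fte_years: float, serial_fraction: float, researchers: int) -> float:
    serial = total_fte_years * serial_fraction           # done one step at a time
    parallel = total_fte_years * (1 - serial_fraction)   # divisible across people
    return serial + parallel / researchers

total = 810_000 / 30   # 27k FTE-years, using the Apollo + 30x assumptions above

for n in (9_000, 27_000, 100_000):
    print(f"{n:>7} researchers -> {calendar_years(total, 0.0001, n):.1f} calendar years")
# Even a 0.01% serial share contributes ~2.7 calendar years by itself, so going
# from 9k to 100k researchers barely moves the finish date: the binding constraint
# is serial research output, not headcount.
```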
I also note that 30x seems like an underestimate to me, but also too simplified. AIs will make some tasks vastly easier but won’t help much with others, so we will face a new set of bottlenecks once we reach the “AIs vastly helping with your work” phase. The question to ask is: what will the new bottlenecks be, and whom do we need to hire to be prepared for them? (A toy illustration follows below.)
If you are uncertain, this consideration should lean you much more towards adaptive generalists than the standard academic crop.
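One way to see the “new bottlenecks” point (again a toy illustration with made-up task shares and speed-ups, not numbers from this comment): if the speed-up is uneven across tasks, the tasks AI barely helps with come to dominate calendar time.

```python
# Toy illustration with made-up task shares and speed-ups: if AIs massively
# accelerate some tasks but barely touch others, the overall speed-up is
# dominated by the un-accelerated tasks -- the new bottlenecks.
tasks = {
    # task: (share of current research time, assumed AI speed-up)
    "running experiments / writing code":         (0.60, 30),
    "conceptual work and research direction":     (0.25, 2),
    "coordination, review, deployment decisions": (0.15, 1),
}

old_time = sum(share for share, _ in tasks.values())                  # = 1.0
new_time = sum(share / speedup for share, speedup in tasks.values())  # = 0.295
print(f"overall speed-up: {old_time / new_time:.1f}x")                # ~3.4x, not 30x
```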