To be clear, I agree that reducing the availability of compute will substantially slow algorithmic research. So, export controls that do a good job of reducing the available amount of compute would slow algorithmic research progress.
If we have a fixed quantity (and quality) of human researchers and reduce the amount of compute by 5x at current margins, I'd expect algorithmic progress to go maybe 2x slower.[1]
If AI R&D is fully automated, then I'd expect 5x less compute to make algorithmic progress go maybe 3.5x slower, as both additional parallel researchers and experiments require compute.[2][3]
[1] This is based on assuming Cobb-Douglas for the marginal returns, using something like $\text{serial\_labor\_speed}^{0.5} \cdot \text{compute}^{0.5}$, for $5^{0.5} \approx 2.2$. If you back out the numbers from the AI 2027 survey, you get a compute exponent more like 0.43, which yields $5^{0.43} \approx 2$.
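For concreteness, here's a minimal Python sketch of the arithmetic in [1]; the function name and parameterization are mine, assuming the Cobb-Douglas exponents above:

```python
# Cobb-Douglas toy model: progress_rate ∝ labor^(1 - alpha) * compute^alpha.
# With human labor held fixed, cutting compute by a factor r slows
# algorithmic progress by a factor of r^alpha.

def slowdown_fixed_labor(compute_cut: float, alpha: float) -> float:
    """Slowdown factor when only compute shrinks and labor is unchanged."""
    return compute_cut ** alpha

print(slowdown_fixed_labor(5, 0.5))   # ≈ 2.24, my rough guess for alpha
print(slowdown_fixed_labor(5, 0.43))  # ≈ 2.00, alpha backed out of the AI 2027 survey
```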
[2] Naively, cutting compute reduces experiment compute and also reduces the number of parallel AI researchers (because AI R&D is fully automated with AIs), but doesn't alter the serial speed or quality of these AI researchers. But there is a parallelization penalty: 10x more parallel workers is less good than 10x faster workers. We'll say the marginal penalty is an exponent of around 0.5. So you maybe get an overall slowdown of $\text{parallel\_labor}^{0.5 \cdot 0.5} \cdot \text{compute}^{0.5}$, for $5^{0.5 \cdot 0.5 + 0.5} = 5^{0.75} \approx 3.34$. If you back out the numbers from the AI 2027 survey, you get a compute exponent of 0.43 and an overall return to parallel labor of 0.32, for $5^{0.43 + 0.32} \approx 3.34$. My guess is both of these slightly underestimate the slowdown, as I expect the exponent for returns to compute for experiments to be higher in the fully automated AI R&D regime, and having less compute would also somewhat hit speed (and maybe quality?) in some cases. So I rounded up to 3.5.
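And the analogous sketch for the fully automated case in [2]; again the helper is hypothetical, just restating the exponent arithmetic under the assumptions above:

```python
# Fully automated regime: cutting compute by a factor r also cuts the number
# of parallel AI researchers by r. With a labor exponent that already folds in
# the parallelization penalty (e.g. 0.5 * 0.5 = 0.25, or 0.32 from the AI 2027
# survey), the slowdown is r^(labor_exp + alpha).

def slowdown_automated(compute_cut: float, labor_exp: float, alpha: float) -> float:
    """Slowdown when both experiment compute and parallel AI labor shrink."""
    return compute_cut ** (labor_exp + alpha)

print(slowdown_automated(5, 0.5 * 0.5, 0.5))  # ≈ 3.34 with my rough exponents
print(slowdown_automated(5, 0.32, 0.43))      # ≈ 3.34 with AI 2027 survey numbers
```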
[3] If external data is a key limiting factor, you'd expect a smaller slowdown, but I'm skeptical this will make a big difference. Also, external data would presumably still come in faster with more inference compute, both because you can serve more AIs and because those AIs can gather more data.