Both labor and compute have been scaled up at the big AI companies over the last several years. My understanding is that the scaling of compute was more important for algorithmic progress.
That may be the case, but I suppose that in the last several years, compute has been scaled up more than labor. (Labor cost is entirely recurring, while compute cost is a one-time purchase plus a recurring electricity cost, and progress in compute hardware, from smaller integrated circuits, means that compute cost is decreasing over time.) But obviously that doesn’t necessarily mean that an AI company A with access to 2x FLOP/s of compute and y AI researchers has an advantage over a company B with only x FLOP/s of compute but 2y researchers.
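A minimal sketch of that cost-structure point, with entirely made-up numbers (hardware price, lifetime, power draw, electricity price, and salary below are all hypothetical, just to show how amortized compute compares against fully recurring labor):

```python
# Toy comparison of amortized compute cost vs. recurring labor cost.
# Every number below is a hypothetical illustration, not a real figure.

gpu_price = 30_000          # $ per accelerator, one-time purchase (assumed)
gpu_lifetime_years = 4      # depreciation horizon (assumed)
gpu_power_kw = 1.0          # average draw incl. cooling overhead (assumed)
electricity_per_kwh = 0.10  # $ per kWh (assumed)
hours_per_year = 8760

researcher_salary = 500_000  # $ per year, fully recurring (assumed)

# Annualized cost of one accelerator: amortized capex + electricity.
gpu_annual = (gpu_price / gpu_lifetime_years
              + gpu_power_kw * hours_per_year * electricity_per_kwh)
print(f"one accelerator per year: ${gpu_annual:,.0f}")
print(f"accelerators per researcher-salary equivalent: {researcher_salary / gpu_annual:.1f}")
```

Under these assumed numbers one researcher-salary buys dozens of accelerator-years, which is one way of seeing why compute has been easier to scale than labor.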
In fact, I think in that sense labor is likely more important than compute for algorithmic progress. And that comparison doesn’t seem so far from reality if you model A as a US company with cheaper access to compute and B as a Chinese company with cheaper access to labor (due to lower wages).
I don’t think parallelism works very well among employees, while it works great for compute.
I agree that labor is probably a somewhat more important input (as in, if you offered an AI company the choice between making its workers 2x faster in serial speed and getting 2x more compute, it would do better taking the 2x serial speed). I’d guess the AI companies are roughly indifferent between 1.6x serial speed and 2x compute, but more like 1.35x vs 2x is also plausible.
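One way to make those indifference guesses concrete: if you assume (this is my assumption, not stated above) that algorithmic progress behaves like a Cobb-Douglas function of labor speed and compute, the indifference point pins down the implied labor share:

```python
import math

# Assumed toy model: progress ∝ labor_speed**a * compute**(1 - a).
# Indifference between an s-times labor speedup and a c-times compute increase
# means s**a == c**(1 - a), i.e. a = ln(c) / (ln(s) + ln(c)).

def implied_labor_share(s: float, c: float) -> float:
    """Labor exponent a implied by indifference between s-x labor and c-x compute."""
    return math.log(c) / (math.log(s) + math.log(c))

for s in (1.6, 1.35):
    a = implied_labor_share(s, 2.0)
    print(f"indifferent between {s}x labor speed and 2x compute -> labor share a ≈ {a:.2f}")
# ≈ 0.60 for 1.6x vs 2x, ≈ 0.70 for 1.35x vs 2x
```

Under that assumed functional form, the 1.6x-vs-2x guess corresponds to labor mattering somewhat more than compute (share ≈ 0.6), and the 1.35x-vs-2x guess to a share of ≈ 0.7.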
It seems plausible to me that well-enforced export controls cut compute by a factor of 3 for AI companies in China, and a larger factor is plausible longer term. This would substantially reduce the rate of algorithmic progress IMO.
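Continuing the same (assumed) Cobb-Douglas toy model from above, a 3x compute cut with labor held fixed maps to a rough slowdown factor:

```python
import math

# Same assumed toy model: progress ∝ labor**a * compute**(1 - a).
# Holding labor fixed, cutting compute by 3x scales progress by 3**-(1 - a).
for a in (0.60, 0.70):  # labor shares implied by the indifference guesses above
    slowdown = 3 ** (1 - a)
    print(f"labor share {a:.2f}: progress per unit time falls by ~{slowdown:.2f}x "
          f"({(1 - 1 / slowdown) * 100:.0f}% slower)")
# roughly 1.4x to 1.6x slower algorithmic progress, under these assumptions
```

That is, under these assumptions a 3x compute cut slows algorithmic progress by very roughly a third, which seems consistent with calling the effect substantial but not crippling.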