Sure, but worth noting that a strong version of this view also implies that all algorithmic progress to date has no relevance to powerful AI (at least if powerful AI is trained with 1-2 OOMs more compute than current frontier models).
Like, this view must implicitly think that there is a different good being produced over time, rather than thinking there is a single good “algorithmic progress” which takes in inputs “frontier scale experiments” and “labor” (because frontier scale isn’t a property that exists in isolation).
This is at least somewhat true as algorithmic progress often doesn’t transfer (as you note), but presumably isn’t totally true as people still use batch norm, MoE, transformers, etc.
Yes, I think that what it takes to advance the AI capability frontier has changed significantly over time, and I expect this to continue. That said, I don’t think that existing algorithmic progress is irrelevant to powerful AI. The gains accumulate, even though we need increasing resources to keep them coming.
AFAICT, it is not unusual for productivity models to account for stuff like this. Jones (1995) includes it in his semi-endogenous growth model, where, as useful innovations accumulate, the rate at which each unit of R&D effort produces further innovations diminishes. That paper notes this was already known in the literature as a "fishing out" effect.
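To make the "fishing out" dynamic concrete, here's a minimal simulation of the knowledge accumulation equation from Jones (1995), dA/dt = δ·L^λ·A^φ with φ < 1, where A is the stock of ideas and L is research effort. The parameter values below are purely illustrative, not calibrated to anything:

```python
def simulate(A0=1.0, L=100.0, delta=0.01, lam=1.0, phi=0.5, steps=50, dt=1.0):
    """Euler simulation of dA/dt = delta * L^lam * A^phi (Jones 1995).

    With phi < 1, each unit of accumulated knowledge A makes further
    discoveries harder -- the "fishing out" effect.
    """
    A = A0
    growth_rates = []
    for _ in range(steps):
        dA = delta * (L ** lam) * (A ** phi) * dt
        growth_rates.append(dA / A)  # proportional growth rate of knowledge
        A += dA
    return A, growth_rates

A_final, rates = simulate()
# With constant research effort L, the proportional growth rate of A
# declines over time: the same R&D input buys less and less progress.
assert rates[0] > rates[-1]
```

The point being: a declining return per unit of effort is built into standard models without requiring that the "good" being produced changes over time; the accumulated stock just gets harder to add to.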