Agree in part.
I have the impression that for reasons I don’t fully understand, scaling up training compute isn’t just a matter of being willing to spend more. One does not simply spend $1B on compute.
Ideas and training compute substitute for each other well enough that I don’t think it’s useful to talk about “[figuring] out how to make AGI before or after we [have] the compute to implement it.” (And when “‘hardware overhang’ first came about” it had a very different usage, e.g. the AI Impacts definition.)