Interesting point. But could I check why you and @Vladimir_Nesov are confident Moore’s law continues at all?
I’d have guessed that maintaining the current rate of hardware efficiency gains requires exponentially increasing chip R&D spending and researcher hours.
But if total spending on chips has plateaued, then R&D spending by Nvidia and the rest of the chip industry has presumably plateaued too, which I think would imply that hardware efficiency gains drop to near zero.
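To make the worry concrete, here is a minimal sketch of the implied model, under purely illustrative assumptions (the difficulty factor and budget are hypothetical, not figures from the thread): if each successive doubling of hardware efficiency costs some constant factor more R&D than the last, then a flat R&D budget buys fewer and fewer doublings per year.

```python
# Purely illustrative sketch of the worry above (hypothetical numbers, not data):
# assume each successive doubling of hardware efficiency costs a constant factor
# more annual R&D than the previous one. With a flat R&D budget, the number of
# doublings achieved per year then shrinks toward zero.

DIFFICULTY_GROWTH = 1.35  # assumed: each doubling costs 35% more R&D than the last
BUDGET = 1.0              # flat annual R&D spend, in units of "cost of the first doubling"

cost_per_doubling = 1.0
for year in range(2025, 2036):
    doublings_this_year = BUDGET / cost_per_doubling
    if year % 5 == 0:
        print(f"{year}: ~{doublings_this_year:.2f} efficiency doublings this year")
    # the next doubling is more expensive, compounded over the doublings just achieved
    cost_per_doubling *= DIFFICULTY_GROWTH ** doublings_this_year
```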
It’s an anchor, something concrete to adjust predictions around; the discussion in this thread is about the implications of the anchor rather than its strength (so being confident in it isn’t really implied). Moore’s law in its original sense of transistor count per die has mostly stopped, but the historical trend seems to be surviving in its price-performance form (which should really be about compute per dollar of datacenter-level total cost of ownership). So maybe it keeps going as it did for decades; specific predictions for what would keep Moore’s law going at any given time were always hard, even as it continued. Currently the driver might be advanced packaging (making the parts of a datacenter outside the chips cheaper per transistor).
If Moore’s law stops even in its price-performance form, then the AI scaling slowdown gets even stronger in 2030-2050 than what this post explores. Also, growth in compute spending probably doesn’t completely plateau (progress in adoption alone would feed growth for many years), and that to some extent compensates for compute not getting cheaper as fast as it used to (if that happens).
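As a rough back-of-the-envelope on that last point (the growth rates below are illustrative assumptions, not figures from the thread or the post): total compute affordable in a year is roughly spending times price-performance, so continued spending growth partially offsets a stall in price-performance, though the combined growth rate drops substantially.

```python
# Back-of-the-envelope sketch with illustrative growth rates (not data): total
# compute affordable per year ~= spending * price-performance. Compare price-
# performance that keeps doubling every ~2 years against a full stall, while
# compute spending keeps growing from continued adoption in both cases.

def total_compute_multiplier(years, spend_growth, price_perf_doubling_years):
    spend, price_perf = 1.0, 1.0
    for _ in range(years):
        spend *= spend_growth
        if price_perf_doubling_years is not None:  # None models a full stall
            price_perf *= 2 ** (1 / price_perf_doubling_years)
    return spend * price_perf

YEARS = 10
trend_continues = total_compute_multiplier(YEARS, spend_growth=1.3, price_perf_doubling_years=2)
trend_stalls = total_compute_multiplier(YEARS, spend_growth=1.3, price_perf_doubling_years=None)
print(f"compute multiplier after {YEARS}y, price-performance doubling every 2y: x{trend_continues:.0f}")
print(f"compute multiplier after {YEARS}y, price-performance stalled:           x{trend_stalls:.0f}")
```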