It’s an anchor: something concrete to adjust predictions around. The discussion in this thread is about the implications of the anchor rather than its strength, so being confident in it isn’t really implied. Moore’s law about transistor count per die has mostly stopped, but the historical trend seems to be surviving in its price-performance form (which should really be measured as compute per datacenter-level total cost of ownership). So maybe it keeps going as it did for decades; specific predictions for what would keep Moore’s law going at any given time were always hard, even as it did continue. Currently the driver might be advanced packaging (making the parts of a datacenter outside the chips cheaper per transistor).
If Moore’s law stops even in its price-performance form, then the AI scaling slowdown in 2030-2050 gets even stronger than what this post explores. Also, growth in compute spending probably doesn’t completely plateau (progress in adoption alone would feed growth for many years), and that to some extent compensates for compute not getting cheaper as fast as it used to (if that happens).
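To make the compensation point concrete, here is a minimal back-of-envelope sketch. Effective compute is the product of spending and price-performance, so continued spending growth partly offsets a stalled Moore’s law. The growth rates below (30%/year spending growth, 2-year price-performance doubling) are hypothetical placeholders for illustration, not figures from the thread:

```python
# Back-of-envelope: effective compute ~ spending * price-performance.
# All growth rates below are hypothetical placeholders.

def effective_compute(years, spend_growth, price_perf_doubling_years):
    """Relative effective compute after `years`, normalized to 1.0 at year 0."""
    spend = spend_growth ** years  # e.g. 1.3 => 30%/year spending growth
    if price_perf_doubling_years is None:
        price_perf = 1.0  # price-performance Moore's law has stalled
    else:
        price_perf = 2.0 ** (years / price_perf_doubling_years)
    return spend * price_perf

for label, doubling in [("price-performance keeps doubling every 2 years", 2.0),
                        ("price-performance stalls", None)]:
    total = effective_compute(10, spend_growth=1.3,
                              price_perf_doubling_years=doubling)
    print(f"{label}: ~{total:,.0f}x effective compute after 10 years")
```

With these placeholder rates, spending growth alone still yields roughly 14x effective compute over a decade versus roughly 440x if price-performance also keeps doubling; the point is only the multiplicative structure, not the specific numbers.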