I agree with all those points. The main point I’m making is just that hardware efficiency seems like it should also be a function of total compute capex, which would mean it moves in the same way as algo progress, and would further exacerbate a slowdown. Do you basically agree with that?
We haven’t yet seen the efficiency fallout of the 2022-2030 rapid scaling, and semi advancements take many years. So if some kind of experience-curve effect wakes up as a central factor in semi efficiency, then the 2030s might be fine on the Moore’s law front. But if it wasn’t legibly a major factor recently, it’s not obvious it must become all that important even with the unusual inputs from AI datacenter scaling.
interesting point