AI infrastructure seems really expensive. I need to actually do the math here (and I haven't, hence this is uncertain), but do we really expect growth to stay on trend given the cost of this buildout in both chips and energy? Can someone really careful please look at this?
This is not a really careful look, but: The world has managed extremely fast (well, trains and highways fast, not FOOM-fast) large-scale transformations of the planet before. Mostly this requires that 1) the cost is worth the benefit to those spending and 2) we get out of our own way and let it happen. I don’t think money or fundamental feasibility will be the limiters here.
Also, consider that training is now, or is becoming, a minority of compute. More and more is going towards inference—aka that which generates revenue. If building inference compute is profitable and becoming more profitable, then it doesn’t really matter how little of the value is captured by the likes of OpenAI. It’s worth building, so it’ll get built. And some of it will go towards training and research, in ever-increasing absolute amounts.
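To make the "is inference worth building?" question concrete, here's a minimal back-of-envelope sketch. Every number in it is a hypothetical placeholder I made up for illustration (accelerator price, power draw, electricity price, serving throughput, revenue per million tokens, utilization), not a real figure for any vendor or lab; the point is only the shape of the calculation, not the answer.

```python
# Hedged back-of-envelope: years to recoup one accelerator's capex
# from inference revenue. ALL inputs are hypothetical placeholders.

def payback_years(capex, power_kw, price_per_kwh,
                  tokens_per_sec, revenue_per_mtok, utilization):
    """Years until inference margin repays the hardware's upfront cost."""
    secs_per_year = 365 * 24 * 3600
    tokens_per_year = tokens_per_sec * utilization * secs_per_year
    revenue = (tokens_per_year / 1e6) * revenue_per_mtok
    energy_cost = power_kw * utilization * 365 * 24 * price_per_kwh
    margin = revenue - energy_cost
    return capex / margin if margin > 0 else float("inf")

# Made-up inputs: $30k accelerator, 1 kW draw, $0.08/kWh,
# 1000 tokens/s served, $1 revenue per million tokens, 50% utilization.
print(f"payback: {payback_years(30_000, 1.0, 0.08, 1000, 1.0, 0.5):.1f} years")
```

If margin is positive and the payback period beats the hardware's useful life, building is rational regardless of how value is split among the model providers, which is the claim above.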
Even if many of the companies building data centers die out because of a slump of some kind, the data centers themselves, and the energy to power them, will still exist. Plausibly the second buyers then get the infrastructural benefits at a much lower price—kinda like the fiber optic buildout of the 1990s and early 2000s. AKA “AI slump wipes out the leaders” might mean “all of a sudden there’s huge amounts of compute available at much lower cost.”
I think this is a question on which we should spend lots of time actually thinking and writing. I'm not sure my rough approximations will land anywhere near the final answer.