[Question] How long do AI companies have to achieve significant capability gains before funding collapses?
It’s an open secret that essentially all major AI companies are burning cash and running at massive losses.
If progress is slow enough that X years of continued funding are required before AI capabilities become useful enough to produce a net ROI, at what value of X do the economics collapse, resulting in a major downscaling or total collapse of these companies?
I don’t think significant capability gains are being priced in, so that’s not a crux. Burning cash is normal, even healthy, for startups; the unusual thing about AI startups is their size and growth rate. It’s plausible that OpenAI and Anthropic will grow into revenues of about $100bn through adoption alone, even at a level of capabilities reasonably close to what’s already available. A 1 GW AI datacenter campus costs $40-50bn in capex (buildings, infrastructure, and compute), or $10-15bn per year to use (with a long-term commitment). So they might be able to afford about 5 GW of AI compute each (in total across that company’s datacenters). This scale is still within the means of tech giants to backstop, so Meta and Google could also persist at this level.
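These figures can be sanity-checked with a quick back-of-the-envelope calculation (a sketch: the $100bn revenue and $10-15bn per GW-year figures are the estimates above, while the fraction of revenue available for compute is my own assumption):

```python
# Back-of-the-envelope: how much AI compute ~$100bn/year of revenue
# could sustain, using the operating-cost figures quoted above.
revenue = 100e9                  # plausible annual revenue, $
cost_per_gw_year_low = 10e9     # annual cost to use 1 GW, low estimate, $
cost_per_gw_year_high = 15e9    # annual cost to use 1 GW, high estimate, $

# Upper bound if every revenue dollar went to compute:
max_gw = (revenue / cost_per_gw_year_high, revenue / cost_per_gw_year_low)
print(f"All revenue on compute: {max_gw[0]:.1f}-{max_gw[1]:.1f} GW")

# More realistically only part of revenue can go to compute; at 50-75%
# (an assumption, not from the comment) with the high cost estimate:
for frac in (0.5, 0.75):
    print(f"{frac:.0%} of revenue: {frac * revenue / cost_per_gw_year_high:.1f} GW")
```

At roughly three-quarters of revenue going to compute under the high cost estimate, this lands at 5 GW, consistent with the figure above.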
The conservative estimate does imply a slowdown in frontier AI training compute, since the current trend calls for 5 GW individual training systems per AI company in 2028 (rather than 5 GW in total, inference included). It would take significant capability gains to sustain the trend (even ignoring the logistical difficulties), and at that point debt will be involved in an essential way. So the consequences of a slowdown that happens after such capability gains will be more serious than if it starts in the near future.
There’s no hard threshold for a financially induced sector collapse, and the trigger is often not directly attributable to the sector that collapses. The dot-com crash was tied to Federal Reserve interest rate increases that set off a sell-off as investors moved toward less speculative investments.
AI is in a fairly safe position right now through sheer variety of vested interests. Government, construction, infrastructure, computing hardware, software, and early corporate adopters of AI are all doing everything they can to keep the ball rolling. They’ve crossed a line where sunk costs have an outsized role in future decisions. There’s also the wildcard of AI being deemed a strategic defense asset.
The present state of spending will continue until there’s some catastrophic event that scares people off or public pressure forces a reduction in scale.
My money is on public pressure leading to change. Data centers are being publicly subsidized, and the general public is beginning to push back. One or two public service commissions refusing to approve energy and water subsidies will lead to wide-scale changes that put the brakes on. Things will move out of startup mode and into the realm of actual business.
I was working in the industry during the dotcom boom. And I can say with confidence that by the start of 2000, the startups had gotten very, very dumb. In many cases, they were dumber than the also-ran crypto and AI startups we’ve seen recently. A couple of them back in 2000 were my clients, though I tried to get rid of the dumb ones quickly.
There’s a critical moment in every bubble when you start hearing investment advice from normies at cocktail parties, and people start publishing books insisting, “This time is different and the market will go up forever!” When your non-technical grand-uncle starts buttonholing you about obscure cryptocurrencies, the market has run out of suckers. And a crash is coming.
The Fed will generally raise interest rates around this point, on the theory that if people have enough money to invest in ideas that dumb, it’s time to “take the punchbowl away.” There’s a bunch of economic modeling behind this, of course. But a good intuition is that if there’s enough money in the system to invest in thousands of extremely dumb companies, we probably have too much liquidity.
I suppose the AI bubble could genuinely be different, in that we might build SkyNet before the bubble collapses on its own. That would, I guess, represent the popping of the 10,000-year Homo sapiens bubble. But I’d prefer to avoid that.
I like the idea of cocktail party investment advice as an economic bellwether. That’s a good observation.
This is not a response per se, but an expression of displeasure. A few companies decided it was appropriate to gamble a significant fraction of GDP on behalf of everyone—without anyone’s permission. And now we are all locked into an economic gambit with no offramp. We either dedicate everything to making this work or we all live in the hellish aftermath of a hideously large bubble bursting. It’s as if we conjured the economic equivalent of Roko’s Basilisk into being.
To answer the question posed, X is between 0 and 1.
Why do I believe this? Because if companies are already resorting to financial engineering to buy themselves a little more time to find product-market fit, then they have already resorted to extraordinary measures. Which means they have run out of alternatives. Which means we are not in an early stage of the bubble.
I am willing to bet you at 5:1 odds in your favour that OpenAI does not lose more than 50% of its valuation within 1 year from today for up to $1000 of my own money.
I agree with Eliezer’s point here that the AI bubble could pop without a recession under a competent Fed: https://xcancel.com/ESYudkowsky/status/1971311526767476760#m, and I think Jerome Powell is likely competent enough to handle this (less certain about potential successors).
That tweet doesn’t sound right to me. Or at least, to me there’s a simpler and more direct explanation of bubbles in terms of real resources, without having to mention money supply or central banks at all.
During a bubble, people are having fun because resources are being misallocated: misallocated to their fun. Some rich chumps are throwing their resources at something useless, like buying tulips. That bankrolls the good times for everyone else: the tulip-growers, the hairdressers who serve the tulip-growers, and so on. But at some point the rich chumps realize that tulips aren’t that great, and that they burned their resources just to build a big bonfire and keep everyone warm for a while. When they realize that, the tulip-growers will lose their jobs, and then the hairdressers who served them, and so on. That’s the pain of the bubble ending, and it’s unavoidable, central bank or no.