Huh? Yes, we’re unprepared to capitalize on a crash, because how would we? This post doesn’t say how one might do that. It seems you’ve got ideas, so why write this without saying what they are, or what you want us to do or think about?
Yes, I get it: you don’t just want to read about the problem but about a potential solution.
The next post in this sequence will summarise the plan by those experienced organisers.
These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement.
So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself have rushed writing about new plans before (when I added nuance to a press release put out by a time-pressed colleague at Stop AI). That backfired because I hadn’t addressed obvious concerns. This time, I drafted a summary that the organisers liked but still want to refine. So they will run sessions with me and a facilitator to map out stakeholders and their perspectives before going public with the plans.
Check back here in a month. We should have a summary ready by then.
The scale of training and R&D spending by AI companies can be reduced on short notice, while the global inference buildout costs much more and needs years of use to pay for itself. So an AI slowdown mostly hurts the cloud providers and makes compute cheap due to oversupply, which might be a wash for AI companies. Confusingly, major AI companies are closely tied to cloud providers, but OpenAI is distancing itself from Microsoft, and Meta and xAI are not cloud providers, so they wouldn’t suffer as much. In any case, the tech giants will survive; it’s losing their favor that seems more likely to damage AI companies, leaving them unable to invest as much in R&D.
This is a solid point that I forgot to take into account here.
What happens to the GPU clusters inside the data centers built out before the market crash?
If user demand slips and/or various companies stop training, compute prices will slump. As a result, cheap compute will be available to the remaining R&D teams, for at least the three or so years that the GPUs last.
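As a rough sketch of why prices would slump toward operating cost once the hardware is a sunk cost (all numbers below are illustrative assumptions, not figures from this thread):

```python
# Back-of-envelope: price floor for a GPU-hour once the hardware is a sunk cost.
# All inputs are illustrative assumptions for the sake of the argument.

gpu_price = 30_000        # $ per GPU, paid before the crash (sunk)
lifetime_years = 3        # rough useful life assumed above
hours_per_year = 8760
utilization = 0.7         # fraction of hours actually sold

power_kw = 1.0            # GPU plus its share of cooling/networking, in kW
electricity = 0.08        # $ per kWh
ops_overhead = 0.05       # $ per GPU-hour for staff, maintenance, etc.

# Pre-crash rental prices must recover the hardware too:
amortization = gpu_price / (lifetime_years * hours_per_year * utilization)
full_cost = amortization + power_kw * electricity + ops_overhead

# Post-crash, with the buildout already paid for, competition under
# oversupply can push prices toward marginal cost alone:
marginal_cost = power_kw * electricity + ops_overhead

print(f"amortized cost/hr: ${full_cost:.2f}")      # ~$1.76
print(f"marginal cost/hr:  ${marginal_cost:.2f}")  # ~$0.13
```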
I find that concerning. Not only will compute be cheap, but many of the researchers left using that compute will have reached an understanding that scaling transformer architectures on internet-available data has become a dead end. With investor and managerial pressure to release LLM-based products gone, researchers will explore their own curiosities. This is the time you’d expect the persistent researchers to invent and tinker with new architectures that could end up being more compute- and data-efficient at encoding functionality.
~ ~ ~
I don’t want to skip over your main point. Is your argument that AI companies will be protected from a crash, since their core infrastructure is already built?
Or more precisely:
that since the data centers were built out before the crash, compute prices end up converging on little more than the cost of the energy and operations needed to run the GPU clusters inside,
which in turn acts as a financial cushion for companies like OpenAI and Anthropic, for whom inference costs are now lower,
where those companies can quickly scale back expensive training and R&D while offering their existing products to remaining users at lower cost,
as a result of which those companies can continue to operate during the period when funding has dried up, waiting out the ‘AI winter’ until investors and consumers are willing to commit their money again.
That sounds right, given that compute accounts for over half of their costs. Particularly if the companies secure another large VC round ahead of a crash, they should be able to weather the storm. E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that its main backer SoftBank can lend enough money, etc.).
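As a toy sketch of that “weather the storm” arithmetic (only the $40 billion round and the “over half” compute share come from this thread; the burn rate and crash parameters are made-up assumptions):

```python
# Toy runway calculation under a crash, using placeholder numbers.
# Only the $40B round is from the discussion above; the rest are assumptions.

raised = 40e9             # the committed round mentioned above
annual_burn = 10e9        # assumed pre-crash spend per year
compute_share = 0.55      # "over half" of costs are compute
compute_price_drop = 0.5  # assume oversupply halves compute prices
training_cut = 0.6        # assume most training/R&D compute is paused

compute_cost = annual_burn * compute_share
other_cost = annual_burn - compute_cost

# Remaining compute (inference for existing products) gets cheaper,
# and the training share is scaled back on short notice:
post_crash_compute = compute_cost * (1 - training_cut) * compute_price_drop
post_crash_burn = post_crash_compute + other_cost

print(f"pre-crash runway:  {raised / annual_burn:.1f} years")     # 4.0
print(f"post-crash runway: {raised / post_crash_burn:.1f} years") # ~7.1
```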
Just realised that your point seems similar to Sequoia Capital’s:
“declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation.”
~ ~ ~
A market crash is by itself not enough to deter these companies from continuing to integrate increasingly automated systems into society.
I think a coordinated movement is needed; one that exerts legitimate pressure on our failing institutions. The next post will be about that.
VC money, in my experience, doesn’t typically mean that the VC writes a check and the startup then has it to do with as it wants; it’s typically given out in chunks, and often there are provisions for the VC to change their mind if they don’t think it’s going well. This may be different for loans, and it’s possible that a sufficiently hot startup can get the money irrevocably; I don’t know.
I agree that AI company finances aren’t that good, but my personal opinion is that there won’t be a dramatic collapse which significantly affects how people and policymakers perceive AI and AI companies.
AI companies which fail will be bought up by companies which want an AI team. E.g. if OpenAI fails, it might be folded into Microsoft.
AI companies probably won’t fail at the same time.
Even if many AI companies fail, actual usage of AI won’t drop. Whatever brand name the most popular AI has will be perceived as the winning AI company (even if it isn’t an AI company).
Many companies do interesting research for long periods without expecting much profit, e.g. Bell Labs and Google. Jeff Bezos funded his space company without expecting profit, purely driven by space-travel mania. Now that AI is as cool as space travel, Elon Musk may fund xAI in the same way.
The only “outrage” will come from investors who lost their money, but they are too money-driven to do anything with their outrage. They are used to losing money on failed bets, and they always expected AI companies to be high-risk, high-reward.
This is just my current vague opinion; I’m not saying that you’re wrong and I’m right.
Thanks for your takes! Some thoughts on your points:
Yes, OpenAI has useful infrastructure and brands. It’s hard to imagine a scenario where they wouldn’t just downsize and/or be acquired by e.g. Microsoft.
If OpenAI or Anthropic goes down like that, I’d be surprised if some other AI companies didn’t go down with them. This is an industry that relies heavily on stories convincing people to buy into the promise of future returns, given that most companies are losing money on developing and releasing large models. When those stories fail to play out for an industry leader, the common awareness of that failure will cascade into people dropping their commitments throughout the industry.
AI companies may fail in part because people stop using their products. For example, if a US recession happens, paid users may switch to cheaper alternatives like DeepSeek’s, or stop using the tools altogether. Also, ChatGPT started as a flashy product that relied on novelty and future promises to get people excited to use it. After a while, people get bored of a product that isn’t changing much anymore and isn’t actually delivering on OpenAI’s proclamations of how rapidly AI will improve.
Sure, companies fund interesting research. At the same time, do you know of other examples of $600 billion+ being invested yearly into interesting research without expectations of much profit?
Other communities I’m in touch with are already outraged about AI. This includes creative professionals, tech privacy advocates, families targeted by deepfakes, tech-aware environmentalists, some Christians, and so forth. More broadly, there has been growing public frustration about tech oligarchs extracting wealth while taking over the government, about a ‘rot economy’ that pushes failing products, about algorithmic intermediaries creating a sense of disconnection, and about a lack of stable, dignified jobs. ‘AI’ sits at the intersection of all of those problems, and has therefore become a salient symbol for communities to target. An AI market crash, alongside other correlated events, can bring their frustrations to the surface and magnify them.
Those are my takes. Curious if this raises new thoughts.
:) thank you for saying thanks and replying.
You’re right, $600 billion/year sounds pretty unsustainable. That’s like 60 OpenAIs, and more than half the US military budget. Maybe the investors pouring in that money will eventually run out of money they’re willing to invest, and it will shrink. I think there is a 50% chance that at some point before we build AGI/ASI, spending on AI research will be halved (compared to where it is now).
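The comparisons roughly check out; here’s a quick sanity check (the military budget is a ballpark assumption, and the per-OpenAI figure is just what the “60 OpenAIs” ratio implies):

```python
# Quick sanity check of the comparisons above. The military budget is a
# rough outside figure; the OpenAI spend is implied by the ratio itself.
ai_spend = 600e9               # the $600 billion/year from this thread
us_military = 850e9            # ballpark current US defense budget (assumption)
openai_annual = ai_spend / 60  # implied by the "60 OpenAIs" comparison

print(ai_spend / us_military)  # ~0.71 -> indeed more than half
print(openai_annual / 1e9)     # 10.0 -> implied OpenAI yearly spend, in $B
```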
It’s also a good point that the failure might cascade. I’m reminded of people discussing whether something like the “dot-com bubble” will happen to AI, which I somehow didn’t think of when writing my comment.
Right now my opinion is that there’s a 25% chance of a cascading market crash when OpenAI et al. finally run out of money. A lot of seemingly stable things have unexpectedly crashed, and AI companies don’t look more stable than those did. It’s one possible future.
I still think the possible future where this doesn’t happen is more likely, because one company failing does not dramatically reduce the expected value of future profits from AI; it just moves it elsewhere.
I agree that “AI Notkilleveryoneism” should be friends with these other communities who aren’t happy about AI.
I still think the movement should work with AI companies and lobby the government. Even if AI companies go bankrupt, AI researchers will move elsewhere and continue to have influence.
Glad to read your thoughts!
Agreed on being friends with communities who are not happy about AI.
I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.
I don’t follow the economics of AI at all, but my model is that Google (Gemini) has oceans of money and would therefore be less vulnerable in a crash, and that OpenAI and Anthropic have rich patrons (Microsoft and Amazon respectively) who would have the power to bail them out. xAI is probably safe for the same reason, the patron being Elon Musk. China is a similar story, with the AI contenders either being their biggest tech companies (e.g. Baidu) or sponsored by them (Alibaba and Tencent being big investors in “AI 2.0”).
Preparing now builds resilience; we can lead after the crash.
There is the possibility of a self-reinforcing negative cycle: models don’t show rapid capability improvements → investors halt pouring money into the AI sector → AI labs focus on cutting costs → models don’t show rapid capability improvements.
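A minimal toy model of that loop (purely illustrative dynamics and parameters, not a forecast): returns to scaling diminish, investors reinvest only while gains look fast, and the pullback itself slows the gains further.

```python
# Toy dynamics of the cycle: diminishing returns to scaling slow capability
# gains, investors pull back when gains look slow, and the pullback slows
# gains further. All parameters are illustrative assumptions.

def simulate(years=8, investment=100.0, efficiency=0.01):
    for t in range(years):
        gain = efficiency * investment  # capability improvement this year
        efficiency *= 0.7               # returns to spending diminish (assumed)
        # Investors scale up on visible progress, cut back hard without it:
        investment *= 1.3 if gain > 0.8 else 0.6
        print(f"year {t}: gain={gain:.2f}, next-year investment={investment:.0f}")

simulate()
```

In this toy run, investment grows for a few years while gains stay above the investors’ threshold, then the gain dips below it and the loop unwinds.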