I agree that AI company finances aren’t that good, but my personal opinion is that there won’t be a dramatic collapse which significantly affects how people and policymakers perceive AI and AI companies.
AI companies which fail will be bought up by companies which want an AI team. E.g. if OpenAI fails, it might be folded into Microsoft.
AI companies probably won’t fail at the same time.
Even if many AI companies fail, actual usage of AI won’t drop. Whichever company owns the most popular AI brand will be perceived as the winning AI company (even if it isn’t primarily an AI company).
Many companies do interesting research for long periods without expecting much profit, e.g. Bell Labs and Google. Jeff Bezos funded his space company without expecting profit, driven purely by space-travel mania. Now that AI is as cool as space travel, Elon Musk may fund xAI in the same way.
The only “outrage” will come from investors who lost their money, but they are too money-driven to do anything with their outrage. They are used to losing money on failed bets, and they always expected AI companies to be high-risk, high-reward.
This is just my current vague opinion, I’m not saying that you’re wrong and I’m right.
Thanks for your takes! Some thoughts on your points:
Yes, OpenAI has useful infrastructure and brands. It’s hard to imagine a scenario where they wouldn’t just downsize and/or be acquired by e.g. Microsoft.
If OpenAI or Anthropic goes down like that, I’d be surprised if some other AI companies don’t go down with them. This is an industry that relies heavily on stories convincing people to buy into the promise of future returns, given that most companies are losing money on developing and releasing large models. When those stories fail to play out for an industry leader, the shared awareness of that failure will cascade into people dropping their commitments throughout the industry.
AI companies may fail in part because people stop using their products. For example, if a US recession happens, paid users may switch to cheaper alternatives like DeepSeek’s, or stop using the tools altogether. Also, ChatGPT started as a flashy product that relied on novelty and future promises to get people excited to use it. After a while, people get bored of a product that isn’t changing much anymore and isn’t actually delivering on OpenAI’s proclamations of how rapidly AI will improve.
Sure, companies fund interesting research. At the same time, do you know other examples of $600 billion+ being invested yearly into interesting research without expectations of much profit?
Other communities I’m in touch with are already outraged about AI. This includes creative professionals, tech privacy advocates, families targeted by deepfakes, tech-aware environmentalists, some Christians, and so forth. More broadly, there has been growing public frustration about tech oligarchs extracting wealth while taking over the government, about a ‘rot economy’ that pushes failing products, about algorithmic intermediaries creating a sense of disconnection, and about a lack of stable, dignified jobs. ‘AI’ sits at the intersection of all of those problems, and has therefore become a salient symbol for communities to target. An AI market crash, alongside other correlated events, could bring those frustrations to the surface and magnify them.
Those are my takes. Curious if this raises new thoughts.
:) thank you for saying thanks and replying.
You’re right, $600 billion/year sounds pretty unsustainable. That’s like 60 OpenAIs, and more than half the US military budget. Maybe the investors pouring in that money will eventually run out of money they’re willing to invest, and it will shrink. I think there is a 50% chance that at some point before we build AGI/ASI, spending on AI research will be halved compared to where it is now.
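As a rough back-of-the-envelope check on those comparisons (assuming OpenAI’s annual spend is on the order of $10B and the US military budget roughly $900B; neither per-item figure is stated in the thread):

```python
# Sanity-checking the "$600 billion/year" comparisons above.
# The per-item figures are rough assumptions, not facts from the thread.
industry_spend = 600e9        # yearly AI investment cited in the thread
openai_spend = 10e9           # assumed annual OpenAI spend (rough)
us_military_budget = 900e9    # assumed US military budget (rough)

print(industry_spend / openai_spend)        # roughly "60 OpenAIs"
print(industry_spend / us_military_budget)  # fraction of the military budget
```

Under those assumptions the ratios come out to about 60 and about two-thirds, consistent with the “60 OpenAIs” and “more than half” characterizations.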
You also make a good point about how the failure might cascade. I’m reminded of people discussing whether something like the “dot-com bubble” will happen to AI, which I somehow didn’t think of when writing my comment.
Right now I’d put a 25% chance on a cascading market crash when OpenAI et al. finally run out of money. A lot of seemingly stable things have crashed unexpectedly, and AI companies don’t look any more stable than those did. It’s one possible future.
I still think the possible future where this doesn’t happen is more likely, because one company failing does not dramatically reduce the expected value of future profits from AI; it just moves that value elsewhere.
I agree that “AI Notkilleveryoneism” should be friends with these other communities who aren’t happy about AI.
I still think the movement should work with AI companies and lobby the government. Even if AI companies go bankrupt, AI researchers will move elsewhere and continue to have influence.
Glad to read your thoughts!
Agreed on being friends with communities who are not happy about AI.
I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.