You’re right, $600 billion/year sounds pretty unsustainable. That’s like 60 OpenAIs, and more than half the US military budget. Maybe the investors pouring in that money will eventually run out of money they’re willing to invest, and spending will shrink. I think there’s a 50% chance that, at some point before we build AGI/ASI, annual spending on AI research will be halved from its current level.
It’s also a good point that the failure might cascade. It reminds me of people discussing whether something like the “dot-com bubble” will happen to AI, which somehow didn’t occur to me when writing my comment.
Right now I’d put it at 25% that there will be a cascading market crash when OpenAI et al. finally run out of money. A lot of seemingly stable things have crashed unexpectedly, and AI companies don’t look any more stable than they did. It’s one possible future.
I still think the future where this doesn’t happen is more likely, because one company failing doesn’t dramatically reduce the expected value of future profits from AI; it just moves that value elsewhere.
I agree that “AI Notkilleveryoneism” should be friends with these other communities who aren’t happy about AI.
I still think the movement should work with AI companies and lobby the government. Even if AI companies go bankrupt, AI researchers will move elsewhere and continue to have influence.
Agreed on being friends with communities who are not happy about AI.
I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.
:) thank you for saying thanks and replying.
Glad to read your thoughts!