To get people to worry about the dangers of superintelligence, it seems like you need to convince them of two things:
1. Current models are a strong signal that superintelligence is very likely someday. If the person you’re talking to has only encountered ChatGPT 3.5 or AI slop on Facebook, this seems like a wildly unlikely claim. If the person you’re talking to is a senior software developer who has been using Claude Code Opus 4.5 in anger for two weeks, I’m starting to see the denial cracking. I’m not even saying LLMs will get us there, or that we won’t have another AI Winter in the immediate future. But more and more developers in the trenches are starting to see that future somewhere in the distance, much in the same way that middle-aged people start seeing their own eventual death.
2. Making human labor and intelligence competitively obsolete has a lot of weird consequences, many of them deeply scary. What if we lived in a world where human labor and human intelligence were ludicrously economically inefficient? Like, we’ve introduced a new species that’s smarter than us, and it’s just better at turning resources into results? This isn’t some complicated idea like “convergent instrumental goals”; it’s the most ancient form of political or biological understanding: if you don’t have anything to offer and you can’t compete, then you’re standing on damn thin ice. Natural selection, economic competition, realpolitik, give it whatever name you want. If you’re utterly reliant on the charity of others, and if you can’t count on some shared social matrix, then you lose. Maybe you starve, maybe you get paperclipped, or maybe you get edged out over time and the future goes on without you.
The problem is that if you can’t convince people of (1), they won’t act. If you convince people of (1) but not (2), then a lot of them go off and found AI labs or invest heavily in acceleration, making the problem worse. I don’t know how to convince people of both (1) and (2). It requires too much wild speculation about the future. And humans have difficulty envisioning that a disease in Wuhan might spread to Europe, or that a disease in Europe might spread to the US.