I think you’re probably right about that historical difference. But I don’t agree with the implication that people won’t believe AGI is coming until too late. (I realize this isn’t the main claim you’re making here, but I think you’d agree that’s the most important implication.)
It’s like January 2020 now, when those concerned about Covid were laughed off. That doesn’t mean AGI concerns will keep being dismissed once more evidence hits. The public could easily swing from not nearly concerned enough to making panicked demands for mass action, like shutting down half the economy as a precautionary measure.
Yes, the modern assumption that nothing really changes will slow down recognition of AI’s dangers. But not for long if we’re fortunate enough to get a slowish takeoff and public deployments of useful (and therefore creepy) LLM agents. Of course, that might not happen until we’re too close to internal deployment of a misaligned takeover-capable system like Agent-4 from AI 2027. But it’s looking pretty likely we’ll get such deployments and job replacements before the point of no return, so I think we should at least have some contingency plans in case of dramatic public concern.
AI is in far-mode thinking for most people now, but I predict it’s going to be near-mode for a lot of people as soon as we’ve got inarguable job replacement and more common experience with agentic AI.
I’m the first to talk about how foolish people are compared to our idealized self-conception. People are terrible with abstract ideas. But I think the main reason is that they don’t spend time thinking seriously about those ideas until they’re personally relevant. Humans take a long time to figure out new things; it takes a lot of thought, and it’s also a collective process. As AI becomes a bigger part of public conversation, basic logic like “oh yeah, they’re probably going to build a new species, and that sounds pretty dangerous” will become common.
Note that most of the people talking about AI now are entrepreneurs and AI developers—the small slice of humanity most prone to pro-AI bias. Most other people intuitively fear it, arguably for good reasons.
My reasoning, and an attempt to convey my intuitions on this, are in A country of alien idiots in a datacenter.