I believe (although I admit it is speculation, and the entirety of the argument is too long to have here) that a concrete example—specifically a vivid and violent example—helps ideas stick in the brain, and Hiroshima and Nagasaki were sufficiently vivid and violent to qualify.
As for nuclear war not having happened yet, you’re absolutely correct. I do hold, however, that the period of greatest risk is behind us. Humanity has had seven decades with nuclear weapons to think about what happens when they’re used in anger, as opposed to the few years we had in the period directly after WWII.
In other words, a dangerous new technology seems most dangerous to me when it is newest, because people don't yet understand its risks. It also takes time for the game-theoretic implications (i.e., mutually assured destruction) to seep into the public mind.
With AI, we may not get a vivid and violent example of AGI gone wrong (because it killed us all). The technology (AGI specifically) will be brand-new, so we won’t have any disasters to point to and we won’t have time to adapt culturally to the risks.
That is why I believe that, without a shocking and horrible incident to point to, people will be far more bullish on AGI than they were on nuclear weapons, even though both technologies are potentially apocalyptic.