Well, it’s not clear that actually dropping the bombs has prevented nuclear holocaust. We only know it hasn’t happened YET. Causally, it’s not even clear that those bombs were necessary for us lasting this long. The development and testing that led up to having and using them is pretty strong evidence that humans were willing to risk a (small and contested) possibility of igniting the atmosphere and killing everyone, and we don’t have much reason to believe we’re a lot more cautious now.
I think small-scale AI mishaps are more likely to be taken as encouragement (we just have to make it a little bit better) rather than cautionary (it’s utterly doomed to destroy us all). Any non-catastrophic disaster is evidence that disasters are recoverable.
Common human reasoning just isn’t equipped to deal with tail risks, and I doubt any demonstration will convince most people otherwise. Even experts who are worried will probably downplay any survivable failure, treating it and the reaction to it as insufficient proof of the danger.
I believe (though I admit this is speculation, and the full argument is too long to lay out here) that a concrete example, specifically a vivid and violent one, helps an idea stick in the brain, and Hiroshima and Nagasaki were sufficiently vivid and violent to qualify.
As for nuclear war not having happened yet, you’re absolutely correct. I do hold, however, that the period of greatest risk is behind us. Humanity has had seven decades with nuclear weapons to think about what happens when they’re used in anger, as opposed to the few years we had in the period directly after WWII.
In other words, a dangerous technology seems most dangerous to me when it is newest, because people don’t yet understand the risks associated with it. It also takes time for the game-theoretic implications (e.g. MAD) to seep into the public mind.
With AI, we may not get a vivid and violent example of AGI gone wrong (because it killed us all). The technology (AGI specifically) will be brand-new, so we won’t have any disasters to point to and we won’t have time to adapt culturally to the risks.
Hence my belief that, without a shocking and horrible incident to point to, people will be far more bullish on AGI than they were on nuclear weapons, even though both technologies are potentially apocalyptic.