This is true. Yet capability to attack isn’t the same thing as actually attacking.
Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.
All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn’t exactly “easy” when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing Japan was not under MAD).
I propose a study:
The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.
But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won’t be bound by MAD.
And notice that it didn’t provoke a nuclear war, and the human race still exists. Nuclear weapons weren’t an existential threat until multiple parties obtained them. If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
I don't understand the logic of this sentence. If I create an Earth-destroying bomb in my basement, MAD doesn't apply, but it's still an existential threat. Similar reasoning applies to nanotech, biotech, and AI.
There could be cases in which an older-generation technology can still be used to assure destruction. Say, if the new tech doesn't neutralize ICBMs and nuclear warheads, both sides will still be bound by MAD.
This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.