There have never existed enough nukes in the world to cover the entire area.
Except that the SOTA understanding of the consequences of a nuclear war between the USA and Russia (or the USSR, in the 1980s) is that a major part of mankind would likely die within 2 years, including essentially everyone in the Northern Hemisphere. And God save Argentina, Australia and the other countries of the Southern Hemisphere if someone decides to nuke Yellowstone out of spite...
We’re discussing whether the US could have stopped the Soviet nuclear program in the late 1940s or early 1950s (to see whether that sheds any light on how practical it is to use military power to stop AI “progress”), so what is the relevance of your comment?
But since we’ve started on this tangent, allow me to point out that most of the public discussion about nuclear war (including by The Bulletin of the Atomic Scientists) is wildly wrong. No one had any strong motivation to step into the discussion and correct the misinformation (because no one had a strong motive to argue that there should be a nuclear war) until the last few years, when advocates for AI “progress” started arguing that AI “progress” should be allowed to continue because an aligned superintelligence is our best chance to avert nuclear war, which in their argument is the real extinction risk. At that point, people like me who know that continued AI “progress” is a much more potent extinction risk than nuclear war acquired a strong motive to try to correct the misinformation in the public discourse about nuclear war.