We’re discussing whether the US could have stopped the Soviet nuclear program in the late 1940s or early 1950s (to see whether that sheds any light on how practical it is to use military power to stop AI “progress”) so what is the relevance of your comment?
But since we’ve started on this tangent, allow me to point out that most of the public discussion about nuclear war (including by The Bulletin of the Atomic Scientists) is wildly wrong. Until the last few years, no one had any strong motivation to step into the discussion and correct the misinformation, because no one had a strong motive to advance arguments that there should be a nuclear war. That changed when advocates for AI “progress” started arguing that AI “progress” should be allowed to continue because an aligned superintelligence is our best chance to avert nuclear war, which in their argument is the real extinction risk. At that point, people like me, who know that continued AI “progress” is a much more potent extinction risk than nuclear war, acquired a strong motive to try to correct the misinformation in the public discourse about nuclear war.