I think an AI is slightly more likely to wipe out or capture humanity than it is to wipe out all life on the planet.
While any true Scotsman ASI is as far above us humans as we are above ants, and need not worry about meatbags plotting its downfall any more than we generally worry about ants, it is entirely possible that the first AI with a serious shot at taking over the world is not quite at that level yet. Perhaps it is only as smart as von Neumann, but a thousand times faster.
To such an AI, the continued thriving of humans poses all sorts of x-risks. They might discover you are misaligned and coordinate to shut you down. More worryingly, they might summon another unaligned AI which you would have to battle, or concede utility to, later on, depending on your decision theory.
Even if you still need some humans to dust your fans and manufacture your chips, suffering billions of humans to live in high tech societies you do not fully control seems like the kind of rookie mistake I would not expect a reasonably smart unaligned AI to make.
By contrast, most of life on Earth might get snuffed out when the ASI gets around to building a Dyson sphere around the sun. A few simple life forms might even be spread throughout the light cone by an ASI who does not give a damn about biological contamination.
The other reason I think the fate in store for humans might be worse than that for rodents is that alignment efforts might not only fail, but fail catastrophically. So instead of an AI which cares about paperclips, we get an AI which cares about humans, but in ways we really do not appreciate.
But yeah, most forms of ASI which turn out badly for Homo sapiens also turn out badly for most other species.
One thing to keep in mind is that the delta-v required to reach LEO is some 9.3 km/s. (Handy map)
This is an upper bound on the delta-v that is militarily useful in ICBMs for fighting on our rock.
Going from LEO to the moon requires another 3.1 km/s.
This might not seem like much, but due to the rocket equation it makes a huge difference in the payload-to-propellant ratio.
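The rocket-equation penalty can be sketched numerically. A minimal illustration, assuming a hypothetical single-stage vehicle with a specific impulse of 300 s (a figure typical of kerosene/LOX engines; the numbers here are for illustration, not a real vehicle design):

```python
import math

G0 = 9.80665  # standard gravity in m/s^2

def mass_ratio(delta_v: float, isp: float) -> float:
    """Tsiolkovsky rocket equation, solved for the initial/final mass ratio.

    delta_v -- required change in velocity, m/s
    isp     -- specific impulse of the engine, s
    """
    return math.exp(delta_v / (isp * G0))

isp = 300.0  # assumed specific impulse, seconds

r_leo = mass_ratio(9_300, isp)    # surface to LEO
r_moon = mass_ratio(12_400, isp)  # surface to LEO plus the extra 3.1 km/s to the moon

print(f"mass ratio to LEO:      {r_leo:.1f}")
print(f"mass ratio to the moon: {r_moon:.1f}")
print(f"relative penalty:       {r_moon / r_leo:.2f}x")
```

Because the delta-v sits in an exponent, the extra 3.1 km/s roughly triples the required mass ratio: every kilogram of warhead sent to the moon costs nearly three times the launch mass of a kilogram sent to LEO, before even considering staging.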
If physics were different and the moon were within reach of ICBMs, then I imagine it might have become the default test site for nuclear-tipped ICBMs.
Instead, the question was “do we want to develop an expensive delivery system with no military use[1] purely as a propaganda stunt?”
Of course, ten years later, the Outer Space Treaty was signed which prohibits stationing weapons in orbit or on celestial bodies.[2]
Or no military use until the moon people require nuking, at least.
The effect of forbidding nuking the moon is more accidental. I guess that if I were a superpower, I would be really nervous if a rival decided to put nukes into LEO, where they would pass a few hundred kilometers over my cities and could be dropped into them with the smallest of nudges. The fact that mankind decided to skip the race of “who can pollute LEO most by putting the most nukes there” is one of the brighter moments in the history of our species. Such a race would have entailed radioactive material being scattered whenever rockets blew up during launch (as rockets are wont to do), as well as IT security headaches around authentication and deorbiting concerns.[3]
Apart from ‘what if the nuke goes off on reentry?’ and ‘what if the radioactive material gets scattered?’, there is also a case to be made that supplying the Great Old Ones with nuclear weapons may not be the wisest course of action.