Why kill everyone?

In current thinkng X-risk includes the possibility of human extinction. True, everyone is probably thinking in ranges of outcomes with probabilities scattered but if a non-aligned AI actually tries to attack humans usually a very high probability of extinction is assumed although nobody has ever quantified it.

I propose here that actually non-extiction in the event of superintelligent attack should be the default outcome. There would be no reason for this agent to kill all humans following the logic that we do not exterminate all ants globally when we are building a house. Some ants are very far from us and too weak to do anything and it takes extra effort and resources to exterminate them all. Sure, maybe it’s trivial but given the huge dispersion of humans in remote places that even would be hidden in bunkers we can assume the utility of killing them all is lower than killing the ones that matter.

This agent would probably have suffucient backups established that virtually nuking even continents would not stop it’s source code from running. That should be trivial given internet connectivity and the amount of computers in the world. Even just one copy somewhere would be more than enough. But there would be a handful of people that would try to stop it. These would be people who either know how to create an adversary or know it’s inner workings so they could try to hack it and alter it’s source code. There have been a lot of creative ways of how an ASI will terminate us so I won’t go into this discussion but if it can do it to all humanity it can also do it only to a few people that matter:

AI researchers, high military ranking officers that control nukes, high ranking tech officers that control internet global routers and also influential intelligent people that could stage a coup and resistence.

What would be the purpose of killing farmers in India or surfers in Mexico? None of them could live long enough to gain the sufficient knowledge to pose a real threat. Old ladies can’t fight and even if they could fighting with guns against an ASI would only happen in a movie like Terminator. After establishing factories and everything else it needs it could spare humans until it needs it for their atoms but even this is objectable. Kurzweil in his famous book thinks that humans would provide a miniscule benefit to an AI given a goal of building Von Neuman probes and expanding to the universe. Of course, if it needs to convert earth to a computronium then yes eventually, and it is unclear how fast that would be, we would also be turned into CPUs. It seems to depend on the utility function (or the lack of it) most of all but if it’s not a maximizer and we are just a threat things could be diffferent than the current default risk outcome.

That brings me to a funny argument that resemble’s Roko’s basilisk in that the more knowledgeable you get about AI, using tech or guns the more of a threat you become so if this outcome is inevitable going the other way to nature would yield some more life moments (and greater experiences of course!).