AI-kills-everyone scenarios require robotic infrastructure, but not necessarily nanotech

TL;DR: Nanotech is often presented as the only option for AI to kill everybody, but we can imagine a prosaic AI killing everyone in three steps: taking over a country, building robotic infrastructure, and releasing biological weapons.

Arguments for why nanotech is needed in AI-kills-everyone scenarios

The concept of AI as an existential threat, as portrayed by EY, largely hinges on the development of manufacturing nanotechnology: microscopic self-replicating robots capable of manipulating matter with atomic precision (enabling the proverbial "copying of a berry"). A superintelligent AI must first establish a nanotech infrastructure to conquer the world and eliminate all humans, potentially by ordering protein-coding DNA sequences, if it can compute the correct sequences.

Criteria of robotic infrastructure

A nanotech infrastructure is essential for a comprehensive AI annihilation due to the following factors:

a) Concealment: AI’s actions will remain undetected until the final strike.

b) Rapid elimination: AI will eradicate all humans instantaneously, preventing any interference.

c) Human-independence: the infrastructure can operate without human involvement.

d) Swift construction: Nanotech can be developed rapidly, taking only a few weeks to synthesize the required DNA strands and a few days for the initial nanobot replication. This grants a significant strategic advantage to the first AGI.

e) Route to superintelligence: Nanotech provides superior computational power compared to chips, enabling the first AI to rapidly ascend to an unparalleled superintelligence level. However, it is assumed that the first AI is already mildly superintelligent, as it can design the initial nanobots.

These factors also represent potential vulnerabilities that our last line of AI safety defense could target. For instance, shutting down DNA synthesis labs or developing our own nanotechnology and detection methods could reduce the efficacy of an AI-nanotech assault.
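The "few days for the initial nanobot replication" claim in (d) can be sanity-checked with a back-of-the-envelope exponential-replication calculation. All the numbers below (nanobot mass, target batch mass, doubling time) are illustrative assumptions, not figures from the literature:

```python
import math

# Illustrative assumptions (made up for this sketch, not established figures):
NANOBOT_MASS_KG = 1e-15      # mass of a single nanobot
TARGET_MASS_KG = 1.0         # mass of the first strategically useful batch
DOUBLING_TIME_HOURS = 1.0    # time for the nanobot population to double

# Doublings needed to go from one nanobot to the target mass.
doublings = math.ceil(math.log2(TARGET_MASS_KG / NANOBOT_MASS_KG))
total_hours = doublings * DOUBLING_TIME_HOURS

print(f"{doublings} doublings, about {total_hours / 24:.1f} days")
# -> "50 doublings, about 2.1 days"
```

Under these assumptions the replication phase indeed takes days rather than months, which is why criterion (d) treats nanotech as granting a first-mover advantage.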

Few alternative AI infrastructure ideas possess all these characteristics; possible exceptions include:

a) A scenario where AI takeover occurs in a fully robotized world, with every household owning a home robot;

b) A form of biotechnology where AI can program biological organisms to execute tasks. However, this is a variation of nanotechnology, and AI computations cannot migrate into biological substrates.

Why these criteria?

The necessity for nanotech infrastructure in AI-kills-all situations arises from several factors:

If AI constructs a “conventional” robotic infrastructure, it will be visible and attacked before completion, increasing risks for the AI.

If AI cannot replace all humans, it remains vulnerable, as it requires a constant electricity supply, unlike humans. Destroying the electrical grid exposes AI to danger.

If AI cannot eradicate everyone instantaneously, humans will have time to retaliate.

If AI does not migrate to a nanotech-based computational substrate operating on independent energy sources, it remains dependent on a few data centers which are susceptible to airstrikes, sabotage, kill-switch codes, and power outages.

If AI does not gain computational and data advantages from nanotechnology, other AIs will soon achieve similar intelligence levels.

However, many people assign low prior probabilities to both nanotechnology and superintelligence, and their combination yields an even lower estimate, explaining much of the skepticism surrounding AI risk.
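The point about combined priors is just probability multiplication over claims the skeptic treats as independent. The specific prior values below are made up for illustration:

```python
# Hypothetical priors a skeptic might hold (illustrative numbers only).
p_nanotech = 0.1           # P(atomically precise manufacturing is feasible)
p_superintelligence = 0.1  # P(strongly superintelligent AI is feasible)

# If the classic scenario requires BOTH, and the skeptic treats them as
# independent, the joint prior is the product - an order of magnitude
# lower than either prior alone.
p_classic_scenario = p_nanotech * p_superintelligence
print(round(p_classic_scenario, 4))
```

This is why a scenario that requires neither ingredient faces much less prior skepticism.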

Therefore, it is reasonable to investigate catastrophic scenarios that do not rely on ideas with low prior probabilities.

No-miracle scenario where AI kills everybody

Imagine a possible world where AI with an IQ above 1000 is impossible and nanotech doesn't work: no superintelligence and no nanobots. Such a world-model is much easier to accept by default; it is a rather normal world-model for most people.

Does AI pose an existential risk in that world? Here I suggest a three-step AI-kills-everybody scenario (without asking why the AI would want to do this):

  1. AI takes power in one country via some social manipulation technology and cooperation with humans.

  2. AI produces a large robotic infrastructure in that country.

  3. AI uses biological weapons, nukes and robotic swarms to destroy other countries and kill most humans.

Now we will explore each point in detail.

Taking over a country

1. There could be many ways in which AI can take power in one country. It may act as an advisor to a human puppet who wins elections. It could be a system of electronic government. It could be a staged military coup. Note that the Skynet (or Colossus) scenario, where AI is placed in control of nuclear weapons, is unlikely: nuclear forces are conservative in a good sense and very risk-averse, and they will be afraid of a hack. But after the country's takeover, AI will gain access to its nuclear weapons.

A question arises: if AI can take one country, why not take the whole world using the same methods? But that could be more difficult, as different countries have different governance systems. For example, the secret-advisor scenario fits North Korea, while a democratic puppet fits the US, and it is difficult to synchronize such takeovers across different countries.

Obviously, nobody will be happy with an AI takeover anywhere, so it should be kept secret and AI will rule via human puppets.

Also, taking over a country creates a natural pause in the AI's plan, so other AIs may use that time to take other countries, and we will end up with a multipolar (or at least bipolar) AI world.

Building robotic infrastructure

2. Building robotic infrastructure should be quick but initially invisible; this is when the AI-controlled country is most vulnerable to external strikes. Also, you may not like this, but the robotic infrastructure will probably include humanoid robots, not as weapons but as operators of human machinery. Such infrastructure has to be built recursively: robots build robots. Chip fabs are the main difficulty. The whole buildup may take around 1 year.

There will be a drive toward miniaturization, self-replication, and energy autonomy, as in Lem's The Invincible, limited only by the assumed impossibility of nanotech. The tendency will also be to build underground for secrecy and security, which rules out sunlight as an energy source.

The robotic infrastructure will need to grow to a size comparable to the whole world economy. It also has to include many autonomous robots and secret data centers which can survive any retaliation. Hiding it will be difficult.
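The "around 1 year" estimate implies a very aggressive doubling time for the robotic economy. A rough sketch, with the seed-economy size and timeline as illustrative assumptions rather than forecasts:

```python
import math

# Illustrative assumptions (not forecasts):
SEED_ECONOMY_FRACTION = 1e-6  # initial robotic economy vs. world economy
TARGET_FRACTION = 1.0         # parity with the whole world economy
BUILD_TIME_DAYS = 365         # the "around 1 year" figure from the text

# Doublings needed, and the doubling time that fits them into one year.
doublings = math.log2(TARGET_FRACTION / SEED_ECONOMY_FRACTION)
doubling_time_days = BUILD_TIME_DAYS / doublings

print(f"{doublings:.0f} doublings -> one doubling every "
      f"{doubling_time_days:.0f} days")
```

A doubling time of under three weeks for physical capital is far beyond anything in economic history, which is why this phase is both the riskiest and the most detectable part of the plan.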

Secret hacks and other operations abroad may be used to slow down other AIs and prevent premature discovery of this secret economy. Grabbing power in other countries may be more productive than fighting a war with them later.

Killing everybody

3. The attack against all humans may combine many different genetically modified biological agents with drone delivery systems, a nuclear strike on hardened bunkers and on other countries' nuclear weapons, and drone swarms to ensure that everyone is killed.

Killing all humans without nanotech would require a combination of three things:

A. Biological weapons include:

- Artificial multipandemic. Use of many different viruses simultaneously.

- Genetic modification of bird flu, smallpox, COVID, etc.

- Simultaneous attacks in many places all over the world by drones, so closing borders will not help.

Note that humans could do this too, but such an attack would also destroy most people in the attacker's own country of origin.

B. Nuclear attack: a surprise strike against any instruments that could be used for retaliation.

C. Drone swarms: use drones against any targets that survive the nukes and bio-attacks.

Human-independent robotic infrastructure is the core

There are many ways in which AI could kill everybody. But a short, communicable version is: agentic AI will create human-independent robotic infrastructure.

This is the central point both for the nanotech scenario and for the slow takeover described above. Robotic infrastructure is key for any scenario where AI kills all humans AND takes over the planet. AI can kill everybody (and itself) without robots. AI can also become a Singleton without killing humans and without robots, just by enslaving them. But if it wants to rule the world without humans, it needs independent robotic infrastructure. Nanotech is the best candidate for this, but it can be done without it.