Grey Goo Requires AI

Summary: Risks from self-replicating machines or nanotechnology hinge on a powerful artificial intelligence inside the machines, which is needed to overcome human control and to manage the logistics of self-assembly across many environments.

The grey goo scenario posits that developing self-replicating machines could present an existential risk for society. These replicators would transform all matter on earth into copies of themselves, turning the planet into a swarming, inert mass of identical machines.

I think this scenario is unlikely compared to other existential risks. To see why, let’s look at the components of a self-replicating machine.

Energy Source: Because you can’t do anything without a consistent source of energy.

Locomotion: Of course, our machine needs to move from place to place gathering new resources; otherwise it will eventually run out of materials in its local environment. The amount of mobility it has determines how much stuff it can transform into copies. If the machine has wheels, it could plausibly convert an entire continent. With a boat, it could convert the entire earth. With rockets, not even the stars would be safe from our little machine.

Elemental Analysis: Knowing what resources you have nearby is important. Possibilities for what you can build depend heavily on the available elements. A general-purpose tool for elemental analysis is needed.

Excavation: Our machine can move to a location, and determine which elements are available. Now it needs to actually pull them out of the ground and convert them into a form which can be processed.

Processing: The raw materials our machine finds are rarely ready to be made into parts directly. Ore needs to be smelted into metal, small organics need to be converted into plastics, and so on.

Subcomponent Assembly: The purified metals and organics can now be converted into machine parts. This is best achieved by having specialized machines for different components. For example, one part of the machine might print plastic housing, another part builds motors, while a third part makes computer chips.

Global Assembly: With all of our subcomponents built, the parent machine needs to assemble everything into a fully functional copy.

Copies of Blueprint: Much like DNA, each copy of the machine must contain a blueprint of the entire structure. Without this, it will not be able to make another copy of itself.

Decision Making: Up to this point, we have a self-replicator with everything needed to build a copy of itself. However, without some decision-making process, the machine would do nothing. Without instructions, our machine is just an expensive Swiss army knife: a bunch of useful tools that just sit there. I am not claiming that these instructions need to be smart (they could simply read “go straight”, for example), but there has to be something.
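To see how these pieces might hang together, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical (the class, the method names, and the instructions object with its next_move method); it only shows the shape of one replication cycle, not a workable design.

```python
class Replicator:
    def __init__(self, blueprint, instructions):
        self.blueprint = blueprint        # full description of the machine, like DNA
        self.instructions = instructions  # decision-making rules, e.g. "go straight"

    def harvest_energy(self): ...              # Energy Source
    def move(self, heading, distance): ...     # Locomotion
    def analyze_site(self): ...                # Elemental Analysis: what elements are here?
    def excavate(self, elements): ...          # Excavation: pull raw material out of the ground
    def process(self, raw): ...                # Processing: ore to metal, organics to plastic
    def build_parts(self, stock): ...          # Subcomponent Assembly: motors, housings, chips
    def assemble(self, parts): ...             # Global Assembly: put the copy together

    def replicate_once(self):
        """One full cycle: gather, build, and hand the child its own blueprint."""
        self.harvest_energy()
        heading, distance = self.instructions.next_move()   # Decision Making
        self.move(heading, distance)
        elements = self.analyze_site()
        stock = self.process(self.excavate(elements))
        parts = self.build_parts(stock)
        # Copies of Blueprint: without these, the child cannot replicate in turn.
        return self.assemble(parts + [self.blueprint, self.instructions])
```

Each stub corresponds to one of the components above; the trouble discussed below comes from how faithfully that last step copies the blueprint and instructions.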

So far, this just looks like a bunch of stuff that we already have, glued together. Most of these processes were invented by the mid-1900s. Why haven’t we built this yet? Where is the danger?

Despite looking boring, this system has the capacity to be really dangerous. This is because once you create something with a general ability to self-replicate, the same forces of natural selection that produced complex life start acting on your machine. Even a machine with simple instructions and high-fidelity copying will have mutations. These mutations can be errors in software, malfunctions in how components are made, errors in the blueprint, and so on. Almost all of these mutations will break the machine. But some will make their offspring better off, and these new machines will come to dominate the population of self-replicators.
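As a toy illustration of that selection pressure, here is a small simulation sketch (my own, with made-up numbers, not anything rigorous): each copy has a small chance of mutating, most mutations are lethal, but a rare one doubles how many copies a machine makes per cycle. Within a handful of generations, the mutant lineage comes to dominate.

```python
import random

random.seed(0)

# Each machine is described only by its "fitness": copies made per generation.
def make_copy(machine):
    r = random.random()
    if r < 0.09:               # most mutations break the copy outright
        return None
    if r < 0.10:               # rare beneficial mutation: twice the copies per cycle
        return machine * 2
    return machine             # otherwise a faithful copy

population = [1] * 100         # start with 100 identical, slow replicators
for generation in range(15):
    offspring = []
    for m in population:
        for _ in range(m):
            child = make_copy(m)
            if child is not None:
                offspring.append(child)
    if len(offspring) > 2000:  # finite resources: only so many machines survive
        offspring = random.sample(offspring, 2000)
    population = offspring

mutants = sum(m > 1 for m in population)
print(f"{len(population)} machines, {mutants / len(population):.0%} descended from mutants")
```

Nothing in the numbers matters; the point is only that any heritable advantage compounds, exactly as it does in biology.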

Let’s look at a simple example.

You build a self-replicator with the instruction “Move West 1 km, make 1 copy, then repeat” which will build a copy of itself every kilometer and move east-to-west, forming a conga line of self-replicators. You start your machine and move to a point a few kilometers directly west of it, ready to turn off the copies that arrive and declare the experiment a success. When the first machine in the line reaches you, it is followed by a tight formation of perfect copies spaced 1 meter apart. Success! Except, weren’t they supposed to be spaced 1 kilometer apart? You quickly turn off all of the machines and look at their code. It turns out that a freak cosmic ray deleted the ‘k’ in ‘km’ in the instructions, changing the spacing of machines to 1 meter and giving the machine 1000 times higher fitness than the others. Strange, you think, but at least you stopped things before they got out of hand! As you drive home with your truckload of defective machines, you notice another copy of the machine, dutifully making copies spaced 1 kilometer apart, but heading north this time. You quickly turn off this new mutant line of machines and discover that the magnet in their compass wasn’t formed properly, orienting these machines in the wrong direction. You shudder to think what would have happened if this line of replicators had reached the nearest town.
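To spell out the arithmetic of that thought experiment, here is a throwaway sketch (the instruction format and the parsing are entirely made up): deleting the ‘k’ turns a spacing of 1 km into 1 m, so the mutant fits a thousand copies into the space meant for one.

```python
UNITS_IN_METERS = {"km": 1000, "m": 1}

def spacing_meters(instruction):
    """Parse the spacing out of a toy instruction like 'Move West 1 km, make 1 copy, then repeat'."""
    distance, unit = instruction.split("Move West ")[1].split(",")[0].split()
    return float(distance) * UNITS_IN_METERS[unit]

original = "Move West 1 km, make 1 copy, then repeat"
mutated = original.replace("km", "m")    # the 'k' lost to a cosmic ray

print(spacing_meters(original))   # 1000.0 -> one copy per kilometer, as designed
print(spacing_meters(mutated))    # 1.0    -> copies packed 1000 times more densely
```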

This example is contrived, of course, but mistakes like these are bound to happen. This will give your machines very undesirable behavior in the long term, either wiping out all of your replicators or making new machines with complex adaptations whose only goal is self-replication. Life itself formed extremely complex adaptations to favor self-replication from almost nothing, and, given the opportunity, these machines will too. In fact, the possibility of mutation and growth in complexity was a central motivation for the Von Neumann universal constructor.

Fortunately, even after many generations, most of these machines will be pretty dumb: you could pick one up and scrap it for parts without any resistance. There is very little danger of a grey goo scenario here. So where is the danger? Crucially, nature produced not only complex organisms but general intelligence. With enough time, evolution has created highly intelligent, cooperative, resource-hoarding self-replicators: us! Essentially, people, with their dreams of reaching the stars and populating the universe, are a physical manifestation of the grey goo scenario (“flesh-colored goo” doesn’t really roll off the tongue). Given enough time, there is no reason to think that self-replicating machines won’t do the same. But even before this happens, the machines will already be wreaking havoc: replicating too fast, going to places they aren’t supposed to, and consuming cities to make new machines.

But these issues aren’t fundamental problems with self-replicators. This is an AI alignment issue. The decision process for the new machines has become misaligned with what its original designers intended. Solutions to the alignment problem will immediately apply to these new systems, preventing or eliminating dangerous errors in replication. Like before, this has a precedent in biology. Fundamentally, self-replicators are dangerous, but only because they have the ability to develop intelligence or change their behavior. This means we can focus on AI safety instead of worrying about nanotechnology risks as an independent threat.

Practically, is this scenario likely? No. The previous discussion glossed over a lot of practical hurdles for self-replicating machines. For nanoscale machines, many of the components I listed have not yet been demonstrated and might not be possible (I hope to review what progress has been made here in a future post). Besides that, the process of self-replication is very fragile and almost entirely dependent on a particular set of elements: you simply cannot make metal parts if you only have hydrogen, for example. Additionally, these machines will have to make complicated choices about where to find new resources, how to design new machines with different resources, and how to compete with others for resources. Even intelligent machines will face resource and energy shortages, or will simply be destroyed by people once they become a threat.

Overall, the grey goo scenario and the proposed risks of nanotechnology are really just AI safety arguments wrapped in less plausible packaging. Even assuming these things are built, the problem can essentially be solved with whatever comes out of AI alignment research. More importantly, I expect that AI will be developed before general-purpose nanotechnology or self-replication, so AI risk should be the focus of research efforts rather than studying nanotechnology risks themselves.