Autopoietic systems and difficulty of AGI alignment

I have recently come to the opinion that AGI alignment is probably extremely hard. But it’s not clear exactly what AGI or AGI alignment are. And there are some forms of alignment of “AI” systems that are easy. Here I operationalize “AGI” and “AGI alignment” in some different ways and evaluate their difficulties.


Autopoietic cognitive systems

From Wikipedia:

The term “autopoiesis” refers to a system capable of reproducing and maintaining itself.

This isn’t entirely technically crisp. I’ll elaborate on my usage of the term:

  • An autopoietic system expands, perhaps indefinitely. It will feed on external resources and, through its activity, gain the ability to feed on more things. It can generate complexity that was not present in the original system through e.g. mutation and selection. In some sense, an autopoietic system is like an independent self-sustaining economy.

  • An autopoietic system, in principle, doesn’t need an external source of autopoiesis. It can maintain itself and expand regardless of whether the world contains other autopoietic systems.

  • An autopoietic cognitive system contains intelligent thinking.

Some examples:

  • A group of people on an island that can survive for a long time and develop technology is an autopoietic cognitive system.

  • Evolution is an autopoietic cognitive system (cognitive because it contains animals).

  • An economy made of robots that can repair themselves, create new robots, gather resources, develop new technology, etc., is an autopoietic cognitive system.

  • A moon base that necessarily depends on Earth for resources is not autopoietic.

  • A car is not autopoietic.

  • A computer with limited memory that is not connected to the external world can’t be autopoietic.

Fully automated autopoietic cognitive systems

A fully automated autopoietic cognitive system is an autopoietic cognitive system that began from a particular computer program running on a computing substrate such as a bunch of silicon computers. It may require humans as actuators, but doesn’t need humans for cognitive work, and could in principle use robots as actuators.

Some might use the term “recursively self-improving AGI” to mean something similar to “fully automated autopoietic cognitive system”.

The concept seems pretty similar to “strong AI”, though not identical.

Difficulty of aligning a fully automated autopoietic cognitive system

Creating a good and extremely-useful fully automated autopoietic cognitive system requires solving extremely difficult philosophical and mathematical problems. In some sense, it requires answering the question of “what is good” with a particular computer program. The system can’t rely on humans for its cognitive work, so in an important sense it has to figure out the world and what is good by itself. This requires “wrapping up” large parts of philosophy.

For some intuitions about this, it might help to imagine a particular autopoietic system: an alien civilization. Imagine an artificial planet running evolution at an extremely fast speed, eventually producing intelligent aliens that form a civilization. The result of this process would be extremely unpredictable, and there is not much reason to think it would be particularly good to humans (other than the decision-theoretic argument of “perhaps smart agents cooperate with less-smart agents that spawned them because they want this cooperation to happen in general”, which is poorly understood and only somewhat decision-relevant).

Almost-fully-automated autopoietic cognitive systems

An almost-fully-automated autopoietic cognitive system is an autopoietic cognitive system that receives some input from humans, but a quite-limited amount (say, less than 1,000,000 total hours from humans). After receiving this much data, it is autopoietic in the sense that it doesn’t require humans for doing its cognitive work. It does a very large amount of expansion and cognition after receiving this data.

Some examples:

  • Any “raise the AGI like you would raise a child” proposal falls in this category.

  • An AGI that thinks on its own but sometimes poses queries to humans would fall in this category.

  • ALBA doesn’t use the ontology of “autopoietic systems”, but if Paul Christiano’s research agenda succeeded, it would eventually produce an aligned almost-fully-automated autopoietic cognitive system (in order to be competitive with an unaligned almost-fully-automated autopoietic cognitive system).

Difficulty of aligning an almost-fully-automated autopoietic cognitive system

My sense is that creating a good and extremely-useful almost-fully-automated autopoietic cognitive system also requires solving extremely difficult philosophical and mathematical problems. Although getting data from humans will help in guiding the system, there is only a limited amount of guidance available (the system does a bunch of cognitive work on its own). One can imagine an artificial planet running at an extremely fast speed that occasionally pauses to ask you a question. This does not require “wrapping up” large parts of philosophy immediately, but it does require “wrapping up” large parts of philosophy in the course of the execution of the system.

(Of course artificial planets running evolution aren’t the only autopoietic cognitive systems, but it seems useful to imagine life-based autopoietic cognitive systems in the absence of a clear alternative.)

Like with unaligned fully automated autopoietic cognitive systems, unaligned almost-fully-automated autopoietic cognitive systems would be extremely dangerous to humanity: the future of the universe would be outside of humanity’s hands.

My impression is that the main “MIRI plan” is to create an almost-fully-automated autopoietic cognitive system that expands to a high level, stops, and then assists humans in accomplishing some task. (See: executable philosophy; task-directed AGI).

Non-autopoietic cognitive systems that extend human autopoiesis

An important category of cognitive systems are ones that extend human autopoiesis without being autopoietic themselves. The Internet is one example of such a system: it can’t produce or maintain itself, but it extends human activity and automates parts of it.

This is similar to but more expansive than the concept of “narrow AI”, since such systems could in principle be domain-general (e.g. a neural net policy trained to generalize across different types of tasks). The concept of “weak AI” is similar.

Non-autopoietic automated cognitive systems can present existential risks, for the same reason other technologies and social organizations (nuclear weapons, surveillance technology, global dictatorship) present existential risk. But in an important sense, non-autopoietic cognitive systems are “just another technology” contiguous with other automation technology, and managing them doesn’t require doing anything like wrapping up large parts of philosophy.

Where does Paul’s agenda fit in?

[edit: see this comment thread]

As far as I can tell, Paul’s proposal is to create an almost-fully-automated autopoietic system that is “seeded with” human autopoiesis in such a way that, though afterwards it grows without human oversight, it eventually does things that humans would find to be good. In an important sense, it extends human autopoiesis, though without many humans in the system to ensure stability over time. It avoids value drift over time through some “basin of attraction” as in Paul’s post on corrigibility. (Paul can correct me if I got any of this wrong)

In this comment, Paul says he is not convinced that lack of philosophical understanding is a main driver of risk, with the implication that humans can perhaps create aligned AI systems without understanding philosophy; this makes sense to the extent that AI systems are extending human autopoiesis and avoiding value drift rather than having their own original autopoiesis.

I wrote up some thoughts on Paul Christiano’s agenda already. Roughly, my take is that getting corrigibility right (i.e. getting an autopoietic system to extend human autopoiesis without much human oversight and without having value drift) requires solving very difficult philosophical problems, and it’s not clear whether these are easier or harder than those required for the “MIRI plan” of creating an almost-fully-automated autopoietic cognitive system that does not extend human autopoiesis but does assist humans in some task. Of course, I don’t have all of Paul’s intuitions on how to do corrigibility.

I would agree with Paul that, conditioned on the AGI alignment problem not being very hard, it’s probably because of corrigibility.

My position

I would summarize my position on AGI alignment as:

  • Aligning a fully automated autopoietic cognitive system, or an almost-fully-automated one, seems extremely difficult. My snap judgment is to assign about 1% probability to humanity solving this problem in the next 20 years. (My impression is that “the MIRI position” thinks the probability of this working is pretty low, too, but doesn’t see a good alternative.)

  • Consistent with this expectation, I hope that humans do not develop almost-fully-automated autopoietic cognitive systems in the near term. I hope that they instead continue to develop and use non-autopoietic cognitive systems that extend human autopoiesis. I also hope that, if necessary, humans can coordinate to prevent the creation of unaligned fully-automated or almost-fully-automated autopoietic cognitive systems, possibly using non-autopoietic cognitive systems to help them coordinate.

  • I expect that thinking about how to align almost-fully-automated autopoietic cognitive systems with human values has some direct usefulness and some indirect usefulness (for increasing some forms of philosophical/mathematical competence), though actually solving the problem is very difficult.

  • I expect that non-autopoietic cognitive systems will continue to get better over time, and that their use will substantially change society in important ways.