Modeling the capabilities of advanced AI systems as episodic reinforcement learning

Here I’ll summarize the main abstraction I use for thinking about future AI systems. This is essentially the same model that Paul uses. I’m not actually introducing any new ideas in this post; mostly this is intended to summarize my current views.


Introduction

If we are to think about highly advanced AI systems, it is useful to treat some AI capabilities as a kind of black box: we need a good understanding of where the optimization power made possible by future hardware and algorithms is going, so that we can think about what the optimized things look like without knowing the exact details of the optimization algorithm. I’m going to state a generalization over some of these capabilities, based on current ML practice (training policies to optimize training objectives). The single best model I currently know of for reasoning about highly capable AI systems is to assume that they have this general capability and no other capabilities.

The general capability can be stated as: trained policies will receive a high average within-episode training score, compared to alternative policies with similar resource bounds. I’ll call this capability “general episodic RL”.

To clarify “within-episode”, we could consider a single classification task (classify this picture) as a single episode in a supervised learning context, where the policy is the classifier; we could consider solving a single SAT problem to be an episode in a SAT solving context, where the policy is a SAT solver; and of course we have episodes in an episodic RL context. So systems with this general capability are, among other things, good supervised learners, SAT solvers, and episodic RL agents.
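As a concrete (and entirely illustrative) way to see that these are instances of one abstraction, here is a small sketch in which an “episode” is just something that consumes a policy and returns a within-episode score, and the general capability is finding policies with a high average score. The function names and toy scoring rules are mine, not part of any existing proposal; an episodic RL environment would fit the same interface, with an episode rolling out the policy and returning total reward.

```python
# Illustrative sketch: classification and SAT solving fit the same interface,
# where an episode maps a policy to a within-episode score.

def classification_episode(image, label):
    """A single supervised-learning episode; the policy is a classifier."""
    return lambda policy: 1.0 if policy(image) == label else 0.0

def sat_episode(clauses):
    """A single SAT episode; the policy maps a CNF formula to an assignment."""
    def score(policy):
        assignment = policy(clauses)  # e.g. {1: True, 2: False, ...}
        satisfied = all(any(assignment.get(abs(lit), False) == (lit > 0)
                            for lit in clause)
                        for clause in clauses)
        return 1.0 if satisfied else 0.0
    return score

def average_training_score(policy, episodes):
    """What the optimizer sees: the policy's average within-episode score."""
    return sum(episode(policy) for episode in episodes) / len(episodes)

# Toy usage with a trivially simple "policy" handling both episode types.
episodes = [classification_episode("cat.png", "cat"), sat_episode([[1, -2], [2]])]
policy = lambda x: "cat" if isinstance(x, str) else {1: True, 2: True}
print(average_training_score(policy, episodes))  # 1.0 on this toy data
```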

Implications

If advanced AI systems have this general capability (and no other capabilities), this implies that:

  1. We don’t have good performance guarantees when test episodes are distinguishable from training episodes (i.e. when some distinguisher can look at an episode and tell whether it is a training episode or a test episode).

  2. When the test episode is similar to training episodes (e.g. in an online learning context), we should expect trained policies to act like a rational agent maximizing its expected score in this test episode; otherwise, an alternative policy that did act as a rational agent would get a higher expected test score than this one, and (since test episodes resemble training episodes) would also have received a higher training score.

  3. In particular, what it means for a policy to “maximize its average test score” is: there is no alternative policy that gets a higher expected test score on the same sequence of test episodes. (A formalization of this condition follows this list.)

  4. (3) implies that there is no automatic incentive to influence future test episodes (e.g. cause humans to give the AI easier-to-label images in the future).

  5. (3) also implies that collusion does not happen in most natural setups (e.g. the generator and discriminator in a generative adversarial setup will not collude). Instead, agents will play something like a correlated equilibrium.

  6. If there is some lowish-complexity fact that would help a policy gain a higher average training score, then we should assume that the trained policy will act as if it knows this fact. Otherwise, it would get a lower training score than an alternative policy that does know this fact and uses this to achieve a higher average training score. In particular, trained policies will know general features about the training setup.
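To make (2)–(4) more concrete, here is one way to formalize the condition in (3); the notation is my own gloss rather than anything from the original discussion. Write $e_1, \dots, e_K$ for the sequence of test episodes and $\mathrm{score}(\pi, e_k)$ for the within-episode score of policy $\pi$ on episode $e_k$. Then $\pi$ “maximizes its average test score” if no alternative policy $\pi'$ with similar resource bounds satisfies

$$\frac{1}{K}\sum_{k=1}^{K} \mathbb{E}\left[\mathrm{score}(\pi', e_k)\right] \;>\; \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}\left[\mathrm{score}(\pi, e_k)\right].$$

Because both policies are evaluated on the same sequence of episodes, changing which episodes arrive later cannot improve a policy’s standing under this comparison, which is the sense in which (4) says there is no automatic incentive to influence future test episodes.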

Why might episodic RL capabilities be available?

To develop reliable general episodic RL systems, we would need:

  1. Methods for ensuring that trained policies notice when they may fail to generalize from a training context to a test context, such as KWIK learning and other methods for identifying inductive ambiguities (a small sketch of the KWIK idea follows this list).

  2. Better policy classes (e.g. neural network architectures) and theoretical analyses of them. For example, perhaps we could show that “adversarial” hypotheses are not problematic, since there will exist non-adversarial variants of adversarial hypotheses.

  3. Better optimization algorithms and theoretical analyses of them (which could prove e.g. non-collusion).
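To illustrate (1): below is a minimal sketch of the KWIK (“knows what it knows”) idea for a finite hypothesis class, in which the learner only predicts when every hypothesis consistent with past observations agrees, and otherwise reports an inductive ambiguity and asks for the true label. The function names and the toy threshold example are mine, chosen for illustration; realistic policy classes would need much more than this.

```python
# Minimal KWIK-style learner over a finite hypothesis class (illustrative sketch).
# It predicts only when all hypotheses consistent with past data agree; otherwise
# it reports "I don't know" and queries the true label, narrowing the class.

def consensus_prediction(consistent, x):
    """Return the unanimous prediction on x, or None to signal ambiguity."""
    predictions = {h(x) for h in consistent}
    return predictions.pop() if len(predictions) == 1 else None

def kwik_run(hypotheses, inputs, oracle):
    consistent = list(hypotheses)
    for x in inputs:
        y_hat = consensus_prediction(consistent, x)
        if y_hat is None:
            y = oracle(x)  # ambiguous input: ask rather than guess
            consistent = [h for h in consistent if h(x) == y]
            print(f"x={x}: ambiguous, queried label {y}")
        else:
            # If the true hypothesis is in the class, this prediction is correct.
            print(f"x={x}: predicted {y_hat}")

# Toy example: threshold functions on the integers 0..9.
thresholds = [lambda x, t=t: x >= t for t in range(10)]
true_hypothesis = thresholds[6]
kwik_run(thresholds, inputs=[2, 8, 5, 6, 7], oracle=true_hypothesis)
```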

The hope is that most of these technologies will be developed in the course of AI capabilities research. I find some aspects of these problems compelling as AI alignment research, especially those in (1), both because it isn’t guaranteed that AI capabilities researchers will automatically develop enough theory for understanding highly advanced episodic RL agents, and because it is useful to have concrete models of things like inductive ambiguity identification available ahead of time (so that more AI alignment research can build on them).

Using general capabilities to reason about hypothetical aligned AGI designs

Ideally, a proposed aligned AGI design states: “if we had access to algorithms implementing general capabilities X, Y, and Z, then we could arrange these algorithms such that, if all algorithms actually deliver these capabilities, then the resulting system is pretty good for human values”. In practice, it is usually difficult to state a proposal this clearly, so initial research will be aimed at getting to the point that a proposal of this form can be stated.

If we’re assuming that future AI systems only have episodic RL capabilities, then the proposal will say something like: “if we arrange episodic RL systems in the right way, then if they play in an approximate correlated equilibrium (within each episode), then the system will do good things.” I think this approach allows both “unitary” systems and “decomposed” systems, and both “tools” and “agents”, while making everything precise enough that we don’t need to rely on the intuitive meanings of these words to reason about advanced AI systems.
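For reference, one standard formalization of the equilibrium notion invoked here (a textbook definition, not something specific to this proposal): a distribution $\mu$ over joint actions $a = (a_1, \dots, a_n)$ is an $\varepsilon$-approximate correlated equilibrium if, for every player $i$ and every swap function $\phi_i : A_i \to A_i$,

$$\mathbb{E}_{a \sim \mu}\left[u_i(a)\right] \;\geq\; \mathbb{E}_{a \sim \mu}\left[u_i(\phi_i(a_i), a_{-i})\right] - \varepsilon,$$

where $u_i$ is player $i$’s within-episode score. Presumably a proposal of this form would need its argument to work for any such equilibrium, not just the particular one the training process happens to find.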

What other capabilities could be available?

It’s worth thinking about additional general capabilities that highly advanced AI systems might have.

  1. We could suppose that systems are good at transfer learning: they can generalize well from a training context to a test context, without the test context being similar enough to the training context that we’d expect good training performance to automatically imply good test performance. This is clearly possible in some cases, and impossible in other cases, but it’s not clear where the boundary is.

  2. We could suppose that systems are good at learning “natural” structure in data (e.g. clusters and factors) using unsupervised learning.

  3. We could suppose that systems will be good at pursuing goals defined using logic, even when there aren’t many training examples of correct logical inference to generalize from. Paul describes a version of this problem here. Additionally, much of MIRI’s work in logical uncertainty and decision theory is relevant to designing agents that pursue goals defined using logic.

  4. We could suppose that systems will be good at pursuing environmental goals (such as manufacturing paperclips). At the moment, we have very little idea of how one might specify such a goal, but it is at least imaginable that some future theory would allow us to specify the goal of manufacturing paperclips. I expect any theory for how to do this to rely on (2) or (3), but I’m not sure.

This list isn’t exhaustive; there are probably lots of other general capabilities we could assume.

Some useful AI alignment research may assume access to these alternative capabilities (for example, one may try to define conservative concepts using (2)), or attempt to build these capabilities out of other capabilities (for example, one may try to build capability (4) out of capabilities (1), (2), and (3)). This research is somewhat limited by the fact that we don’t have good formal models for studying hypothetical systems having these properties. For example, at this time, it is difficult (though not impossible) to evaluate proposals for low-impact AI, since we don’t understand environmental goals, and we don’t know whether the AI will be able to find “natural” features of the world that include features that humans care about.

Unfortunately, there doesn’t appear to be a very good argument that we will probably have any of these capabilities in the future. It seems more likely than not to me that some general capability that I listed will be highly relevant to AI alignment, but I’m highly uncertain.

What’s the alternative?

If someone is attempting to design aligned AGI systems, what could they do other than reduce the problem to general capabilities like the ones I have talked about? Here I list some things that don’t obviously fit into this model:

  1. We could test out AI systems and use the systems that seem to behave better. But I see this as a form of training: we’re creating policies and filtering out the ones that we don’t like (i.e. assigning a higher training score to policies that look like they are doing good things, and finding policies that have a high training score); see the sketch after this list. If we filter AI systems aggressively, then we’re basically just training things to maximize training score, and are back in an episodic RL context. If we don’t filter them aggressively, then this limits how much useful work the filtering can do (so we’d need another argument for why things will go well).

  2. We could assume that AI systems will be transparent. I don’t have a good model for what transparency for advanced AI systems would look like. At least some ways of getting transparency (e.g. anything that only uses the I/O behavior of the policy) reduce to episodic RL. Additionally, while it is easy for policies to be transparent if the policy class is relatively simple, complex policy classes are required to effectively make use of the capabilities of an advanced system. Regardless, transparency is likely to be important despite the current lack of formal models for understanding it.
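Regarding (1), here is a toy sketch (my own framing, with made-up function names) of why aggressive filtering is just more training: the filter acts as a training score over the episodes we happen to observe, and keeping the policies that score best under it is exactly selection for high training score.

```python
# Toy sketch: "test and filter" viewed as optimization of a training score.
# Each observed episode maps a policy to how good its behavior looked to us.

def observed_training_score(policy, observed_episodes):
    """The filter's implicit objective: total approval over observed episodes."""
    return sum(episode(policy) for episode in observed_episodes)

def filter_policies(candidates, observed_episodes, keep):
    """Keep the `keep` candidates that look best; with small `keep` this is
    essentially training policies to maximize the observed training score."""
    ranked = sorted(candidates,
                    key=lambda p: observed_training_score(p, observed_episodes),
                    reverse=True)
    return ranked[:keep]
```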

Perhaps there are more ideas like these. Overall, I am skeptical of a proposal whenever the basic argument for why the system ought to behave as intended relies on assumptions beyond the fact that the system has access to suitable capabilities. Once we have a basic argument for why things should probably mostly work, we can elaborate on this argument by e.g. arguing that transparency will catch some violations of the assumptions required for the basic argument.