The Learning-Theoretic AI Alignment Research Agenda
In this essay I will try to explain the overall structure and motivation of my AI alignment research agenda. The discussion is informal and no new theorems are proved here. The main features of my research agenda, as I explain them here, are:

Viewing AI alignment theory as part of a general abstract theory of intelligence

Using desiderata and axiomatic definitions as starting points, rather than specific algorithms and constructions

Formulating alignment problems in the language of learning theory

Evaluating solutions by their formal mathematical properties, ultimately aiming at a quantitative theory of risk assessment

Relying on the mathematical intuition derived from learning theory to pave the way to solving philosophical questions
Philosophy
In this section I explain the key principles and assumptions that motivate my research agenda.
The importance of rigor
I believe that the solution to AI alignment must rely on a rigorous mathematical theory. The algorithms that comprise the solution must be justified by formal mathematical properties. All mathematical assumptions should be either proved or at least backed by considerable evidence, like the prominent conjectures of computational complexity theory. This needs to be the case because:

We might be facing one-shot success or failure. This means we will have little empirical backing for our assumptions.

To the extent we have or will have empirical evidence about AI, without a rigorous underlying theory it is very hard to know how scalable and transferable the conclusions are.

The enormity of the stakes demands designing a solution which is as reliable as possible, limited only by the time constraints imposed by competing unaligned projects.
That said, I do expect the ultimate solution to have aspects that are not entirely rigorous, specifically:

The quantitative risk analysis will probably rely on some parameters that will be very hard to determine from first principles, because of the involvement of humans and our physical universe in the equation. These parameters might be estimated through (i) the study of the evolution of intelligence, (ii) the study of human brains, (iii) experiments with weak AI and its interaction with humans, and (iv) our understanding of physics. Nevertheless, we should demand that the solution be highly reliable even given cautious error margins on these parameters.

The ultimate solution will probably involve some heuristics. However, it should only involve heuristics that are designed to improve AI capabilities without invalidating any of the assumptions underlying the risk analysis. Thus, in the worst-case scenario these heuristics will fail and the AI will not take off, but neither will it become unaligned.

In addition to the theoretical analysis, we do want to include as much empirical testing as possible, to provide an additional layer of defense. At the least, it can be a last-ditch protection in the (hopefully very unlikely) scenario that some error got through the analysis.
Metaphilosophy and the role of models
In order to use mathematics to solve a real-world problem, a mathematical model of the problem must be constructed. When the real-world problem can be defined in terms of data that is observable and measurable, the validity of the mathematical model can be ascertained using the empirical method. However, AI alignment touches on problems that are philosophical in nature, meaning that there is still no agreed-upon empirical or other criterion for evaluating an answer. Dealing with such problems requires a metaphilosophy: a way of evaluating answers to philosophical questions.
Although I do not claim a fully general solution to metaphilosophy, I think that, pragmatically, a quasi-scientific approach is possible. In science, we prefer theories that are (i) simple (Occam’s razor) and (ii) fit the empirical data. We also test theories by gathering further empirical data. In philosophy, we can likewise prefer theories that are (i) simple and (ii) fit intuition in situations where intuition feels reliable (i.e. situations that are simple, familiar, or have received considerable analysis and reflection). We can also test theories by applying them to new situations and trying to see whether the answer becomes intuitive after sufficient reflection.
Moreover, I expect progress on most problems to be achieved by means of successive approximations. This means that we start with a model that is grossly oversimplified but that already captures some key aspects of the problem. Once we have a solution within this model, we can start to attack its assumptions and arrive at a new, more sophisticated model. This process should repeat until we arrive at a model that (i) has no obvious shortcomings and that (ii) we seem unable to improve despite our best efforts.
Like in science, we can never be certain that a theory is true. Any assumption or model can be questioned. This requires striking a balance between complacency and excessive skepticism. To avoid complacency, we need to keep working to find better theories. To avoid excessive skepticism, we should entertain hypotheses honestly and acknowledge when a theory is already capable of passing nontrivial quasi-scientific tests. Reaching agreement is harder work (because our tests rely on intuition, which may vary from individual to individual), but we should not despair of that goal.
Intelligence is understandable
It is possible to question whether a mathematical theory of intelligence is possible at all. After all, we don’t expect to have a tractable mathematical theory of Rococo architecture, or a simple equation describing the shape of the coastline of Africa in the year 2018.
The key difference is that intelligence is a natural concept. Intelligence, the way I use this word in the context of AI alignment, is the ability of an agent to make choices in a way that effectively promotes its goals, in an environment that is not entirely known or even not entirely knowable. Arguing over the meaning of the word would be a distraction: this is the meaning relevant to AI alignment, because the entire concern of AI alignment is about agents that effectively pursue their goals, undermining the conflicting goals of the human species. Moreover, intelligence is (empirically) a key force in determining the evolution of the physical universe.
I conjecture that natural concepts have useful mathematical theories, and this conjecture seems to me supported by evidence in natural and computer science. It would be nice to have this conjecture itself follow from a mathematical theory, but this is outside of my current scope. Also, we already have some progress towards a mathematical theory of intelligence (I will discuss it in the next section).
A related question is whether it is possible to design an algorithm for strong AI based on simple mathematical principles, or whether any strong AI will inevitably be an enormous kludge of heuristics designed by trial and error. I think that we have some empirical support for the former, given that humans evolved to survive in a certain environment but succeeded in using their intelligence to solve problems in very different environments. That said, I am less confident about this than about the previous question. In any case, having a mathematical theory of intelligence should allow us to resolve this question too, whether positively or negatively.
Value alignment is understandable
The core of AI alignment is reliably transferring human values to a strong AI. However, the problem of defining what we mean by “human values” is a philosophical problem. A common and natural model of “values” is expected utility maximization: this is what we find in game theory and economics, and this is supported by the VNM and Savage theorems. However, as often pointed out, humans are not perfectly rational, therefore it’s not clear in what sense they can be said to maximize the expectation of a specific utility function.
Nevertheless, I believe that “values” is also a natural concept. Denying the concept of “values” altogether is tantamount to nihilism, and in such a belief system there is no reason to do anything at all, including saving yourself and everyone else from a murderous AI. Regarding the general concept of “values” as something complex and human-specific (despite the focus on “values” rather than “human values”) seems implausible, since intuitively we can easily imagine alien minds facing a similar AI alignment problem. Moreover, the concept of “values” is part and parcel of the concept of “intelligence”, so if we believe that “intelligence” (due to its importance in shaping the physical world) is a natural concept, then so are “values”.
Therefore, I conjecture that there is a simple mathematical theory of imperfect rationality, within which the concept of “human values” is well-defined modulo the (observable, measurable) concept of “humans”. Some speculation on what this theory looks like appears in the following sections.
Now, that doesn’t mean that “human values” are perfectly well-defined, any more than, for example, the center of mass of the sun is perfectly well-defined (which would require deciding exactly which particles are considered part of the sun). However, just as the center of mass of the sun is sufficiently well-defined for many practical purposes in astrophysics, the concept of “human values” should be sufficiently well-defined for designing an aligned AGI. To the extent alignment remains ambiguous, the resolution of these ambiguities doesn’t have substantial moral significance.
Foundations
In this section I briefly explain the mathematical tools with which I set out to study AI alignment, and the outline of the mathematical theory of intelligence that these tools have already sketched.
Statistical Learning Theory
Statistical learning theory studies the information-theoretic constraints on various types of learning tasks, answering questions such as when a learning task is solvable at all, and how much training data is required to solve it within a given accuracy (sample complexity). Learning tasks can be broadly divided into:

Classification tasks: The input is sampled from a fixed probability distribution, and the objective is assigning the correct label. The deployment phase (during which the performance of the algorithm is evaluated) is distinct from the training phase (during which the correct labels are revealed).

Online learning / multi-armed bandits: There is no distinction between deployment and training. Instead, the algorithm’s performance is evaluated on each round, and the algorithm might also receive some feedback on its performance. The behavior of the environment might change over time, possibly even responding to the algorithm’s output. However, we only evaluate each output conditioned on the past history (we don’t require the algorithm to plan ahead).

Reinforcement learning: There is two-sided interaction between the algorithm and the environment, and the algorithm’s performance is the aggregate of some reward function over time. The algorithm is required to plan ahead in order to achieve optimal performance. This might or might not assume “resets” (when the environment periodically returns to the initial state) or a partition of time into “episodes” (when the performance of the algorithm is only evaluated conditioned on the previous episodes, so that it doesn’t have to plan ahead more than one episode into the future).
It is the last type of learning task, in particular assuming no resets or episodes, that is the most relevant for studying intelligence in the relevant sense. Indeed, the abstract setting of reinforcement learning is a good formalization of the informal definition of intelligence we had before. Note that the name “reward” might be misleading: this is not necessarily a signal received from outside, but can just as easily be some formally specified mathematical function.
In online learning and reinforcement learning, the theory typically aims to derive upper and lower bounds on “regret”: the difference between the expected utility received by the algorithm and the expected utility it would receive if the environment was known a priori. Such an upper bound is effectively a performance guarantee for the given algorithm. In particular, if the reward function is assumed to be “aligned” then this performance guarantee is, to some extent, an alignment guarantee. This observation is not vacuous, since the learning protocol might be such that the true reward function is not directly available to the algorithm, as exemplified by DIRL and DRL. Thus, formally proving alignment guarantees takes the form of proving appropriate regret bounds.
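As a concrete (toy) illustration of the regret formalism, here is a sketch comparing an ε-greedy bandit algorithm against the best arm known a priori. The arm means, horizon, and ε below are illustrative choices, not anything prescribed by the agenda:

```python
import random

def run_bandit(means, horizon, epsilon=0.1, seed=0):
    """epsilon-greedy on a Bernoulli bandit; returns (realized reward, regret)."""
    rng = random.Random(seed)
    counts = [0] * len(means)
    totals = [0.0] * len(means)
    reward = 0.0
    for _ in range(horizon):
        if 0 in counts or rng.random() < epsilon:
            arm = rng.randrange(len(means))  # explore
        else:
            arm = max(range(len(means)), key=lambda a: totals[a] / counts[a])  # exploit
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += r
        reward += r
    # regret: utility achievable if the environment were known a priori, minus ours
    return reward, horizon * max(means) - reward

reward, regret = run_bandit([0.3, 0.7], horizon=10000)
```

An upper bound on `regret` as a function of the horizon is exactly the kind of performance guarantee the theory aims to prove.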
Computational Learning Theory
In addition to information-theoretic considerations, we have to take into account considerations of computational complexity. Thus, after deriving information-theoretic regret bounds, we should continue to refine them by constraining our algorithms to be computationally feasible (which typically means running in polynomial time, but we may also need to consider stronger restrictions, such as restrictions on space complexity or parallelizability). If we consider Bayesian regret (i.e. the expected value of regret w.r.t. some prior on the environments), this effectively means we are dealing with average-case complexity. Note that imposing computational constraints on the agent implies bounded reasoning / non-omniscience and already constitutes a departure from “perfect rationality” in a certain sense.
More precisely, it is useful to differentiate between at least two levels of computational feasibility (see also this related essay by Alex Appel). On the first level, which I call “weakly feasible”, we allow the computing time to scale polynomially with the number of hypotheses we consider, or exponentially with the description length of the correct hypothesis (these two are more or less interchangeable, since the number of hypotheses of a given description length is exponential in this length). Thus, algorithms like Levin’s universal search, Solomonoff induction over programs with polynomial time complexity, or Posterior Sampling Reinforcement Learning with a small number of hypotheses fall into this category. On the second level, which I call “strongly feasible”, we require polynomial computing time for the “full” hypothesis space. At present, we only know how to achieve theoretical guarantees on this second level in narrow contexts, such as reinforcement learning with a small state space (i.e. with a number of states polynomial in the security parameter).
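A minimal sketch of the “weakly feasible” regime, where each update costs time linear in the number of explicitly enumerated hypotheses (the coin-bias hypotheses are illustrative):

```python
def posterior_update(prior, likelihoods):
    """One Bayesian update; the cost is O(number of hypotheses) -- weak feasibility.
    This becomes infeasible once the hypothesis space is exponentially large."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

# three explicit hypotheses about a coin's bias; we observe a single head
biases = [0.2, 0.5, 0.8]
posterior = posterior_update([1 / 3, 1 / 3, 1 / 3], biases)
```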
In fact, the current gap in our theoretical understanding of deep learning is strongly related to the gap between weak and strong feasibility. Indeed, results about the expressiveness and (statistical) learnability of neural networks are well-known; however, exact learning of neural networks is NP-complete in the general case. Understanding how this computational barrier is circumvented in practical problems is a key challenge in understanding deep learning. Such understanding would probably be a positive development in terms of AI alignment (although it might also contribute to increasing AI capacity), but I don’t think it’s a high-priority problem since it seems to already receive considerable attention in mainstream academia (i.e. it is not neglected).
I believe that the development of AI alignment theory should proceed by prioritizing information-theoretic analysis first, complexity-theoretic analysis in the sense of weak feasibility second, and complexity-theoretic analysis in the sense of strong feasibility last. That said, we should keep the complexity-theoretic considerations in mind, and strive to devise solutions that at least seem feasible modulo “miracles” similar to deep learning (i.e. modulo intractable problems that are plausibly tractable in realistic special cases). Moreover, certain complexity-theoretic considerations are already implicit in the choice of the space of hypotheses for your learning problem (e.g. Solomonoff induction has to be truncated to polynomial-time programs to be even weakly feasible). In particular, we should keep in mind that the hypotheses must be computationally simpler than the agent itself, whereas the universe must be computationally more complex than the agent itself. More on resolving this apparent paradox later.
Algorithmic Information Theory
The choice of hypothesis space plays a crucial role in any learning task, and the choice of prior plays a crucial role in Bayesian reinforcement learning. In narrow AI this choice is based entirely on the prior knowledge of the AI designers about the problem. On the other hand, general AI should be able to learn its environment with little prior knowledge, by noticing patterns and using Occam’s razor. Indeed, the latter is the basis of epistemic rationality to the best of our understanding. The Solomonoff measure is an elegant formalization of this idea.
However, Solomonoff induction is incomputable, so a realistic agent would have to use some truncated form of it, for example by bounding the computational resources made available to the universal Turing machine. It thus becomes an important problem to find a natural prior such that:

It allows for a (sufficiently good) sublinear regret bound with a computationally feasible algorithm.

It ranks hypotheses by description complexity in some appropriate sense.

It satisfies some universality properties analogous to the Solomonoff measure (but appropriately weaker).
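A toy version of such a truncated prior: weight 2^-len over bit-string “programs” of length len, discarding programs that exceed a hard step bound. The stand-in interpreter `toy_run` is purely illustrative; a real construction would use a resource-bounded universal machine.

```python
from itertools import product

def truncated_prior(run, max_len, max_steps):
    """Assign weight 2^-len to each bit-program, dropping those that hit the step bound."""
    weights = {}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            out = run("".join(bits), max_steps)  # None means the step bound was hit
            if out is not None:
                weights[out] = weights.get(out, 0.0) + 2.0 ** (-n)
    return weights

def toy_run(prog, max_steps):
    """Stand-in interpreter: 'runs' for len(prog) steps and outputs the parity of its bits."""
    if len(prog) > max_steps:
        return None
    return str(prog.count("1") % 2)

w = truncated_prior(toy_run, max_len=3, max_steps=2)
```

Shorter programs contribute more weight per program (Occam’s razor), while the step bound keeps the whole computation feasible.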
Towards a rigorous definition of intelligence
The combination of perfect Bayesian reinforcement learning and the Solomonoff prior is known as AIXI. AIXI may be regarded as a model of ideal intelligence, but there are several issues that have been argued to be flaws in this concept:

Traps: AIXI doesn’t satisfy any interesting regret bounds, because the environment might contain traps. In fact, the set of all computable environments is an unlearnable class of hypotheses: no agent has a sublinear regret bound w.r.t. this class.

Cartesian duality: AIXI’s “reasoning” (and RL in general) seems to assume the environment cannot influence the algorithm executed by the agent. This is unrealistic. For example, if our agent is a robot, then it’s perfectly possible to imagine some external force breaking into its computer and modifying its software.

Irreflexivity: The Solomonoff measure contains only computable hypotheses but the agent itself is uncomputable. In particular, AIXI can satisfy no guarantees pertaining to environments that e.g. contain other AIXIs. An analogous problem persists with any simple attempt to modify the prior: the prior can only contain hypotheses simpler than the agent.

Decision-theoretic paradoxes: AIXI seems to be similar to a Causal Decision Theorist, so apparently it will fail on Newcomb-like problems.
The Cartesian duality problem and the traps problem are actually strongly related. Indeed, one can model any event that destroys the agent (including modifying its source code) as the transition of the environment into some inescapable state. Such a state should be assigned a reward that corresponds to the expected utility of the universe going on without the agent. However, it’s not obvious how the agent can learn to anticipate such states, since observing it once eliminates any chance of using this knowledge later. DRL already partially addresses this problem: more discussion in the next section.
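The following toy environment (an illustrative construction, not one from the literature) shows why traps force linear regret: a single exploratory step into the trap forfeits all future reward, so no amount of later learning helps.

```python
def play(first_action, horizon):
    """Two actions: 'safe' yields reward 1 per step; 'trap' is an inescapable
    zero-reward state. Nothing observable distinguishes them in advance."""
    trapped = (first_action == "trap")
    total = 0.0
    for _ in range(horizon):
        if not trapped:
            total += 1.0  # after falling in, playing 'safe' forever is in vain
    return total

horizon = 1000
regret_of_exploring = play("safe", horizon) - play("trap", horizon)  # linear in horizon
```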
Solving irreflexivity requires going beyond the Bayesian paradigm by including models that don’t fully specify the environment. More details in the next section.
Finally, the decision-theoretic paradoxes are a more equivocal issue than it seems, because the usual philosophical way of thinking about decision theory assumes that the model of the environment is given, whereas in our way of thinking, the model is learned. This is important: for example, if AIXI is placed in a repeated Newcomb’s problem, it will learn to one-box, since its model will predict that one-boxing causes the money to appear inside the box. In other words, AIXI might be regarded as a CDT, but the learned “causal” relationships are not the same as physical causality. Formalizing other Newcomb-like problems requires solving irreflexivity first, because the environment contains Omega, which cannot be simulated by the agent. Therefore, my current working hypothesis is that decision theory will be mostly solved (or dissolved) by

Solving irreflexivity

Value learning, which will automatically pick up some aspects of the decision theory too.

Allowing for self-modification, which should be possible after solving irreflexivity + Cartesian duality (self-modification may again be regarded as a terminal state)
To sum up, clarifying all of these issues should result in formulating a certain optimality condition (regret bound) which may be regarded as a rigorous definition of intelligence. This would also constitute progress towards defining “values” (having certain values means being intelligent w.r.t. these values), but the latter might require making the definition even more lax. More on that later.
Research Programme Outline
In this section I break down the research programme into different domains and subproblems. The list below is not intended to be a linear sequence. Indeed, many of the subproblems can be initially attacked in parallel, but many of them are also interconnected, and progress in one subproblem can be leveraged to produce a more refined analysis of another. Any concrete plan I have regarding the order in which these questions should be addressed is liable to change significantly, and I expect the entire breakdown to change, as progress is made and new insights become available. However, I do believe that the high-level principles of the approach have a good chance of surviving, in some form, into the future.
Universal reinforcement learning
The aim of this part in the agenda is deriving regret bounds or other performance guarantees for certain settings of reinforcement learning that are simultaneously strong enough and general enough to serve as a compelling definition / formalization of the concept of general intelligence. In particular, this involves solving the deficiencies of AIXI that were pointed out in the previous section.
I believe that a key step towards this goal is solving the problem of “irreflexivity”. That is, we need to define a form of reinforcement learning in which the agent achieves reasonable performance guarantees despite an environment which is as complex as, or more complex than, the agent itself. My previous attempts to make progress towards that goal include minimax forecasting and dominant forecasters for incomplete models. There, the aim was passive forecasting rather than reinforcement learning.
The idea of minimax forecasting can be naturally extended to reinforcement learning. Environments in reinforcement learning naturally form a convex set $\mathcal{E}$ in some topological vector space (where convex linear combinations correspond to probabilistic mixtures). Normally, models are points of $\mathcal{E}$, i.e. specific environments. Instead, we can consider incomplete models which are nonempty convex subsets of $\mathcal{E}$. Instead of considering $\mathrm{EU}_\mu(\pi)$, the expected utility of policy $\pi$ interacting with environment $\mu$, we can consider $\mathrm{EU}_\Phi(\pi) := \inf_{\mu \in \Phi} \mathrm{EU}_\mu(\pi)$, where $\Phi$ is an incomplete model: the minimal guaranteed expected utility of $\pi$ for environments compatible with the incomplete model $\Phi$. We can define a set $\mathcal{H}$ of incomplete models to be learnable when there is a metapolicy $\pi^*$ s.t. for any $\Phi \in \mathcal{H}$

$$\lim_{\gamma \to 1} \left( \max_{\pi} \mathrm{EU}_\Phi^\gamma(\pi) - \mathrm{EU}_\Phi^\gamma(\pi^*_\gamma) \right) = 0$$

Here, $\gamma$ is the time discount parameter. Notably, this setting satisfies the analogue of the universality property of Bayes-optimality (see “Proposition 1” in this essay). Here, the role of the Bayes-optimal policy is replaced by the policy

$$\pi^\zeta_\gamma := \arg\max_{\pi} \mathrm{EU}_\zeta^\gamma(\pi)$$

Here, $\mathrm{EU}_\zeta^\gamma$ is the “incomplete prior” corresponding to some $\zeta \in \Delta\mathcal{H}$:

$$\mathrm{EU}_\zeta^\gamma(\pi) := \mathbb{E}_{\Phi \sim \zeta}\left[\mathrm{EU}_\Phi^\gamma(\pi)\right]$$
Moreover, it is possible to define an incomplete analogue of MDPs. These are stochastic games, where the choices of the opponent correspond to the “Knightian uncertainty” of the incomplete model. Thus, it is natural to try and derive regret bounds for learning classes of such incomplete MDPs. In fact, this theory might justify the use of finite (or otherwise restricted) MDPs, which is common in RL and is needed for deriving most regret bounds. Indeed, there is no reason why physical reality should be a finite MDP, but this does not preclude us from using a finite stochastic game as an incomplete model of reality. In particular, an infinite MDP (and thus also a POMDP, since a POMDP can be reduced to an MDP whose states are belief states = probability measures on the state space of the POMDP) can be approximated by a finite stochastic game by partitioning its state space into a finite number of “cells” and letting the opponent choose the exact state inside the cell upon each transition.
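A sketch of how such an incomplete MDP could be evaluated: minimax value iteration on a tiny zero-sum stochastic game, where upon each transition an adversary (standing in for the Knightian uncertainty) picks the worst outcome compatible with the model. The game and its numbers are made up for illustration.

```python
GAMMA = 0.9  # geometric time discount

# state -> action -> list of (reward, next_state) outcomes compatible with the
# incomplete model; the adversary picks the worst one
GAME = {
    "s0": {"stay": [(0.5, "s0")],
           "risk": [(1.0, "s0"), (0.0, "s1")]},
    "s1": {"stay": [(0.0, "s1")]},
}

def minimax_value_iteration(game, iters=500):
    """V(s) = max over actions of min over compatible outcomes of r + GAMMA * V(s')."""
    v = {s: 0.0 for s in game}
    for _ in range(iters):
        v = {s: max(min(r + GAMMA * v[ns] for r, ns in outcomes)
                    for outcomes in actions.values())
             for s, actions in game.items()}
    return v

v = minimax_value_iteration(GAME)  # 'risk' is worthless under worst-case semantics
```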
It is possible to generalize this setting further by replacing “crisp” sets of environments by fuzzy sets. That is, we can define a “fuzzy model” to be a function $\Phi : \mathcal{E} \to [0, 1]$ (the membership function) s.t. $\Phi^{-1}(1)$ is nonempty. The performance of a policy on the model is then given by

$$\mathrm{EU}_\Phi(\pi) := \inf_{\mu \in \mathcal{E}} \left( \mathrm{EU}_\mu(\pi) + 1 - \Phi(\mu) \right)$$

Note that $\mathrm{EU}_\mu$ is assumed to take values in $[0, 1]$, so no $\mu$ with $\Phi(\mu) = 0$ can affect the above value.

This generalization allows capturing a broad spectrum of performance guarantees. For example, given any policy $\sigma$ we can define $\Phi_\sigma$ by

$$\Phi_\sigma(\mu) := \mathrm{EU}_\mu(\sigma)$$

Then, learning the model $\Phi_\sigma$ amounts to learning to perform at least as well as $\sigma$, whatever the environment is. Thus, the setting of “fuzzy reinforcement learning” might be regarded as a hybrid of model-based and model-free approaches.
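The fuzzy-model performance measure can be checked on a finite toy example; the environments, the membership values, and the policy-relative membership function Phi_sigma(mu) = EU_mu(sigma) below are all illustrative:

```python
def eu_fuzzy(eu_of_policy, membership):
    """Worst case over environments of EU_mu(pi) + 1 - Phi(mu), with EU in [0, 1]."""
    return min(eu + 1.0 - membership[mu] for mu, eu in eu_of_policy.items())

# EU_mu(pi) of a fixed policy pi in three environments
eu_pi = {"mu0": 0.9, "mu1": 0.4, "mu2": 0.1}

# a crisp model {mu0, mu1} written as a membership function: mu2 (membership 0)
# cannot affect the value, because EU_mu(pi) + 1 >= 1
phi_crisp = {"mu0": 1.0, "mu1": 1.0, "mu2": 0.0}
value = eu_fuzzy(eu_pi, phi_crisp)  # worst case over {mu0, mu1} only

# policy-relative model: membership of mu is the baseline policy's utility there,
# so learning it means doing at least as well as the baseline in every environment
eu_sigma = {"mu0": 0.5, "mu1": 0.3, "mu2": 0.8}
advantage = eu_fuzzy(eu_pi, eu_sigma)  # = 1 + worst-case (EU_mu(pi) - EU_mu(sigma))
```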
One test for any theory attempting to solve irreflexivity is whether it leads to reasonable game-theoretic solution concepts in multi-agent scenarios. For example, it is obvious that incomplete models lead to Nash equilibria in zero-sum games (an incomplete model is a zero-sum game, in some sense), but the situation in more general games is currently unknown. Another sort of test is applying the theory to Newcomb-like decision-theoretic puzzles, although solving all of them might require additional elements, such as self-modification. Further applications of such a theory which may also be regarded as tests will appear in the next subsection.
Next, the problem of traps has to be addressed. DRL partially solves this problem by postulating an advisor that has prior knowledge about the traps. It seems reasonable to draw a parallel between this and real-world human intelligence: humans learn from previous generations regarding the dangers of their environment. In particular, children seem like a salient example of agents that employ a lot of exploration while trusting a different agent (the parent) to prevent them from falling into traps. However, from a different perspective, this seems like hiding the difficulty in a different place. Namely, if we consider the whole of humanity as an intelligent agent (which seems a legitimate model at least for the purposes of this particular issue), then how did it avoid traps? To some extent, we can claim that human DNA is another source of prior knowledge, acquired by evolution, but somewhere this recursion must come to an end.
One hypothesis is that the main way humanity avoids traps is by happening to exist in a relatively favorable environment, and knowing this fact, on some level. Specifically, it seems rather difficult for a single human or a small group to pursue a policy that will lead all of humanity into a trap (incidentally, this hypothesis doesn’t reflect optimistically on our chances to survive AI risk), and also rather rare for many humans to coordinate on simultaneously exploring an unusual policy. Therefore, human history may be very roughly likened to episodic RL where each human life is an episode.
This mechanism should be formalized using the ideas of quantilal control. The baseline policy comes from the prior knowledge / advisor, and the allowed deviation (some variant of Renyi divergence) from the baseline policy is chosen according to the prior assumption about the rate of falling into a trap while following the baseline policy. This should lead to an appropriate regret bound.
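A crude sketch of the quantilizer-style mechanism described above, with the Rényi-divergence constraint replaced by a simpler device: condition the baseline policy on the top-q fraction of its probability mass, ranked by estimated utility. All names and numbers are illustrative.

```python
def quantilize(baseline, utility, q):
    """Keep the highest-utility actions until q of the baseline's probability mass
    is covered, then renormalize. No action's probability grows by more than 1/q,
    bounding how far the agent can stray from the (trusted) baseline policy."""
    ranked = sorted(baseline, key=utility, reverse=True)
    kept, mass = [], 0.0
    for action in ranked:
        kept.append(action)
        mass += baseline[action]
        if mass >= q:
            break
    return {action: baseline[action] / mass for action in kept}

base = {"a": 0.5, "b": 0.3, "c": 0.2}
out = quantilize(base, utility={"a": 1, "b": 5, "c": 3}.get, q=0.5)
```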
I think that another important step towards universal RL is deriving regret bounds that exploit structural hierarchies. This builds on the intuition that, although the real world is very complex and diverse, the presence of structural hierarchies seems like a nearly universal feature. Indeed, it is arguable that we would never have reached our current level of understanding of physics if there were no separation of scales that allowed studying the macroscopic world without knowing string theory et cetera. I see three types of hierarchies that need to be addressed, together with their mutual interactions:

Temporal hierarchy: Separation between processes that happen on different timescales. We can try to model it by considering MDPs with a hierarchical state space, s.t. transitions on a higher level of the hierarchy happen much more slowly than transitions on a lower level of the hierarchy. This means that w.r.t. a higher level, the lower level can always be considered to occupy an equilibrium distribution over states.

Spatial hierarchy: Separation between processes that happen on different space-scales. We consider a “cellular decision process”, which is an MDP that is structured like a cellular automaton. It is then tempting to try to connect RL theory with renormalization-group methods from physics.

Informational hierarchy: We consider a hierarchical structure on the space of hypotheses. That is, we expect the agent to first learn the high-level class to which the environment belongs, then learn the class corresponding to the lower level of the hierarchy et cetera, until it learns the actual environment. This is a formalization of the idea of “learning how to learn”, a rather well-known idea for reducing the sample complexity of reinforcement learning.
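The temporal-hierarchy assumption can be made concrete in a tiny example (the chain below is illustrative): when the lower level mixes within a few steps while the higher level transitions orders of magnitude more slowly, the higher level can treat the lower level as occupying its stationary (equilibrium) distribution.

```python
def stationary(transition, iters=200):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    n = len(transition)
    dist = [1.0 / n] * n
    for _ in range(iters):
        dist = [sum(dist[i] * transition[i][j] for i in range(n)) for j in range(n)]
    return dist

# the fast (lower) level mixes in a handful of steps; a slow level transitioning
# ~1000x more rarely effectively sees only this equilibrium distribution
fast_level = [[0.5, 0.5],
              [0.2, 0.8]]
equilibrium = stationary(fast_level)
```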
In particular, I expect these hierarchies to yield regret bounds which do not have the “trial and error” form of most known regret bounds. That is, known regret bounds imply a sample complexity that is a large multiple of either the reset time (for RL with resets) or the mixing time (for RL without resets). This seems unsatisfactory: a model-based learner should be able to extrapolate its knowledge forward without waiting for a full “cycle” of environment response. Certainly we expect an artificial superintelligence to achieve a pivotal event from the first attempt, in some sense.
Also, the hierarchies should bridge at least part of the gap between weak and strong feasibility. Indeed, many of the successes of deep learning were based on CNNs and Boltzmann machines which seem to be exploiting the spatial hierarchy.
Returning to the issue of traps, there might be some sense in which our environment is “favorable” which is more sophisticated than the discussion before and which may be formalized using hierarchies (e.g. early levels of the information hierarchy can be learned safely and late levels only contain traps predictable by the early levels).
Finally, as discussed in the previous section, defining the correct universal prior and analyzing its properties is crucial to complete the theory. Given the hypotheses put forth in this section, this prior should be

A fuzzy prior rather than a “complete” prior

Possibly consist of a fuzzy version of finite, or otherwise restricted, MDPs (although Leike derives some regret bounds for general environments, at the cost of assuming a sufficiently slowly dropping time discount, in particular ruling out geometric time discount). One way to think of it is that the finite MDPs are just an approximation of the infinite reality; however, maybe we can also consider this a vindication of some sort of ultrafinitism.

Reflect some “favorability” assumptions

Have a hierarchical structure and consist of hierarchical models
Value learning protocols
The aim of this part in the agenda is developing learning setups that allow one agent (the AI) to learn the values of a different agent or group of agents (humans). This involves directly or indirectly tackling the issue of what it means for an agent to have particular values if it is imperfectly rational and possibly vulnerable to manipulation or other forms of “corruption”.
At present, I conceive of the following possible basic mechanisms for value learning:

Formal communication: Information about the values is communicated to the agent in a form with predefined formal semantics. Examples of this are communicating a full formal specification of the utility function, or manually producing a reward signal. Other possibilities are communicating partial information about the reward signal, or evaluating particular hypothetical situations.

Informal communication: Information about the values is communicated to the agent using natural language, or in some other form whose semantics have to be learned somehow.

Demonstration: The agent observes a human pursuing eir values and deduces the values from the behavior.

Reverse engineering: The agent somehow acquires a full formal specification of a human (e.g. an uploaded brain) and deduces the values from this specification. This is probably not a very realistic mechanism, but might still be useful for “thought experiments” to test possible definitions of imperfect rationality.
Formal communication is difficult because human values are complicated and describing them precisely is hard. A manual reward signal is more realistic than a full specification, but:

Still difficult to produce, especially if this reward is supposed to reflect the true aggregate of the human’s values as observed by em across the universe (which is what it would have to be in order to aim at the “true human utility function”).

The reward signal will become erroneous if the human, or just the communication channel between the human and the agent, is corrupted in some way. This is a serious problem since, if not taken into account, it incentivizes the agent to produce such corruption.

If the agent is supposed to aim at a very long-term goal, there might be little to no relevant information in the reward signal until the goal is attained.
Overall, it might be more realistic to rely on formal communication for tasks of limited scope (putting a strawberry on a plate) rather than actually learning human values in full (i.e. designing a sovereign). However, it is also possible to combine several mechanisms in a single protocol, and formal communication might be only one of them.
The problem of corruption may be regarded as a special case of the problem of traps (the latter was outlined in the previous section), if we assume that the agent is expected to achieve its goals without entering corrupt states. Delegative Reinforcement Learning aims to solve both problems by occasionally passing control to the human operator (“advisor”), and using it to learn which actions are safe. The analysis of DRL that I produced so far can and should be improved in multiple ways:

Instead of considering only a finite or countable set of hypotheses, we should consider a space of hypotheses of finite “dimension” (for some appropriate notion of dimension; it is a common theme in statistical learning theory that different learning setups have different natural notions of dimensionality for hypothesis classes). We should then obtain a regret bound depending on the dimension and the entropy of the prior. I believe that I already have significant progress on this point, with results expected soon.

Merge the ideas of quantilal control and catastrophe mitigation to yield a setting where corruption is quantitative/gradual rather than Boolean, and there is a low but non-vanishing rate of corruption along the advisor policy. Successful catastrophe mitigation will be achieved if it is possible without high Rényi divergence from the advisor policy, in some sense.

In its current form, DRL requires the advisor to be ready to act instead of the agent at any given moment. If the temporal rate at which actions are taken is high, this is an unrealistic demand. Naively, we can solve this by dividing time into intervals, and considering the policy on each interval as a single action. However, this would introduce an exponential penalty into the regret bound. Therefore, a more sophisticated way to manage the scheduling of control between the agent and the advisor is required, with a corresponding regret bound.
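To see where the exponential penalty comes from: treating each length-k interval as a single “meta-action” replaces an action set of size |A| with one of size |A|^k, so a generic regret bound scaling as √(|A|·T) blows up exponentially in k. A toy calculation (the √(|A|·T) shape is a stand-in assumption for whatever bound one starts from, not the actual DRL bound):

```python
import math

def regret_scale(num_actions, horizon):
    """Toy regret scale ~ sqrt(|A| * T), a generic bandit-style dependence."""
    return math.sqrt(num_actions * horizon)

A, T, k = 4, 10_000, 10
per_step = regret_scale(A, T)            # regret scale with primitive actions
meta = regret_scale(A ** k, T // k)      # action set blown up to |A|^k meta-actions
print(per_step, meta, meta / per_step)   # the ratio grows exponentially in k
```

Even though the meta-action formulation has a k-times shorter horizon, the |A|^k factor dominates, which is why a smarter scheduling scheme is needed.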

DRL assumes that the advisor takes the optimal action with at least some minimal probability. The interpretation of probability in this context requires further inquiry. Specifically, it seems that a realistic interpretation would treat this probability as pseudorandom in some sense, s.t. the agent might simultaneously employ a more refined model within which the advisor might even be deterministic. Possibly relevant is the work of Shalizi (hat tip to Alex Appel for bringing it to my attention) where ey show that under some ergodicity assumptions (which should work for us since we use finite MDPs) Bayesian updating converges to the model in the prior that is nearest to the true environment in some sense (this is reminiscent of optimal estimator theory). Thus, we can imagine having a “full-fledged” prior with refined models and a coarse prior s.t. this pseudorandom probability is defined by “projecting” to it in the Shalizi sense.

Instead of only considering MDPs with a finite state space, we can consider e.g. Feller continuous MDPs with a compact state space, and/or POMDPs. This has some conceptual importance, since it is unrealistic to assume that the advisor knows the traps of the real physical environment, but it is more realistic to assume the advisor knows the traps of its own belief state regarding the environment (see also “Corollary 1” in the essay about DIRL). However, it seems dubious to describe belief states as finite MDPs, since a probabilistic mixture of finite MDPs is not a finite MDP (but it is a finite POMDP). On the other hand, we also need to consider fuzzy/incomplete “MDPs”. As we discussed in the previous subsection, this might actually make it redundant to consider infinite MDPs and POMDPs. Note also that infinite MDPs pose computational challenges, and in particular solving even finite POMDPs is known to be PSPACE-complete.

[EDIT: Added after discussion with Jessica Taylor in the comments] The action that an RL agent takes depends both on the new observation and its internal state. Often we ignore the latter and pretend the action depends only on the history of observations and actions, and this is okay because we can always produce the probability distribution over internal states conditional on the given history. However, this is only okay for information-theoretic analysis, since sampling this probability distribution given only the history as input is computationally intractable. So, it might be a reasonable assumption that the advisor takes “sane” actions when left to its own devices, but it is, in general, not reasonable to assume the same when it works together with the AI. This is because, even if the AI behaved exactly as the advisor, it would hide the simulated advisor’s internal state, which would preclude the advisor from taking the wheel and proceeding with the same policy. We can overcome this by letting the advisor write some kind of “diary” that documents eir reasoning process, as much as possible. The diary is also considered a part of the environment (although we might want to bake into the prior the rules of operating the diary and a “cheap talk” assumption which says the diary has no side effects on the world). This way, the internal state is externalized, and the AI will effectively become transparent by maintaining the diary too (essentially the AI in this setup is emulating a “best case” version of the advisor). This idea deserves a formal analysis that explicitly models the advisor as another RL agent.
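The Shalizi-style convergence invoked above can be illustrated with a toy misspecified Bayesian update: the true environment is Bernoulli(0.5), which is absent from the prior, and the posterior concentrates on the hypothesis with the smallest KL divergence from the truth (here 0.6, since KL(0.5‖0.6) ≈ 0.02 < KL(0.5‖0.3) ≈ 0.09). The specific numbers are my own illustration, not taken from Shalizi’s paper:

```python
import math
import random

def log_likelihood(p, data):
    """Log-likelihood of Bernoulli(p) on a sequence of 0/1 observations."""
    return sum(math.log(p if x else 1 - p) for x in data)

random.seed(0)
true_p = 0.5              # the real environment, absent from the prior
models = [0.3, 0.6]       # a misspecified two-hypothesis "prior"
data = [int(random.random() < true_p) for _ in range(5000)]

# Bayesian update starting from uniform weights over the two models.
logs = [log_likelihood(p, data) for p in models]
shift = max(logs)
weights = [math.exp(l - shift) for l in logs]
posterior = [w / sum(weights) for w in weights]
print(dict(zip(models, [round(q, 4) for q in posterior])))
```

The posterior puts essentially all of its mass on the KL-nearest model 0.6, even though neither hypothesis is true; this is the sense in which a coarse prior “projects” the true environment onto its nearest representative.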
There is another issue with DRL that is worth discussing, although I am not sure whether it calls for a formal analysis soon. So far, we assumed that there are no side effects on the environment from the act of delegation itself. That is, the same action has exactly the same results whether carried out by the advisor or by the agent. Obviously, this is not realistic, since any physical isolation layer created to ensure this will not be entirely foolproof (as a bare minimum, the advisor emself will remember which actions ey took). The sole exception is, perhaps, if both the agent and the advisor are programs running inside a homomorphic cryptography box. More generally, any RL setup ignores the indirect (i.e. not mediated by actions) side effects that the execution of the agent’s algorithm has on the environment (although it is more realistic to solve this latter problem by homomorphic cryptography). This issue seems solvable via the use of incomplete/fuzzy models (see previous subsection). Although the true physical environment does have side effects as above, the model the agent tries to learn may ignore those side effects (i.e. subsume them in the “Knightian uncertainty”). Similar remarks apply to the use of a source of randomness inside the algorithm I analyzed (a form of Posterior Sampling Reinforcement Learning) that is assumed to be invisible to the environment (although it is also possible to use deterministic algorithms instead: for example, the Bayes-optimal policy is deterministic and necessarily satisfies the same Bayesian regret bound, although it is also not even weakly feasible). One caveat is the possibility of non-Cartesian daemons, defined and discussed in the next subsection.
The demonstration mechanism avoids some of the difficulties with formal communication, but has its own drawbacks. The ability to demonstrate a certain preference is limited by the ability to satisfy this preference. For example, suppose I am offered to play against Kasparov for money: if I win the game, I win $100, and if I lose the game, I lose $100. Then, I will refuse the bet because I know that I have little chance of winning. On the other hand, an AI might be able to win against Kasparov, but, seeing my demonstration, it will remain uncertain whether I avoided the game because I’m afraid to lose or because of some other reason (for example, maybe I don’t want to have more money, or maybe there is something intrinsically bad about playing chess). Therefore, it seems hard to produce a performance guarantee which will imply successfully learning the human’s preferences and significantly outperforming the human in satisfying these preferences. In particular, the regret bound I currently have for Delegative Inverse Reinforcement Learning assumes that the “advisor” (the human) already takes the optimal action with maximal likelihood among all actions on any given time step.
In other words, the advantage of providing a manual reward signal over demonstration is related to the separation between P and NP. Solving a problem is much harder than verifying a solution, and similarly, demonstrating behavior that maximizes certain values is (computationally) much harder than evaluating a behavior according to the same values.
We can use the demonstration mechanism to a much greater advantage by designing a protocol that allows the human to learn from the AI. This is because many tasks can be much easier to solve with external advice. On the other hand, the AI can tell which advice is useful by seeing that the human changes eir behavior. For example, suppose that the task at hand is receiving the adjacency matrices of two graphs and producing an isomorphism between them. Then, if the human is unable to always solve this on eir own, it might be impossible for the AI to unambiguously understand what the task is. However, suppose that the AI can make several plausible guesses and test them by communicating the corresponding answer to the human. It will then observe which of those guesses cause the human to endorse the answer, and will thereby become able to carry out the task on its own.
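The solve/verify asymmetry in the graph-isomorphism example is easy to make concrete: checking that a candidate permutation is an isomorphism takes O(n²) time, whereas finding one is not known to be polynomial in general. A minimal checker (the function name and the tiny graphs are my own illustration):

```python
def is_isomorphism(adj_a, adj_b, perm):
    """Verify in O(n^2) that perm maps graph A onto graph B."""
    n = len(adj_a)
    if sorted(perm) != list(range(n)):
        return False  # not a permutation of the vertices
    return all(
        adj_a[i][j] == adj_b[perm[i]][perm[j]]
        for i in range(n) for j in range(n)
    )

# The 3-path with center vertex 1, and the same path relabeled with center 0.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
B = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
print(is_isomorphism(A, B, [1, 0, 2]))  # True
print(is_isomorphism(A, B, [0, 1, 2]))  # False
```

In the protocol above, the human plays the role of this cheap verifier: endorsing or rejecting the AI’s candidate answers, even when ey could not have produced them.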
More generally, we can consider the following learning protocol that I call “Learning By Teaching” (LBT). We have our agent and two additional actors (in the simplest case, humans): an “operator” and an “advisor”. The agent can, at each given moment, decide between 3 modes:

Mode I: The operator and the advisor carry on without the AI’s input. In this case, the advisor communicates something (the “advice”) to the operator, and the operator takes actions that influence the external environment.

Mode II: The AI acts instead of the advisor. Thus, the AI communicates advice to the operator, and the operator takes external actions.

Mode III: The AI acts instead of the operator, taking external actions directly.
In all modes, all actors observe the percept produced by the environment.
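Schematically, one round of the protocol can be sketched as follows (all names and interfaces here are my own illustration, not a specification; the actors are modeled as plain callables and the environment echoes the action it receives):

```python
from enum import Enum, auto

class Mode(Enum):
    ADVISOR_ADVISES = auto()  # Mode I: advisor advises, operator acts
    AGENT_ADVISES = auto()    # Mode II: agent advises, operator acts
    AGENT_ACTS = auto()       # Mode III: agent takes external actions directly

def lbt_step(mode, agent_advice, advisor_advice, operator, agent_action, env):
    """One round of Learning By Teaching; returns the percept all actors observe."""
    if mode is Mode.AGENT_ACTS:
        return env(agent_action())
    advice = advisor_advice() if mode is Mode.ADVISOR_ADVISES else agent_advice()
    return env(operator(advice))

# Toy instantiation in Mode II: the agent's advice flows through the operator.
percept = lbt_step(
    Mode.AGENT_ADVISES,
    agent_advice=lambda: "hint",
    advisor_advice=lambda: "advice",
    operator=lambda advice: f"act-on-{advice}",
    agent_action=lambda: "direct-action",
    env=lambda action: f"percept({action})",
)
print(percept)  # percept(act-on-hint)
```

The key structural point the sketch captures is that the operator is always the one taking external actions in modes I and II, and the returned percept is shared by everyone.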
Note that the semantics of the advisor’s communications are a priori unknown to the agent, and ultimately it seems reasonable that this communication will use natural language, so this learning protocol may be said to incorporate both demonstration and informal communication.
Now, the AI is at risk of committing dangerous actions both in mode II and in mode III. In particular, there might be some forms of advice that will corrupt the operator and cause irreversible damage. However, like in DRL, it can mitigate this risk by learning from the advisor and the operator which actions are safe.
Finally, whatever the ultimate value learning protocol will be, it is desirable to have it grounded in a coherent theory of imperfect rationality. Conversely, I believe that a reasonable theory of imperfect rationality should admit a value learning protocol (i.e. the concept of “values” should be observable and measurable in an appropriate sense). Specifically, LBT suggests three types of “flaws” an agent is allowed to have while maintaining particular values:

Its modeling abilities are limited: some computable and even efficiently computable models don’t appear in its prior. This flaw is entirely relative (i.e. some agents are more limited than others), since any feasible agent is limited.

Some events might result in a plastic response of the agent (“corruption”) which makes it irreversibly lose rationality and/or alignment with its initial values. For this to be consistent with well-defined values, we need to assume that, “left to its own devices”, the agent only becomes corrupt at a small rate. A peculiar thing about this assumption is that it depends on the environment rather than only on the agent. This seems unavoidable. The philosophical implication is that the values of an agent (and perhaps thereby also its “identity” or “consciousness”) reside not only inside the agent (in the case of a human, the brain) but also in its environment. They are contextual. Indeed, if we imagine a whole brain emulation of a human transmitted to aliens in a different dimension with completely different physics, who know nothing about our universe, it seems impossible for those aliens to reconstruct the human’s values. Placing the brain in arbitrary environments might lead to “reprogramming” it with a wide array of different values. Moreover, assuming that the environment is one that is plausible to “naturally” contain a human brain also doesn’t solve the problem: malign superintelligences across the multiverse might exploit this assumption by creating human brains in odd environments on purpose (see Christiano’s closely related discussion of why the universal prior is malign).

The agent’s policy might involve significant random noise, for example s.t. only the maximal likelihood policy is “rational” even given the two previous flaws (like the advisor in DIRL). Like in the discussion of DRL above, this requires some nuanced analysis of what counts as “random”: some process might appear random given a certain level of computational resources and predictable given a higher level. Therefore, we might need to “project” the actual agent onto a suitable bounded model thereof.
In my opinion, the theory of imperfect rationality suggested by these considerations might already be close to capturing all the nuances of human irrationality.
Taming daemons
“Daemons” are intelligent agents that are simulated, within some degree of fidelity, inside the AI’s reasoning process and whose malign agendas can render the entire system unaligned even if the AI’s “main” reasoning process is correctly designed to acquire human values in some sense. The aim of this part of the agenda is to formalize the problem and provide solutions in the form of theoretical guarantees.
I distinguish between two types of daemons: Cartesian and non-Cartesian. Cartesian daemons are those that are dangerous even if the Cartesian boundary between the AI and the rest of the universe is perfect. In other words, Cartesian daemons influence the world through the “legitimate” output channels given to the AI, while keeping the computational process inside the AI’s hardware conformant to the software model assumed by the designers. On the other hand, non-Cartesian daemons rely on violations of the Cartesian boundary: they either exploit some physical side effects of the computation on the world, or “hack” the AI from the inside in a way that breaks its software model (e.g. violating an abstraction layer in the software). Of course, we might have a single (in some sense) subagent that utilizes both Cartesian and non-Cartesian attack mechanisms. In the following, we will refer to a “Cartesian daemon” in the sense of “a Cartesian attack mechanism of some daemon”, and similarly for a “non-Cartesian daemon”.
The taming of Cartesian daemons is guaranteed by the same regret bounds that guarantee the AI’s alignment in general. For example, we can examine how it works in the case of the acausal attack identified by Christiano. In this scenario, some malign superintelligence (let’s call it Mu) somewhere in the multiverse simulates an environment very similar to that experienced by the AI we constructed on Earth. This makes the AI uncertain about whether it is on Earth or in Mu’s realm, under some conditions assigning a higher probability to Mu. Mu then arranges the environment to diverge from Earth at some pivotal point s.t. the AI’s expectation of this divergence leads it to perform malign actions before the divergence happens. Now, if the AI is implementing DRL, the uncertainty between Earth and Mu leads it to delegate to the advisor precisely at the moment this difference is important. Seeing the advisor display Earthlike rather than Mulike behavior, it is then able to discard the Mu hypothesis.
Of course, it is important to make the discussion above (and more generally, the performance guarantee that implies this discussion as a special case) quantitative, which will require a better understanding of universal priors and their corresponding regret bounds, as discussed before. In particular, it appears that for some parameters the acausal attack might still succeed, if Mu manages to make the malign hypothesis overwhelmingly more likely than Earth. Avoiding this might require carefully defining the prior, for example counting the AI’s own source code as only a single unit of description complexity, which might lower the description length cost of locating Earth in the universe considerably.
Non-Cartesian daemons can theoretically be completely eliminated by placing the AI in a homomorphic encryption box. This relies mainly on two assumptions:

The complexity-theoretic conjecture underlying the cryptographic system is sound (specifically, the stronger conjecture that implies safety against quantum computing attacks as well).

The physical universe is, in some sense, in the class BQP (as our current understanding of physics seems to indicate), or at least any stronger-than-BQP layer would be inaccessible from the box (which seems likely due to the usual separation of scales in physics).
Of course, one would also have to take care that the implementation of this idea is sound; for example, the key should probably be generated from quantum randomness and kept at a location sufficiently remote/isolated (in some sense that requires careful physical/cryptographic analysis) from the AI’s hardware (with linking communication channels for encrypted inputs/outputs).
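As a toy illustration of computing on encrypted data: textbook RSA is multiplicatively homomorphic, so an untrusted party can multiply plaintexts it cannot see. (This is only a didactic sketch: unpadded RSA is insecure, and the proposal above requires fully homomorphic encryption, not a single-operation scheme.)

```python
# Textbook RSA with the classic tiny parameters: p = 61, q = 53,
# n = 3233, e = 17, d = 2753. Real keys would be thousands of bits.
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)   # encrypt: m^e mod n

def dec(c):
    return pow(c, d, n)   # decrypt: c^d mod n

# Multiplicative homomorphism: Enc(a) * Enc(b) = Enc(a * b) (mod n),
# so the product of plaintexts is computed without ever decrypting.
a, b = 4, 5
product_ct = (enc(a) * enc(b)) % n
print(dec(product_ct))  # 20
```

A homomorphic encryption box for an AI would need the analogous property for arbitrary circuits, which is exactly what makes the performance overhead discussed next so large.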
More troubling is the large performance overhead this would impose. This overhead might make the hypothetical aligned AI uncompetitive relative to unaligned projects. It might be possible to tolerate this overhead if either there is considerable computing overhang (i.e. by the time the relevant algorithms are discovered, the hardware is sufficient to quickly grow a superintelligence even given this overhead, resulting in a singleton) or the aligned AI is a national or multinational project whose budget allows it to be orders of magnitude ahead of the competition in terms of computing power. However, since this is far from certain, designing alternative solutions is desirable.
One alternative approach is employing extremely meticulous formal verification. That is, a physical model of the AI’s software, hardware, and immediate environment will be developed, and it will be formally verified that:

It is not possible to “hack” the model from within, i.e. if the system is intact initially it will remain intact barring some extreme external influence.

The communication bandwidth from the AI’s software to the outside bypassing the legitimate output channels is very small (it is probably physically impossible to make it zero).
We will then need some analysis which shows that, given the very small “illegitimate” bandwidth, the aligned AI will “defeat” the unaligned daemons with high probability. This requires some game-theoretic model incorporating physical assumptions, the shape of which is currently unknown to me. Needless to say, we will also have to make sure the implementation fits the model, which might require, for example, extreme means to isolate the hardware from contaminants. There will be a tradeoff between making the model simpler and thus easier to verify, versus making the model more complex and thus easier to physically protect and possibly allowing for more efficient hardware.
Also, at some point the AI should be mature enough to design an anti-daemon immune system for itself. However, it is not clear whether that simplifies the challenge, since we have to ensure the daemons will not grow out of control before this point.
Recursive self-improvement
The aim of this part of the agenda is to formalize and analyze the concept of “recursive self-improvement” in learning-theoretic language.
Recursive self-improvement as a method of extremely rapid capability growth is an intriguing idea; however, so far it has little rigorous support. Moreover, it is far from clear that the first AGI will be recursively self-improving, even if the concept is sound. Therefore, I do not see it as a high-priority item on the agenda. Nevertheless, it is worth some attention, both because of the capability angle and because of possible applications to decision theory.
At present, I have only a few observations on how the subject might be approached:

The dangers of self-modification can be naturally regarded as “traps”, due to the irreversible nature of self-modification. Therefore, it seems appropriate to address them by the mechanisms of DRL. The way I expect it to cash out in practice is: (i) the AI will initially not self-modify directly, but only by suggesting a self-modification and delegating its acceptance to the advisor, (ii) a self-modification will only be approved if “annotated” by a natural language explanation of its safety and merit, (iii) the explanation will be honest (non-manipulative) due to being sampled out of the space of explanations that the advisor might have produced by emself.

One way to view self-modification is as a game, where the players are all the possible modified versions of the agent. The state of the environment also includes the modification state of the agent, so at each state there is one player that is in control (a “switching controller” stochastic game). Therefore, if we manage to prove theoretical guarantees about such games (e.g. for fuzzy reinforcement learning), these guarantees have implications for self-modification. It might be possible to assume the game is perfectly cooperative, since all modifications that change the utility function can be regarded as traps.

The initial algorithm of the AI will already be computationally feasible (e.g. polynomial time) and will probably satisfy a regret bound close to the best possible. However, polynomial time is only a qualitative property: indeed, we cannot be more precise without choosing a specific model of computation. Therefore, it might be that the capability gains from self-improvement should be regarded as tailoring the algorithm to the particular model of computation (i.e. hardware). In other words, it involves the algorithm learning the hardware on which it is implemented (we could provide it with a formal specification of this hardware, but this doesn’t gain much, as the agent would not know, initially, how to effectively utilize such a specification). Now, every improvement gained speeds up further improvements, but there is also some absolute upper bound: the best possible implementation. It is therefore interesting to understand whether there are asymptotic regimes in which the agent undergoes exponentially fast improvement during some local period of time, and if so, how realistic these regimes are.
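A toy model of the last point (entirely my own illustration): let capability grow at a rate proportional both to current capability and to the remaining headroom toward the best possible implementation. This gives logistic dynamics, which look exponential while capability is far below the cap and then saturate:

```python
def improve(i0, i_max, rate, steps):
    """Discrete logistic self-improvement: gains speed further gains,
    but are capped by the best possible implementation i_max."""
    history = [i0]
    for _ in range(steps):
        i = history[-1]
        history.append(i + rate * i * (1 - i / i_max))
    return history

traj = improve(i0=1.0, i_max=1000.0, rate=0.5, steps=40)
# Early phase grows roughly geometrically (~1.5x per step);
# the late phase saturates just below i_max.
print(traj[5], traj[-1])
```

Whether any realistic self-improvement process actually sits in such a regime, and for how long the quasi-exponential phase lasts, is exactly the open question posed above.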
Summary
In this section, I recap and elaborate the main features of the agenda as I initially stated them.

Viewing AI alignment theory as part of a general abstract theory of intelligence: The “general abstract theory of intelligence” is implemented in this agenda as the theory of universal reinforcement learning. All the other parts of the agenda (value learning, daemons, selfimprovement) are grounded in this theory.

Using desiderata and axiomatic definitions as starting points, rather than specific algorithms and constructions: The main goal of this agenda is establishing which theoretical guarantees (in particular, in the form of regret bounds, but other types may also appear) can and should be satisfied by intelligent agents in general and aligned intelligent agents (i.e. value learning protocols) in particular. Any specific algorithm is mostly just a tool for proving that a certain guarantee can be satisfied, and its sole motivation is this guarantee. Over time these algorithms might evolve into something close to a practical design, but I also have no compunctions about discarding them.

Formulating alignment problems in the language of learning theory: The agenda is “conservative” in the sense that its tools are mostly the same as used by mainstream AI researchers, but of course the priorities and objectives are quite different.

Evaluating solutions by their formal mathematical properties, ultimately aiming at a quantitative theory of risk assessment: So far the properties I derived were rather coarse and qualitative, and the models were also coarse and grossly oversimplified. However, as stated in the “philosophy” section, I see it as a reasonable starting point for further growth. Ultimately these mathematical properties should become sufficiently refined to translate to real-world implications (such as, what is the probability the AI will be misaligned, or what is the time from launch to pivotal event; of course, realistically, these will always be estimates with considerable error margins).

Relying on the mathematical intuition derived from learning theory to pave the way to solving philosophical questions: In particular, the way I approach questions such as “what is imperfect rationality?”, “how to use inductive reasoning without assuming Cartesian duality?” and “how to deal with decision-theoretic puzzles?” is guided by what seems natural within the framework of learning theory, and variants of reinforcement learning in particular. I see it as tackling the problems head-on, as opposed to approaches which use causal networks or formal logic, which IMO involve more assumptions that don’t follow from the formulation of the problem.
This agenda is not intended as a territorial claim on my part. On the contrary, I encourage other researchers to work on parts of it or even adopt it entirely, whether in collaboration with me or independently. Conversely, I am also very interested to hear criticism.
I don’t understand this claim. It seems to me that human brains appear to be “an enormous kludge of heuristics designed by trial and error”. Shouldn’t the success of humans be evidence for the latter?
The fact that the human brain was designed by trial and error is a given. However, we don’t really know how the brain works. It is possible that the brain contains a simple mathematical core, possibly implemented inefficiently and with bugs and surrounded by tonnes of legacy code, but nevertheless responsible for the broad applicability of human intelligence.
Consider the following two views (which might also admit some intermediates):
View A: There exists a simple mathematical algorithm M that corresponds to what we call “intelligence” and that allows solving any problem in some very broad natural domain D.
View B: What we call intelligence is a collection of a large number of unrelated algorithms tailored to individual problems, and there is no “meta-algorithm” that produces them aside from relatively unsophisticated trial and error.
If View B is correct, then we expect that doing trial and error on a collection X of problems will produce an algorithm that solves problems in X and almost only in X. The probability that you were optimizing for X but solved a much larger domain Y is vanishingly small: it is about the same as the probability that a completely random algorithm solves all problems in Y∖X.
If View A is correct, then we expect that doing trial and error on X has a non-negligible chance of producing M (since M is simple and therefore sampled with relatively large probability), which would then be able to solve all of D.
So, the fact that homo sapiens evolved in some prehistoric environment but was able to e.g. land on the moon should be surprising to everyone with View B, but not surprising to those with View A.
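The argument above can be illustrated with a hedged toy calculation (the numbers `n_extra` and `K` are illustrative assumptions, not from the discussion): under a coin-flip model of "kludge" generalization and a simplicity prior over algorithms, a simple general core being found is by far the more probable explanation of out-of-domain success.

```python
from fractions import Fraction

# Toy model of the View A vs. View B argument. Under View B, a kludge
# optimized for X solves each extra problem in Y \ X independently with
# probability 1/2. Under View A, trial and error samples the simple core
# M (description length K bits) with prior probability 2^-K.
n_extra = 100   # |Y \ X|: problems far outside the training domain
K = 50          # assumed description length of the simple core M, in bits

p_kludge_generalizes = Fraction(1, 2) ** n_extra
p_core_is_found = Fraction(1, 2) ** K

# Whenever K < n_extra, "a simple core exists and was found" is the far
# more probable explanation of out-of-domain success (e.g. moon landings).
assert p_core_is_found > p_kludge_generalizes
```

The specific numbers don't matter; the point is only that the kludge explanation pays an exponential penalty in the number of out-of-domain successes, while the simple-core explanation pays a fixed penalty in description length.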
I think the most plausible view is: what we call intelligence is a collection of a large number of algorithms and innovations each of which slightly increases effectiveness in a reasonably broad range of tasks.
To see why both View A and View B seem strange to me, consider the analog for physical tasks. You could say that there is a simple core to human physical manipulation which allows us to solve any problem in some very broad natural domain. Or you could think that we just have a ton of tricks for particular manipulation tasks. But neither of those seems right: there is no simple core to the human body plan, yet at the same time it contains many features which are helpful across a broad range of tasks.
Regarding the physical manipulation analogy: I think that there actually is a simple core to the human body plan. This core is, more or less: a spine, two arms with joints in the middle, two legs with joints in the middle, feet, and hands with fingers. This is probably already enough to qualitatively solve more or less all physical manipulation problems humans can solve. All the nuances are needed to make it quantitatively more efficient and to deal with the detailed properties of biological tissues, biological muscles, et cetera (the latter might be considered analogous to the detailed properties of computational hardware and input/output channels for brains/AGIs).
I think that your view is plausible enough. However, if we focus only on qualitative performance metrics (e.g. time complexity up to a polynomial, regret bound up to logarithmic factors), then this collection probably includes only a small number of innovations that are important.
It’s pretty clear that humans avoid traps using thinking, not just learning. See: CFCs, mutually assured destruction. Yes, principles of thinking can be learned, but then they generalize better than learning theory can prove.
See also: Not just learning
When I say “learning” I only mean that the true environment is initially unknown. I’m not assuming anything about the internals of the algorithm. So, the question is, what desiderata can we formulate that are possible to satisfy by any algorithm at all. The collection of all environments is not learnable (because of traps), so we cannot demand the algorithm to be asymptotically optimal on every environment. Therefore, it seems like we need to assume something about the environment, if we want a definition of intelligence that accounts for the effectiveness of intelligence. Formulating such an assumption, making it rigorous, and backing it by rigorous analysis is the subproblem I’m presenting here. The particular sort of assumption I’m pointing at here might be oversimplified, but the question remains.
I agree that we’ll want some reasonable assumption on the environment (e.g. symmetry of physical laws throughout spacetime) that will enable thinking to generalize well. I don’t think that assumption looks like “it’s hard to cause a lot of destruction” or “the environment is favorable to you in general”. And I’m pretty sure that individual human lives are not the most important level of analysis for thinking about the learning required to avoid civilization-level traps (e.g. with CFCs, handling the situation required scientific and policy knowledge that no one knows at birth and no one could discover by themselves over a lifetime).
Consider also evolution. Evolution can also be regarded as a sort of reinforcement learning algorithm. So why, during billions of years of evolution, was no gene sequence created that somehow destroyed all life on Earth? It seems hard to come up with an answer other than “it’s hard to cause a lot of destruction”.
Some speculation:
I think that we have a sequence of reinforcement learning algorithms: evolution → humanity → individual human / small group (maybe followed by → AGI) s.t. each step inherits the knowledge generated by the previous step and also applies more optimization pressure than the previous step. This suggests formulating a “favorability” assumption of the following form: there is a (possibly infinite) sequence of reinforcement learning algorithms A0, A1, A2… s.t. each algorithm is more powerful than the previous one (e.g. has more computing power), and our environment has to be s.t.
(1) Running policy A0 has a small rate (at most ϵ0) of falling into traps. (2) If we run A0 for some time T0 (s.t. ϵ0T0≪1), and then run A1 after updating on the observations during T0, then A1 has a small rate (at most ϵ1) of falling into traps. (3) Ditto when we add A2
...And so forth.
The sequence {Ai} may be thought of as a sequence of agents or as just steps in the exploration of the environment by a single agent. So, our condition is that, each new “layer of reality” may be explored safely given that the previous layers were already studied.
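One hedged way to write this recursive favorability condition as a single formula (the history notation h_i is my addition, not from the comment):

```latex
% Sketch: let h_i be the history generated by running A_0, \dots, A_{i-1}
% for times T_0, \dots, T_{i-1}. The environment is "favorable" if, for
% every i,
\Pr\left[\, A_i \text{ falls into a trap during its run} \;\middle|\; h_i \,\right]
  \;\le\; \epsilon_i T_i \;\ll\; 1
% i.e. each new layer can be explored safely given the knowledge
% accumulated by the weaker algorithms that ran before it.
```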
Most species have gone extinct in the past. I would not be satisfied with an outcome where all humans die or 99% of humans die, even though technically humans might rebuild if there are any left and other intelligent life can evolve if humanity is extinct. These extinction levels can happen with foreseeable tech. Additionally, avoiding nuclear war requires continual cognitive effort to be put into the problem; it would be insufficient to use trial-and-error to avoid nuclear war.
I don’t see why you would want a long sequence of reinforcement learning algorithms. At some point the algorithms produce things that can think, and then they should use their thinking to steer the future rather than trial-and-error alone. I don’t think RL algorithms would get the right answer on CFCs or nuclear war prevention.
I am pretty sure that we can’t fully explore our current level, e.g. that would include starting nuclear wars to test theories about nuclear deterrence and nuclear winter.
I really think that you are taking the RL analogy too far here; decision-making systems involving humans have some things in common with RL, but RL theory only describes a fragment of the reasoning that these systems do.
I don’t think you’re interpreting what I’m saying correctly.
First, when I say “reinforcement learning” I don’t necessarily mean the type of RL algorithms that exist today. I just mean something that is designed to perform well (in some sense) in the face of uncertainty about the environment.
Second, even existing RL algorithms are not pure trial-and-error. For example, posterior sampling maintains a belief state about the environment and runs the optimal policy for some environment sampled from the belief state. So, if the belief state “knows” that something is a bad/good idea then the algorithm doesn’t need to actually try it.
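A minimal sketch of this point (a toy two-armed bandit with deterministic rewards; the setup is mine, not from the thread): posterior sampling acts optimally for a hypothesis drawn from its belief state, so once the posterior rules an arm out, that arm is never played again.

```python
import random

# Minimal posterior-sampling (PSRL-style) sketch for a 2-armed bandit.
# Hypotheses: H0 = "arm 0 pays 1, arm 1 pays 0", H1 = the reverse.
# The agent samples an environment from its belief state and plays the
# arm that is optimal for the sample.

def psrl_step(belief, true_env, rng):
    """belief: P(H1); returns (action, reward, updated belief)."""
    sampled_env = 1 if rng.random() < belief else 0
    action = sampled_env            # optimal arm under the sampled hypothesis
    reward = 1 if action == true_env else 0
    # Rewards are deterministic, so one observation pins down the environment.
    new_belief = 1.0 if (action == 1) == (reward == 1) else 0.0
    return action, reward, new_belief

rng = random.Random(0)
belief, total = 0.5, 0
for t in range(100):
    action, reward, belief = psrl_step(belief, true_env=1, rng=rng)
    total += reward

# After the first step the belief is certain, so at most one reward is lost:
# the algorithm never "tries" the arm its belief state knows is bad.
assert total >= 99
```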
Third, “starting nuclear wars to test theories” is the opposite of what I’m trying to describe. What I’m saying is, we already have enough knowledge (acquired by exploring previous levels) to know that nuclear war is a bad idea, so exploring this level will not involve starting nuclear wars. What I’m trying to formalize is: what kind of environments allow this to happen consistently, i.e. allow acquiring enough knowledge to deal with a trap before you arrive at the trap.
That is broad enough to include Bayesianism. I think you are imagining a narrower class of algorithms that can achieve some property like asymptotic optimality. Agree that this narrower class is much broader than current RL, though.
I agree that if it knows for sure that it isn’t in some environment then it doesn’t need to test anything to perform well in that environment. But what if there is a 5% chance that the environment is such that nuclear war is good (e.g. because it eliminates other forms of destructive technology for a long time)? Then this AI would start nuclear war with 5% probability per learning epoch. This is not pure trial-and-error but it is trial-and-error in an important relevant sense.
This seems like an interesting research approach and I don’t object to it. I would object to thinking that algorithms that only handle this class of environments are safe to run in our world (which I expect is not of this form). To be clear, while I expect that a Bayesianish agent has a good chance to avoid very bad outcomes using the knowledge it has, I don’t think anything that attains asymptotic optimality will be useful while avoiding very bad outcomes with decent probability.
After thinking some more, maybe the following is a natural way towards formalizing the optimism condition.
Let $\mathcal{H}$ be the space of hypotheses and $\xi_0 \in \Delta\mathcal{H}$ be the “unbiased” universal prior. Given any $\zeta \in \Delta\mathcal{H}$, we denote $\hat{\zeta} = E_{\mu \sim \zeta}[\mu]$, i.e. the environment resulting from mixing the environments in the belief state $\zeta$. Given an environment $\mu$, let $\pi_\mu$ be the Bayes-optimal policy for $\mu$ and $\pi^\theta_\mu$ the perturbed Bayes-optimal policy for $\mu$, where $\theta$ is a perturbation parameter. Here, “perturbed” probably means something like softmax expected utility, but more thought is needed. Then, the “optimistic” prior $\xi$ is defined as a solution to the following fixed point equation:

$$\xi(\mu) = Z^{-1}\, \xi_0(\mu)\, \exp\Big(\beta \big(E_{\mu \bowtie \pi^\theta_{\hat{\xi}}}[U] - E_{\mu \bowtie \pi_\mu}[U]\big)\Big)$$

Here, $Z$ is a normalization constant and $\beta$ is an additional parameter.
This equation defines something like a softmax Nash equilibrium in a cooperative two-player game where one player chooses μ (so that ξ is eir mixed strategy), the other player chooses π, and the utility is minus regret (alternatively, we might want to choose only Pareto-efficient Nash equilibria). The parameter β controls optimism regarding the ability to learn the environment, whereas the parameter θ represents optimism regarding the presence of slack: the ability to learn despite making some errors or doing random exploration (how to choose these parameters is another question).
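To make the fixed-point equation concrete, here is a hedged toy iteration for one-shot environments (the utility table, β, and θ are illustrative assumptions; "perturbed Bayes-optimal" is realized as softmax over the mixture's expected utilities):

```python
import math

# Toy fixed-point iteration for the "optimistic prior" equation, with
# one-shot environments: U[m][a] is the utility of action a in
# environment m, and a policy is a distribution over the two actions.
U = [[1.0, 0.0],   # environment 0: action 0 is optimal
     [0.0, 1.0]]   # environment 1: action 1 is optimal
xi0 = [0.6, 0.4]   # "unbiased" prior xi_0 (slightly asymmetric)
beta, theta = 2.0, 1.0

def softmax(xs, t):
    m = max(xs)
    es = [math.exp((x - m) / t) for x in xs]
    s = sum(es)
    return [e / s for e in es]

xi = xi0[:]
for _ in range(200):
    # expected utility of each action under the mixture \hat{xi}
    mix = [sum(xi[m] * U[m][a] for m in range(2)) for a in range(2)]
    pi = softmax(mix, theta)                      # perturbed Bayes-optimal policy
    # minus regret of this policy in each environment
    neg_regret = [sum(pi[a] * U[m][a] for a in range(2)) - max(U[m])
                  for m in range(2)]
    w = [xi0[m] * math.exp(beta * neg_regret[m]) for m in range(2)]
    Z = sum(w)
    xi = [x / Z for x in w]

# The fixed point up-weights the environment the policy can do well in:
# the optimistic prior is biased towards learnability.
assert xi[0] > 0.6
assert abs(sum(xi) - 1.0) < 1e-9
```

The interesting qualitative behavior is visible even in this degenerate case: environments where the equilibrium policy suffers high regret get exponentially down-weighted relative to ξ₀.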
Possibly, the idea of exploring the environment “layer by layer” can be recovered from combining this with hierarchy assumptions.
This seems like a hack. The equilibrium policy is going to assume that the environment is good to it in general in a magical fashion, rather than assuming the environment is good to it in the specific ways we should expect given our own knowledge of how the environment works. It’s kind of like assuming “things magically end up lower than you expected on priors” instead of having a theory of gravity.
I think there is something like a theory of gravity here. The things I would note about our universe that make it possible to avoid a lot of traps include:
Physical laws are symmetric across spacetime.
Physical laws are spatially local.
The predictable effects of a local action are typically local; most effects “dissipate” after a while (e.g. into heat). The butterfly effect is evidence for this rather than against this, since it means many effects are unpredictable and so can be modeled thermodynamically.
When small changes have big and predictable effects (e.g. in a computer), there is often agentic optimization power towards the creation and maintenance of this system of effects, and in these cases it is possible for at least some agents to understand important things about how the system works.
Some “partially-dissipated” effects are statistical in nature. For example, an earthquake hitting an area has many immediate effects, but over the long term the important effects are things like “this much local productive activity was disrupted”, “this much local human health was lost”, etc.
You have the genes that you do because evolution, which is similar to a reinforcement learning algorithm, believed that these genes would cause you to survive and reproduce. If we construct AI systems, we will give them code (including a prior) that we expect to cause them to do something useful for us. In general, the agency of an agent’s creator should affect the agent’s beliefs.
If there are many copies of an agent, and successful agents are able to repurpose the resources of unsuccessful ones, then different copies can try different strategies; some will fail but the successful ones can then repurpose their resources. (Evolution can be seen as a special case of this)
Some phenomena have a “fractal” nature, where a small thing behaves similarly to a big thing. For example, there are a lot of similarities between the dynamics of a nation and the dynamics of a city. Thus small things can be used as models of big things.
If your interests are aligned with those of agents in your local vicinity, then they will mostly try to help you. (This applies to parents making their children’s environment safe)
I don’t have an elegant theory yet but these observations seem like a reasonable starting point for forming one.
I think that we should expect evolution to give us a prior that is a good lossy compression of actual physics (where “actual physics” means those patterns the universe has that can be described within our computational complexity bounds). Meaning that, on the one hand, it should have low description complexity (otherwise it will be hard for evolution to find it), and on the other hand, it should assign high probability to the true environment (in other words, the KL divergence of the true environment from the prior should be small). And it should also be approximately learnable; otherwise it won’t go from assigning high probability to actually performing well.
The principles you outlined seem reasonable overall.
Note that the locality/dissipation/multi-agent assumptions amount to a special case of “the environment is effectively reversible (from the perspective of the human species as a whole) as long as you don’t apply too much optimization power” (“optimization power” probably translates to divergence from some baseline policy, plus maybe computational complexity considerations). Now, as you noted before, actual macroscopic physics is not reversible, but it might still be effectively reversible if you have a reliable long-term source of negentropy (like the sun). Maybe we can also slightly relax these assumptions by allowing irreversible changes as long as they are localized and the available space is sufficiently big.
“If we construct AI systems, we will give them code (including a prior) that we expect to cause them to do something useful for us. In general, the agency of an agent’s creator should affect the agent’s beliefs” is essentially what DRL does: allows transferring our knowledge to the AI without hardcoding it by hand.
“When small changes have big and predictable effects (e.g. in a computer), there is often agentic optimization power towards the creation and maintenance of this system of effects, and in these cases it is possible for at least some agents to understand important things about how the system works” seems like it would allow us to go beyond effective reversibility, but I’m not sure how to formalize it or whether it’s a justified assumption. One way towards formalizing it is: the prior is s.t. studying the approximate communication class of the initial state allows determining the entire environment. But this seems to point at a very broad class of approximately learnable priors w/o specifying a criterion for choosing among them.
Another principle that we can try to use is, the ubiquity of analytic functions. Analytic functions have the property that, knowing the function in a bounded domain allows extrapolating it everywhere. This is different from allowing arbitrary computable functions which may have “if” clauses, so that studying the function in a bounded domain is never enough to be sure about its behavior outside it. In particular, this line of inquiry seems relatively easy to formalize using continuous MDPs (although we run into the problem that finding the optimal policy is infeasible, in general). Also, it might have something to do with the effectiveness of neural networks (although the popular ReLU response function is not analytic).
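The analyticity point can be illustrated with a hedged toy (the functions and sample points are mine): a polynomial observed on a bounded domain extrapolates exactly, while a computable function with an “if” clause that agrees with it on that domain does not.

```python
# Toy illustration: interpolate three exact samples from [0, 1] by a
# quadratic, then extrapolate far outside the sampled domain.
def fit_quadratic(xs, ys):
    """Return the quadratic through three points, via Lagrange interpolation."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

analytic = lambda x: x * x                      # an entire (analytic) function
piecewise = lambda x: x * x if x <= 1 else 0.0  # same on [0, 1], "if" outside

xs = [0.0, 0.5, 1.0]
p_analytic = fit_quadratic(xs, [analytic(x) for x in xs])
p_piecewise = fit_quadratic(xs, [piecewise(x) for x in xs])

# Knowing an analytic function on a bounded domain determines it everywhere...
assert abs(p_analytic(10.0) - analytic(10.0)) < 1e-9
# ...but the "if" clause makes the bounded-domain data uninformative outside.
assert abs(p_piecewise(10.0) - piecewise(10.0)) > 1.0
```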
Actually, I am including Bayesianism in “reinforcement learning” in the broad sense, although I am also advocating for some form of asymptotic optimality (importantly, it is not asymptotic in time like often done in the literature, but asymptotic in the time discount parameter; otherwise you give up on most of the utility, like you pointed out in an earlier discussion we had).
In the scenario you describe, the agent will presumably discard (or strongly penalize the probability of) the pro-nuclear-war hypothesis first, since the initial policy loses value much faster on this hypothesis compared to the anti-nuclear-war hypothesis (since the initial policy is biased towards the more likely anti-nuclear-war hypothesis). It will then remain with the anti-nuclear-war hypothesis and follow the corresponding policy (of not starting nuclear war). Perhaps this can be formalized as searching for a fixed point of some transformation.
Consider a panel with two buttons, A and B. One button sends you to Heaven and one to Hell, but you don’t know which is which and there is no way to check without pressing one. To make it more fun, you have to choose a button within one minute or you go to Hell automatically.
So, there are two environments: in environment X, button A corresponds to Heaven, and in environment Y, button B corresponds to Heaven. Obviously both cannot be in a learnable class simultaneously. So, at least one of them has to be ruled out (and if we also want to preserve symmetry, then both). What sort of assumption do you think will rule them out?
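A quick sketch of the obstruction (a toy one-shot formalization, assuming utility 1 for Heaven and 0 for Hell): any policy, deterministic or randomized, suffers worst-case regret of at least 1/2 over the two environments, and there is no data to shrink it.

```python
# Two environments: X (button A = Heaven) and Y (button B = Heaven).
# A policy is just a probability p of pressing A; no observation is
# available before the single irreversible choice.
def worst_case_regret(p_press_A):
    utility_X = p_press_A          # in X, pressing A yields utility 1
    utility_Y = 1.0 - p_press_A    # in Y, pressing B yields utility 1
    return max(1.0 - utility_X, 1.0 - utility_Y)

best = min(worst_case_regret(p / 100) for p in range(101))

# Even the minimax policy (a fair coin) loses half the value in the worst
# case, so no algorithm can be asymptotically optimal on both environments.
assert abs(best - 0.5) < 1e-9
```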
I think that “scientific and policy knowledge that no one knows at birth and no one could discover by themself over a lifetime” is absolutely compatible with the hypothesis I outlined, even in its most naive form. If humanity’s progress is episodic RL where each human life is an episode, then of course each human uses the knowledge accumulated by previous humans. This is the whole idea of a learning algorithm in this setting.
Also, I think that success with CFC is not a lot of evidence against the hypothesis since, for one thing, CFC doesn’t allow a small group to easily destroy all of humanity, and for another thing, AFAIK action against CFC was only taken when some damage was already apparent. This is different from risks that have to be handled correctly on the first try.
That said, “doesn’t reflect optimistically on our chances to survive AI risk” wasn’t intended as a strong claim but as something very speculative. Possibly I should have made it clearer.
More generally, the idea of restricting to environments s.t. some base policy doesn’t fall into traps on them is not very restrictive. Indeed, for any learnable class H you can just take the base policy to be the learning algorithm itself and tautologically get a class at least as big as H. It becomes more interesting if we impose some constraints on the base policy, such as maybe restricting its computational complexity.
Intuitively, it seems alluring to say that our environment may contain X-risks, but they are s.t. by the time we face them we have enough knowledge to avoid them. However, this leads to assumptions that depend on the prior as a whole rather than on particular environments (basically, it’s not clear whether this is saying anything besides just assuming the prior is learnable). This complicates things, and in particular it becomes less clear what it means for such a prior to be “universal”. Moreover, the notion of a “trap” is not even a function of the prior regarded as a single mixed environment, but a function of the particular partition of the prior into constituent hypotheses. In other words, it depends on which uncertainty is considered subjective (a property of the agent’s state of knowledge) and which is considered objective (an inherent unpredictability of the world). For example, if we go back to the initial example but assume that there is a fair coin inside the environment that decides which button is Heaven, then instead of two environments we get one, and tautologically there is no trap.
In short, I think there is a lot more thinking to do about this question.
I don’t think we should rule either of these out. The obvious answer is to give up on asymptotic optimality and do something more like utility function optimization instead. That would be moving out of the learning theory setting, which is a good thing.
Asymptotic optimality can apply to bounded optimization problems and can’t apply to civilizationlevel steering problems.
Well, we could give up on regret bounds and instead just consider algorithms that asymptotically approach Bayes-optimality. (This would not be moving out of the learning theory setting, though? At least not the way I use this terminology.) Regret bounds would still be useful in the context of guaranteeing transfer of human knowledge and values to the AGI, but not in the context of defining intelligence.
However, my intuition is that it would be the wrong way to go.
For one thing, it seems that it is computationally feasible (at least in some weak sense, i.e. for a small number of hypotheses s.t. the optimal policy for each is feasible) to get asymptotic Bayes-optimality for certain learnable classes (PSRL is a simple example) but not in general. I don’t have a proof (and I would be very interested to see either a proof or a refutation), but it seems to be the case AFAIK.
For another thing, consider questions such as: why do intelligent agents outcompete instinct-based agents, and why can general intelligence (i.e. Bayes optimality, or at least some notion of good performance w.r.t. a prior that is “universal” or “nearly universal” in some sense) be developed by evolution in a rather restricted environment? These questions seem much easier to answer if intelligence has some frequentist property (i.e. it is in some sense effective in all or most environments) than if intelligence has only purely Bayesian properties (i.e. it is only good on average w.r.t. some very broad ensemble of environments).
I don’t understand why you think this. Suppose there is some simple “naturalized AIXI”ish thing that is parameterized on a prior, and there exists a simple prior for which an animal running this algorithm with this prior does pretty well in our world. Then evolution may produce an animal running something like naturalized AIXI with this prior. But naturalized AIXI is only good on average rather than guaranteeing effectiveness in almost all environments.
My intuition is that it must not be just a coincidence that the agent happens to work well in our world; otherwise your formalism doesn’t capture the concept of intelligence in full. For example, we are worried that a UFAI would be very likely to kill us in this particular universe, not just in some counterfactual universes. Moreover, Bayesian agents with simple priors often do very poorly in particular worlds, because of what I call “Bayesian paranoia”. That is, if your agent thinks that lifting its left arm will plausibly send it to hell (a rather simple hypothesis), it will never lift its left arm and learn otherwise.
In fact, I suspect that a certain degree of “optimism” is inherent in our intuitive notion of rationality, and it also has a good track record. For example, when scientists did early experiments with electricity, or magnetism, or chemical reactions, their understanding of physics at the time was arguably insufficient to know this would not destroy the world. However, there were few other ways to go forward. AFAIK the first time anyone seriously worried about a physics experiment was the RHIC (unless you also count the Manhattan project, when Edward Teller suggested the atom bomb might create a self-sustaining nuclear fusion reaction that would envelop the entire atmosphere). These latter concerns were only raised because we already knew enough to point at specific dangers. Of course this doesn’t mean we shouldn’t be worried about X-risks! But I think that some form of a priori optimism is likely to be correct, in some philosophical sense. (There was also some thinking in that direction by Sunehag and Hutter, although I’m not sold on the particular formalism they consider.)
I think I understand your point better now. It isn’t a coincidence that an agent produced by evolution has a good prior for our world (because evolution tries many priors, and there are lots of simple priors to try). But the fact that there exists a simple prior that does well in our universe is a fact that needs an explanation. It can’t be proven from Bayesianism; the closest thing to a proof of this form is that computationally unbounded agents can just be born with knowledge of physics if physics is sufficiently simple, but there is no similar argument for computationally bounded agents.
I am not proposing this. I am proposing doing something more like AIXI, which has a fixed prior and does not obtain optimality properties on a broad class of environments. It seems like directly specifying the right prior is hard, and it’s plausible that learning theory research would help give intuitions/models about which prior to use or what nonBayesian algorithm would get good performance in the world we actually live in, but I don’t expect learning theory to directly produce an algorithm we would be happy with running to make big decisions in our universe.
Yes, I think that we’re talking about the same thing. When I say “asymptotically approach Bayes-optimality” I mean the equation from Proposition A.0 here. I refer to this instead of just Bayes-optimality, because exact Bayes-optimality is computationally intractable even for a small number of hypotheses, each of which is a small MDP. However, even asymptotic Bayes-optimality is usually only tractable for some learnable classes, AFAIK: for example, if you have environments without traps then PSRL is asymptotically Bayes-optimal.
If RL is using human lives as episodes then humans should already be born with the relevant knowledge. There would be no need for history since all learning is encoded in the policy. History isn’t RL; it’s data summarization, model building, and intertemporal communication.
This seems to be interpreting the analogy too literally. Humans are not born with the knowledge, but they acquire the knowledge through some protocol that is designed to be much easier than rediscovering it. Moreover, by “reinforcement learning” I don’t mean the same type of algorithms used for RL today, I only mean that the performance guarantee this process satisfies is of a certain form.
This rules out environments in which the second law of thermodynamics holds.
No, it doesn’t rule out any particular environment. A class that consists only of one environment is tautologically learnable, by the optimal policy for this environment. You might be thinking of learnability by anytime algorithms whereas I’m thinking of learnability by nonanytime algorithms (what I called “metapolicies”), the way I defined it here (see Definition 1).
Ok, I am confused by what you mean by “trap”. I thought “trap” meant a set of states you can’t get out of. And if the second law of thermodynamics is true, you can’t get from a highentropy state to a lowentropy state. What do you mean by “trap”?
To first approximation, a “trap” is an action s.t. taking it loses long-term value in expectation, i.e. an action outside the set A0M that I defined here (see the end of Definition 1). This set is always nonempty, since it at least has to contain the optimal action. However, this definition is not very useful when, for example, your environment contains a state that you cannot escape and also cannot avoid (the heat death of the universe might be such a state), since in this case nothing is a trap. To be more precise, we need to go from an analysis which is asymptotic in the time discount parameter to an analysis with a fixed, finite time discount parameter (similarly to how, with time complexity, we usually start from analyzing the asymptotic complexity of an algorithm, but ultimately we are interested in particular inputs of finite size). For a fixed time discount parameter, the concept of a trap becomes “fuzzy”: a trap is an action which loses a substantial fraction of the value.
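One hedged way to make the “fuzzy” notion precise at a fixed discount γ (the threshold ε and the normalization are my additions, assuming rewards in [0, 1]):

```latex
% Sketch: call an action a an \epsilon-trap in state s of environment M,
% at discount \gamma, if
Q^*_{M,\gamma}(s, a) \;<\; V^*_{M,\gamma}(s) \;-\; \frac{\epsilon}{1 - \gamma}
% i.e. taking a loses at least an \epsilon fraction of the attainable
% normalized value. Taking \gamma \to 1 with \epsilon \to 0 recovers the
% asymptotic (sharp) notion of a trap.
```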
Why would you assume the existence of an advisor who can avoid taking catastrophic actions and sometimes take an optimal action? This would require some process capable of good judgment to understand many aspects of the AI’s decisionmaking process, such as its world models (as these models are relevant to which actions are catastrophic/optimal). Are you proposing a high degree of transparency, a bootstrapping process as in ALBA, or something else?
I think that what you’re saying here can be reformulated as follows (please correct me if I end up not answering your question):
The action that an RL agent takes depends both on the new observation and on its internal state. Often we ignore the latter and pretend the action depends only on the history of observations and actions, and this is okay because we can always produce the probability distribution over internal states conditional on the given history. However, this only works for information-theoretic analysis, since sampling this probability distribution given only the history as input is computationally intractable.
So, it might be a reasonable assumption that the advisor takes “sane” actions when left to its own devices, but it is not reasonable to assume the same when it works together with the AI. This is because, even if the AI behaved exactly like the advisor, the simulated advisor’s internal state would remain hidden, which would preclude the real advisor from taking the wheel and proceeding with the same policy.
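The information-theoretic point can be illustrated with exact filtering on a toy model (a hypothetical sketch of my own; all probabilities are made up): the distribution over an agent’s internal state conditional on the observable history is always well-defined, even though computing or sampling it is intractable for realistic models, where the sum runs over exponentially many internal-state trajectories.

```python
# Hypothetical agent with a hidden internal state z in {0, 1}.
# Its action depends on both the current observation and z.
T = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}      # P(z' | z)
A = {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.6, 1: 0.4},
     (1, 0): {0: 0.2, 1: 0.8}, (1, 1): {0: 0.5, 1: 0.5}}  # P(a | z, obs)

def belief(history):
    """P(z_t | history of (obs, action) pairs), by exact forward filtering."""
    b = {0: 0.5, 1: 0.5}  # uniform prior over the internal state
    for obs, act in history:
        # condition on the action we observed the agent take
        b = {z: b[z] * A[(z, obs)][act] for z in b}
        norm = sum(b.values())
        b = {z: p / norm for z, p in b.items()}
        # advance the internal state one step
        b = {z2: sum(b[z] * T[z][z2] for z in b) for z2 in (0, 1)}
    return b

print(belief([(0, 0), (1, 1)]))  # a proper distribution over {0, 1}
```

Here the filter is cheap only because the internal state space has two elements; for an advisor whose “internal state” is a human mind, nothing like this computation is feasible, which is what motivates externalizing the state instead.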
I think this is a real problem, but we can overcome it by letting the advisor write some kind of “diary” that documents eir reasoning process, as much as possible. The diary is also considered a part of the environment (although we might want to bake into the prior the rules of operating the diary and a “cheap talk” assumption which says the diary has no side effects on the world). This way, the internal state is externalized, and the AI will effectively become transparent by maintaining the diary too (essentially, the AI in this setup is emulating a “best-case” version of the advisor). It would be great if we could turn this idea into a formal analysis.
That captures part of it, but I also don’t think the advisor takes sane actions when the AI is doing things that change the environment in ways the advisor doesn’t understand. E.g., the AI is implementing some plan to create a nuclear reactor, and the advisor doesn’t understand how nuclear reactors work.
I guess you could have the AI first write the nuclear reactor plan in the diary, but this is essentially the same thing as transparency.
Well, you could say it is the same thing as transparency. What is interesting about it is that, in principle, you don’t have to put in transparency by hand using some completely different technique. Instead, transparency arises naturally from the DRL paradigm plus some relatively mild assumptions (that there is a “diary”). The idea is that the advisor would not build a nuclear reactor without seeing an explanation of nuclear reactors, so the AI won’t do it either.
It seems like this is giving up on allowing the AI to make long-term predictions. It can make short-term, testable predictions (since, if different advisors disagree, it is possible to see who is right). But long-term predictions can’t be cheaply tested.
In the absence of long-term predictions, it still might be possible to do something along the lines of what Paul is thinking of (i.e. predicting human judgments of longer-term outcomes), but I don’t see what else you could do. Does this match your model?
I’m not giving up on long-term predictions in general. It’s just that, because of traps, some uncertainties cannot be resolved by testing, as you say. In those cases the AI has to rely on what it learned from the advisor, which indeed amounts to human judgment.