Attacking the grain of truth problem using Bayes-Savage agents

Motivated by the grain of truth problem, we propose a generalization of Bayesian inference that allows for incomplete models. Such a model can be thought of as a set of constraints on the environment which doesn’t specify it completely. This means that in addition to ordinary (probabilistic) uncertainty, another (“Knightian”) type of uncertainty is introduced. This new uncertainty is managed using Savage’s minimax regret decision rule.

Motivation

The grain of truth problem, as described by Hutter (see problem 5j), concerns the interaction between several AIXI agents. Ideally, we expect that rational agents in multi-agent scenarios should converge to behavior corresponding to some reasonable solution concept in game theory, e.g. Nash equilibrium (for the purposes of this post I completely ignore stronger desiderata such as superrationality; also, purely Bayesian agents often fail to converge because of insufficient exploration, but this is a problem mostly orthogonal to grain of truth). However, it is difficult to prove anything about multi-agent scenarios involving AIXI, since AIXI’s prior is supported on computable environments whereas AIXI itself is uncomputable. This problem survives in bounded analogues: if we limit the computational complexity of environments included in the prior, the computational complexity of the agent itself will invariably be higher. Thus, two agents of equal power seem unable to “comprehend” each other.

There are several reasons this problem seems important to AI alignment:

  1. The ability of an agent to operate in environments of complexity similar to that of the agent, or even surpassing it, seems an important desideratum for the mathematical formalization of intelligence. Without a formal understanding of how this desideratum can be satisfied, it is difficult to imagine any reliable theory of intelligent agents.

  2. Arguably, self-improving agents need the capacity to understand systems that equal them in complexity, more or less by definition.

  3. Perhaps most importantly, IRL relies on the AI’s ability to successfully reason about a complex agent (a human), presumably an agent that cannot be emulated precisely with the AI’s computational power. CIRL relies on two-sided interaction between the AI and the human, where neither side can emulate the other. Thus, it seems impossible to prove reasonable guarantees for (C)IRL under quasi-realistic assumptions without solving the grain of truth problem.

To the best of our knowledge, the most serious attack on the problem to date is the work of Leike, Taylor and Fallenstein. They prove that Thompson samplers relative to a reflective oracle converge to an “asymptotic” Nash equilibrium. In our opinion, it should be possible to extend that work to ensure subgame-perfect and even proper equilibria and, under some assumptions, allow for “tamer” forms of exploration than Thompson sampling (Thompson sampling necessarily sacrifices many horizons until convergence). There is no doubt reflective oracles constitute significant progress towards the solution, in the sense that they clearly demonstrate the grain of truth problem is not inherently unsolvable and provide a formal framework in which agents satisfying this desideratum can be studied. Nevertheless, we remain skeptical that the ultimate solution can be reached by developing that approach further.

In our opinion, there are 3 ways in which reflective oracles are an unsatisfactory solution:

  1. There are many different reflective oracles. In multi-agent scenarios, all results rely on all agents using the same reflective oracle. Realistic agents cannot use uncomputable oracles, so they will have to replace them with some sort of bounded analogues. However, there is every reason to suspect these bounded analogues will also be far from unique. Thus, it’s not clear that interaction between agents using different brands of “bounded reflective oracles” will come with any substantial guarantees. In fact, this lack of uniqueness seems directly related to the lack of uniqueness of Nash equilibria. Any solution to the grain of truth problem will have to solve equilibrium selection, but reflective oracles seem to “cheat” by implicitly selecting Nash equilibria for all possible games in advance. A more realistic model should involve some sort of negotiation process, which will probably need some non-trivial assumptions to converge.

  2. Multi-agent scenarios are a test case for the ability to reason about environments more complex than the agent itself, but the latter is more general than the former. We expect reasonable multi-agent behavior to arise as a special case of some broad capability, useful in a great variety of other scenarios. However, it is not clear which physically realistic environments, other than multi-agent scenarios, benefit from reasoning with reflective oracles. If there are few such environments, we would have to admit that either reflective oracles are an unsatisfactory model or realistic agents need to be designed qualitatively differently in order to manage multi-agent scenarios as opposed to all other complex scenarios. In other words, in the latter case there would be uncontrived examples of AIs that solve almost any problem better than humans (including proving theorems, discovering physics, inventing nanotechnology, von Neumann probes and Dyson spheres) but fail miserably in games. This possibility cannot be ruled out, but it appears to us less likely than the alternative.

  3. Reflective oracles are uncomputable, although they were proven to be limit computable. Obviously realistic agents use algorithms that are not only computable but satisfy rather stringent complexity bounds.

Of the problems above, 3 seems to be the least significant, since it is plausible that bounded analogues can be constructed. In any case, in our formalism it is also entirely unobvious how to construct computationally feasible agents. Problem 1 might be solved if a natural class of reflective oracles is found s.t. any agent that is computable w.r.t. one oracle in this class is also computable w.r.t. any other oracle in the class, at least approximately. This is not impossible but also not straightforward. Problem 2 might be the hardest.

Incomplete Models

Although an agent cannot simulate another agent of equal power without entering an infinite loop, it might be able to observe certain facts about the other agent’s behavior. This calls for a notion of an incomplete model: a model of the environment that doesn’t yield precise probabilistic predictions but provides some constraints on what might happen.

To give a trivial example, given a sequence of two bits $x_1 x_2$, the model might say that $x_1 = x_2$ without assigning any probability to the value of $x_1$. Knowing only this model, one has unquantifiable (Knightian) uncertainty between the possibilities $x_1 x_2 = 00$ and $x_1 x_2 = 11$. The space of probability distributions on two-bit sequences is a tetrahedron (the simplex spanned by the point masses $\delta_{00}, \delta_{01}, \delta_{10}, \delta_{11}$), and our model corresponds to one edge of this tetrahedron: the segment between $\delta_{00}$ and $\delta_{11}$.

More generally, we have some measurable event space $X$, and $\Delta X$ is the space of probability measures on $X$. A complete (Bayesian) model corresponds to some $\mu \in \Delta X$. An incomplete model corresponds to some convex set $M \subseteq \Delta X$ (which can be assumed closed in the strong convergence topology). If we assign probability $p$ to model $\mu$ and $1-p$ to model $\nu$, this can be represented by the model $p\mu + (1-p)\nu$. Similarly, if we assign probability $p$ to incomplete model $M$ and probability $1-p$ to incomplete model $N$, this can be represented by the model

$$pM + (1-p)N := \{\, p\mu + (1-p)\nu \,:\, \mu \in M,\ \nu \in N \,\}$$
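To make this concrete, here is a minimal sketch (my own illustration, not part of the original construction) in which an incomplete model over the four two-bit outcomes is represented by a finite list of extreme points; all names and numbers are invented for the example. The mixture $pM + (1-p)N$ is then generated by the pairwise mixtures of the extreme points.

```python
import itertools
import numpy as np

# Outcomes of the two-bit example, in a fixed order.
OUTCOMES = ["00", "01", "10", "11"]

def mix_models(p, M, N):
    """Knightian mixture p*M + (1-p)*N of two incomplete models, each given
    as a list of extreme points (probability vectors over OUTCOMES).
    The mixed model is the convex hull of the returned points."""
    return [p * mu + (1 - p) * nu for mu, nu in itertools.product(M, N)]

# The incomplete model "both bits are equal": the edge of the tetrahedron
# spanned by the point masses on 00 and on 11.
M_equal = [np.array([1.0, 0.0, 0.0, 0.0]),   # delta_00
           np.array([0.0, 0.0, 0.0, 1.0])]   # delta_11

# A complete model (a single extreme point): two independent fair coin flips.
N_fair = [np.array([0.25, 0.25, 0.25, 0.25])]

# Assigning probability 1/3 to M_equal and 2/3 to N_fair yields a new
# incomplete model, again described by its extreme points.
for mu in mix_models(1/3, M_equal, N_fair):
    print(dict(zip(OUTCOMES, mu.round(3))))
```

The countable incomplete prior discussed next is the same operation, iterated over hypotheses weighted by the prior measure.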

In ordinary Bayesian statistics, a prior is often specified by some countable space of hypotheses $H$ (i.e. for each $h \in H$ we have $\mu_h \in \Delta X$; more generally $H$ can be some measurable space with a Markov kernel from $H$ to $X$, but we won’t need that) and a probability measure $\zeta$ on $H$, so that the prior is $\sum_{h \in H} \zeta(h)\, \mu_h$. Similarly, we can have $M_h \subseteq \Delta X$ and the incomplete prior

$$\sum_{h \in H} \zeta(h)\, M_h$$

Now consider an agent interacting with an unknown environment. The agent has a finite set of actions $\mathcal{A}$ and a finite set of percepts $\mathcal{O}$. The event space is $X = \mathcal{O}^{\mathcal{A}^+}$, the space of “pure” (deterministic) environments, where $\mathcal{A}^+$ stands for non-empty finite action strings (the $\sigma$-algebra on $X$ comes from viewing it as the inverse limit of the sequence $\mathcal{O}^{\mathcal{A}^{\leq n}}$). In this context we don’t usually work with the full space $\Delta X$ of “mixed” environments but only with the quotient space of “behavioral” environments. Here, every behavioral environment $\epsilon$ is a partial function from $(\mathcal{A} \times \mathcal{O})^* \times \mathcal{A}$ to $\Delta\mathcal{O}$ s.t. $\epsilon(x, a)$ is defined iff $\epsilon$ assigns positive probability to all percepts appearing in $x$. For every $\mu \in \Delta X$, we define the corresponding $\epsilon_\mu$ by

$$\epsilon_\mu(x, a)(o) := \Pr_{e \sim \mu}\big[\, e(a_1 \ldots a_n a) = o \;\big|\; \forall i \leq n:\ e(a_1 \ldots a_i) = o_i \,\big], \qquad n := |x|$$

Here, $x_i$ is written as the pair $(a_i, o_i)$, i.e. $a_i$ stands for the first element of the pair $x_i$ and $o_i$ for the second. Note that the operation of taking convex linear combinations in $\Delta X$ descends to a well-defined operation on behavioral environments, which we will also regard as convex linear combination (but which is not the same as taking pointwise convex linear combinations of the partial functions; instead it can be written as the result of Bayesian updating).

Priors are usually constructed from environments that are in some sense computable. There are several variants of the definition, but we will consider environments that can be represented as machines

$$T : \mathcal{A}^+ \times \{0,1\}^\omega \to \mathcal{O}$$

Here, the second argument $y \in \{0,1\}^\omega$ is regarded as an infinite sequence of fair coin flips, and the notation above means that $T$ is a Turing machine that halts with probability 1 for any fixed $x \in \mathcal{A}^+$ and a sequence of fair coin flips. The corresponding environment $\mu_T \in \Delta X$ is defined by

$$\mu_T(A) := \lambda\big(\{\, y \in \{0,1\}^\omega : T(\cdot, y) \in A \,\}\big)$$

Here $\lambda$ is the fair coin probability measure on $\{0,1\}^\omega$ (it is easy to see that the set $\{y : T(\cdot, y) \in A\}$ is measurable because $T$ is a Turing machine).
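As a toy sketch of this semantics (entirely my own; `noisy_parity_env` and the action alphabet are invented): a halting randomized machine maps an action history and a stream of fair coins to a percept, and the induced percept distribution can be estimated by Monte Carlo over the coins.

```python
import random
from collections import Counter

def noisy_parity_env(actions, coins):
    """A toy randomized machine T: given an action history (a string over
    {'a', 'b'}) and an iterator of fair coin flips, return a percept in {0, 1}.
    It reports the parity of the number of 'b' actions, corrupted with
    probability 1/4 (when the consumed coins both come up 1)."""
    corrupt = next(coins) and next(coins)
    parity = actions.count("b") % 2
    return 1 - parity if corrupt else parity

def percept_distribution(machine, actions, samples=10_000):
    """Monte Carlo estimate of the percept distribution the machine induces
    for a fixed action history, averaging over fresh fair-coin streams."""
    def coin_stream():
        while True:
            yield random.randint(0, 1)
    counts = Counter(machine(actions, coin_stream()) for _ in range(samples))
    return {o: c / samples for o, c in counts.items()}

print(percept_distribution(noisy_parity_env, "abba"))   # roughly {0: 0.75, 1: 0.25}
```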

Fix a prefix-free universal Turing machine $\mathfrak{U}$ and denote by $\Pi$ the set of programs that satisfy the halting condition above. We have $\zeta \in \Delta\Pi$ defined by

$$\zeta(q) \propto 2^{-|q|}$$

For any $q \in \Pi$, we can define the environment $\mu_q$:

$$\mu_q := \mu_{T_q}, \qquad T_q(x, y) := \mathfrak{U}(q, x, y)$$

We can now define the universal prior $\xi := \sum_{q \in \Pi} \zeta(q)\, \mu_q$.

Alternatively, we can consider environments that have an unobservable state. For any behavioral environment $\epsilon$ we denote the domain of $\epsilon$ by $\operatorname{dom} \epsilon$. A computable environment with unobservable state corresponds to a machine

$$T : \{0,1\}^* \times \mathcal{A} \times \{0,1\}^\omega \to \mathcal{O} \times \{0,1\}^*$$

mapping the current unobservable state, the latest action and a sequence of fair coin flips to a percept and a new unobservable state. The corresponding behavioral environment $\epsilon_T$ is defined recursively, simultaneously with $\sigma_T : \operatorname{dom} \epsilon_T \to \Delta\{0,1\}^*$, the latter describing the probability distribution on unobservable states after a given history; the recursion starts at the empty element of $(\mathcal{A} \times \mathcal{O})^*$, where $\sigma_T$ is concentrated on the initial (empty) state.

For arbitrary computable environments the addition of an unobservable state gains us nothing. However, it makes a difference when imposing complexity bounds.

We now generalize these concepts to incomplete models. A computable incomplete model corresponds to an oracle machine

$$T^{?} : \mathcal{A}^+ \times \{0,1\}^\omega \to \mathcal{O}$$

We require that the machine halts with probability 1 for any oracle. The values returned by the oracle represent “Knightian coin flips.” Denote by $\mathcal{Q} := \{0,1\}^{\{0,1\}^*}$ the space of (deterministic) oracles. For any $\rho \in \Delta\mathcal{Q}$, we can define $\mu_{T^\rho} \in \Delta X$ by

$$\mu_{T^\rho}(A) := \Pr_{\alpha \sim \rho,\ y \sim \lambda}\big[\, T^\alpha(\cdot, y) \in A \,\big]$$

The notation $T^\alpha$ signifies we are running the machine with the oracle $\alpha$.

We define the incomplete model

$$M_T := \{\, \mu_{T^\rho} : \rho \in \Delta\mathcal{Q} \,\}$$

The $\sigma$-algebra on $\mathcal{Q}$ is defined by regarding it as an infinite product: $\mathcal{Q} = \prod_{w \in \{0,1\}^*} \{0,1\}$.
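To illustrate the difference between the ordinary coins and the Knightian ones (again a toy of my own, with invented names): for a machine that makes a bounded number of oracle queries, each fixed assignment of oracle answers yields one ordinary distribution over percepts, and the induced incomplete model is the convex hull of these; the oracle bits are never averaged out.

```python
import itertools
from fractions import Fraction

def toy_oracle_machine(oracle_answer, coin):
    """A toy oracle machine emitting one percept in {0, 1}: it asks the oracle
    a single question (a 'Knightian coin flip'); on answer 1 it outputs 1,
    on answer 0 it outputs an ordinary fair coin flip."""
    return 1 if oracle_answer else coin

def induced_distribution(oracle_answer, n_coins=1):
    """Exact percept distribution for a fixed oracle answer, averaging over
    the ordinary (probabilistic) coin flips."""
    dist = {0: Fraction(0), 1: Fraction(0)}
    for coins in itertools.product([0, 1], repeat=n_coins):
        dist[toy_oracle_machine(oracle_answer, coins[0])] += Fraction(1, 2 ** n_coins)
    return dist

# One distribution per oracle answer; the incomplete model is the convex hull
# of these two points, i.e. all distributions with P(percept = 1) in [1/2, 1].
print([induced_distribution(ans) for ans in (0, 1)])
```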

Taking a universal oracle machine $\mathfrak{U}^{?}$ and denoting by $\Pi'$ the set of programs for $\mathfrak{U}^{?}$ that satisfy our halting condition, we can define $M_q$ for any $q \in \Pi'$ in the obvious way. We define $\zeta'$ by

$$\zeta'(q) \propto 2^{-|q|}$$

This allows us to define the “universal incomplete prior” $\Xi := \sum_{q \in \Pi'} \zeta'(q)\, M_q$.

We can also represent computable incomplete models with unobservable states, using oracle machines of the form analogous to the machines with unobservable state above; the corresponding incomplete model is defined analogously.

It is straightforward to impose complexity bounds on these objects, e.g. by limiting the time, space, number of random coins or number of oracle queries the machine is allowed to use, as a function of the length of the history.

Hutter observes that it’s unknown whether Solomonoff induction can correctly predict relationships between bits in a sequence that contains uncomputable information (see problem 4g). To deal with similar issues in online learning one can use “sleeping experts” (see also subsequence induction). Here, we deal with much more general scenarios than sleeping experts by including incomplete models in our prior.

Bayes-Savage Agents

Consider an agent that has to choose from a finite strategy set $S$. The utility function of the agent is $U : S \times X \to \mathbb{R}$. If the agent’s beliefs are described by a complete model $\mu \in \Delta X$, the standard decision rule is maximizing expected utility:

$$s^* \in \operatorname*{arg\,max}_{s \in S}\ \operatorname{E}_{x \sim \mu}[U(s, x)]$$

For an incomplete model $M \subseteq \Delta X$, we propose using the minimax regret decision rule, which chooses a mixed strategy maximizing the worst-case (over $M$) negative regret, i.e. minimizing the worst-case regret:

$$s^* \in \operatorname*{arg\,max}_{s \in \Delta S}\ \min_{\mu \in M}\ \Big( \operatorname{E}_{t \sim s,\ x \sim \mu}[U(t, x)] - \max_{t' \in S} \operatorname{E}_{x \sim \mu}[U(t', x)] \Big)$$

We call agents following this rule “Bayes-Savage” agents.

Note that we lost nothing by requiring $M$ to be convex, since the minimum in the above equation is of a concave function of $\mu$ and therefore is always attained at an extreme point (so if we started from a non-convex subset of $\Delta X$, we might as well have replaced it with its convex hull).

As opposed to expected utility maximization, minimax regret might demand randomization. This is perhaps not surprising, since if we expect our agents to produce Nash equilibria, there must be randomization in their decision rule (e.g. in reflective oracles this randomization comes from the oracle itself). In fact, this decision rule can be interpreted as selecting a Nash equilibrium in a game where the strategies of the opponent (whom we dub Metanoia) are the elements of $M$, the agent’s utility function is $U$ and Metanoia’s utility function is the agent’s regret

$$R(s, \mu) := \max_{t' \in S} \operatorname{E}_{x \sim \mu}[U(t', x)] - \operatorname{E}_{t \sim s,\ x \sim \mu}[U(t, x)]$$

Equivalently (i.e. yielding the same Nash equilibria), we can make it a zero-sum game by taking negative regret to be the agent’s utility function (it makes no difference since the other term depends only on Metanoia). Note that the $\mu$ attaining the minimum above is not the Nash equilibrium strategy for Metanoia but only a pure best response to a Nash equilibrium: the former would in general be a mixed strategy.
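For finitely many pure strategies and an incomplete model given by finitely many extreme points, the minimax-regret mixed strategy can be computed by solving this zero-sum game as a linear program. The sketch below is my own illustration of that reduction (the function name, the example payoffs and the use of scipy's LP solver are assumptions, not part of the proposal).

```python
import numpy as np
from scipy.optimize import linprog

def minimax_regret_strategy(payoffs):
    """payoffs[i, j] = expected utility of pure strategy i under the j-th
    extreme point of the incomplete model M. Returns a mixed strategy
    minimizing worst-case regret, via the LP
        min r  s.t.  best[j] - sum_i p[i] * payoffs[i, j] <= r  for all j,
                     sum_i p[i] = 1,  p >= 0.
    """
    n_strat, n_env = payoffs.shape
    best = payoffs.max(axis=0)                     # max_t E_mu_j[U(t)] per column
    c = np.zeros(n_strat + 1)                      # variables (p, r)
    c[-1] = 1.0                                    # objective: minimize r
    A_ub = np.hstack([-payoffs.T, -np.ones((n_env, 1))])
    b_ub = -best
    A_eq = np.hstack([np.ones((1, n_strat)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_strat + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_strat], res.x[-1]              # mixed strategy, its worst-case regret

# Two pure strategies and two extreme environments, each favoring one of them:
# expected utility w.r.t. either extreme point picks a pure strategy, while
# minimax regret randomizes between them (Metanoia's equilibrium is mixed).
payoffs = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
print(minimax_regret_strategy(payoffs))            # approximately ([0.5, 0.5], 0.5)
```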

In most interesting settings the agent’s strategy involves making a sequence of actions between which it receives additional information. The strategy of a Bayesian agent in this case can be regarded as performing a Bayesian update after each observation and computing an optimal policy for the rest of time using the maximal expected utility rule applied to the posterior. On the other hand, the policy of a Bayes-Savage agent cannot be decomposed in this way, i.e., the policy after making an observation is not the result of applying the minimax regret rule to the incomplete posterior. We deal with this “dynamic inconsistency” simply by requiring the entire policy to be determined in advance (“updatelessly”). In practical implementations we will likely need some approximate decoupling from distant “branches” in order to make the algorithm feasible, but we leave this issue for later.

In the sequential setting, the Metanoia game is naturally regarded as having extensive form. In particular, the Nash equilibrium condition is too weak to always ensure reasonable behavior, since there might be agent information sets that are never reached in Nash equilibrium. Therefore, we need to make the decision rule more stringent by requiring e.g. quasi-perfect or proper equilibrium. We leave working out the details for later.

As a simple example of a setting where the minimax regret rule yields reasonable behavior, consider a stochastic $n$-armed bandit played for $T$ time steps. Here $X = (\mathbb{R}^n)^T$, each $x \in X$ representing a vector of payoffs for all arms and time steps, and $M$ is the convex hull of all i.i.d. distributions satisfying an appropriate moment condition. Clearly the minimax regret rule leads to behavior qualitatively similar to the UCB algorithm (at least assuming the correct handling of special information sets as suggested above).
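For reference, here is a minimal version of the UCB1 index rule that the comparison alludes to (a standard algorithm, reproduced as a sketch; it is not derived from the minimax regret rule, and the claim above is only a qualitative similarity).

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1: play each arm once, then always pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_i). `pull(i)` returns a payoff in [0, 1]."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                        # initialization: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

# Example: two Bernoulli arms with unknown means 0.3 and 0.6 (one particular
# i.i.d. environment inside the incomplete model described above).
means = [0.3, 0.6]
print(ucb1(lambda i: float(random.random() < means[i]), n_arms=2, horizon=1000))
```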

This stands in sharp contrast to the minimax decision rule:

$$s^* \in \operatorname*{arg\,max}_{s \in \Delta S}\ \min_{\mu \in M}\ \operatorname{E}_{t \sim s,\ x \sim \mu}[U(t, x)]$$

The latter leads to degenerate behavior for stochastic bandits: since there is no way to guarantee any payoff above the minimum, even if good payoffs were observed on previous time steps, all policies are equivalent from the worst-case point of view.
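A one-shot toy calculation (my own, with invented numbers) makes the contrast explicit: once the incomplete model contains an environment in which every arm pays the minimum, all strategies share the same worst-case payoff, so the minimax rule cannot distinguish them, while worst-case regret still does.

```python
import numpy as np

# Payoff of each of two arms under three extreme environments: one where
# arm 0 pays, one where arm 1 pays, and one where nothing pays.
payoffs = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

for p0 in (0.0, 0.5, 1.0):                      # a few mixed strategies
    p = np.array([p0, 1.0 - p0])
    value = p @ payoffs                         # expected payoff per environment
    regret = payoffs.max(axis=0) - value        # regret per environment
    print(f"p = {p}: worst-case payoff = {value.min():.2f}, "
          f"worst-case regret = {regret.max():.2f}")

# Every strategy has worst-case payoff 0 (minimax is indifferent), while
# worst-case regret is minimized by the uniform mixture.
```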

Finally, a Bayes-Savage agent for the universal incomplete prior $\Xi$ is a natural counterpart of AIXI in this setting. We hope that for these agents strong game-theoretic guarantees can be derived.

Discussion

We think that there is a reasonable chance the concepts outlined in this essay can lead to solving the grain of truth problem. This will require much further work. Specifically, we suggest continuing the research in approximately the following order:

  • Better understanding of minimax regret in the sequential setting, including the definition of quasi-perfect or proper strategies.

  • Proving convergence of Bayes-Savage agents to policies satisfying appropriate regret guarantees. That is, given an environment $\mu^*$ that does not appear explicitly in the prior but which satisfies $\mu^* \in M$ for an incomplete model $M$ that appears in the prior, we expect the agent to converge to a policy whose loss w.r.t. the best response to $\mu^*$ is no greater than the minimax regret for $M$.

  • Better understanding the relationship of the minimax regret rule to game theory, e.g. whether the minimax regret rule can allow a better justification of Nash equilibria than best response (i.e. in a Nash equilibrium all strategies are best responses, but not all best responses preserve the Nash equilibrium; maybe minimax regret allows doing better?).

  • Deriving game-theoretic guarantees for AIXI-like Bayes-Savage agents from the generic regret guarantees above. Presumably, this requires describing, for each agent in the game, a specific computable incomplete model consistent with the behavior of the other agents (which cannot be complete but should be sufficiently strong in some sense). In general, we expect to be able to prove e.g. convergence to iteratively undominated strategies under fairly general conditions, and convergence to Nash equilibria (preferably e.g. proper equilibria) under some assumptions on the game (e.g. time discount falling sufficiently slowly).

  • Looking for Bayes-Savage agents of bounded complexity. Here the algorithm by Koller and Megiddo for two-player zero-sum extensive games with perfect recall might be relevant (in our case the game tree is at least exponentially big w.r.t. horizon length, but such inefficient agents can be a good starting point towards understanding bounded agents in general).