Intelligence Metrics with Naturalized Induction using UDT

Followup to: Intelligence Metrics and Decision Theory
Related to: Bridge Collapse: Reductionism as Engineering Problem

A central problem in AGI is giving a formal definition of intelligence. Marcus Hutter has proposed AIXI as a model of a perfectly intelligent agent. Legg and Hutter have defined a quantitative measure of intelligence, applicable to any suitably formalized agent, such that AIXI is the agent with maximal intelligence according to this measure.

Legg-Hutter intelligence suffers from a number of problems I have previously discussed, the most important being:

  • The formalism is inherently Cartesian. Solving this problem is known as naturalized induction and it is discussed in detail here.

  • The utility function Legg & Hutter use is a formalization of reinforcement learning, while we would like to consider agents with arbitrary preferences. Moreover, a real AGI designed with reinforcement learning would tend to wrest control of the reinforcement signal from the operators (there must be a classic reference on this but I can’t find it. Help?). It is straightforward to tweak the formalism to allow for any utility function which depends on the agent’s sensations and actions; however, we would like to be able to use any ontology for defining it.

Orseau and Ring proposed a non-Cartesian intelligence metric; however, their formalism appears to be too general: in particular, there is no Solomonoff induction or any analogue thereof; instead, a completely general probability measure is used.

My attempt at defining a non-Cartesian intelligence metric ran into problems of a decision-theoretic flavor. The way I tried to use UDT seems unsatisfactory, and later I tried a different approach related to metatickle EDT.

In this post, I claim to accomplish the following:

  • Define a formalism for logical uncertainty. When I started writing this I thought this formalism might be novel, but now I see it is essentially the same as that of Benja.

  • Use this formalism to define a non-constructive formalization of UDT. By “non-constructive” I mean something that assigns values to actions, rather than a specific algorithm like the one here.

  • Apply the formalization of UDT to my quasi-Solomonoff framework to yield an intelligence metric.

  • Slightly modify my original definition of the quasi-Solomonoff measure so that the confidence of the innate model becomes a continuous rather than discrete parameter. This leads to an interesting conjecture.

  • Propose a “preference agnostic” variant as an alternative to Legg & Hutter’s reinforcement learning.

  • Discuss certain anthropic and decision-theoretic aspects.

Logical Uncertainty

The formalism introduced here was originally proposed by Benja.

Fix a formal system F. We want to be able to assign probabilities to statements s in F, taking into account limited computing resources. Fix a natural number D, related to the amount of computing resources, which I call the “depth of analysis”.

Define P_0(s) := 1/2 for all s to be our initial prior, i.e. each statement’s truth value is decided by a fair coin toss. Now define
P_D(s) := P_0(s | there are no contradictions of length ≤ D).

Consider X to be a number in [0, 1] given by a definition in F. Then d_k(X) := “The k-th digit of the binary expansion of X is 1” is a statement in F. We define E_D(X) := Σ_k 2^{-k} P_D(d_k(X)).
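
Here is a minimal Monte-Carlo sketch of this definition for a toy “formal system”: the statements, the constraints standing in for “contradictions of length ≤ D”, and their proof lengths are all invented for illustration, and rejection sampling over three propositional variables is of course not the actual construction over F.

```python
import random

# Toy stand-ins: statements are named propositional variables, and each "constraint"
# records the length of the shortest contradiction it yields in the pretend system.
STATEMENTS = ["s", "not_s", "t"]

CONSTRAINTS = [
    (3, lambda a: a["s"] != a["not_s"]),      # "s and not s" contradicts at length 3
    (7, lambda a: not (a["s"] and a["t"])),   # pretend F refutes "s and t" only at length 7
]

def sample_assignment():
    """P_0: every statement's truth value is an independent fair coin toss."""
    return {name: random.random() < 0.5 for name in STATEMENTS}

def P_D(statement, D, n_samples=100_000):
    """Estimate P_D(statement) = P_0(statement | no contradiction of length <= D)."""
    kept = hits = 0
    for _ in range(n_samples):
        a = sample_assignment()
        if all(pred(a) for length, pred in CONSTRAINTS if length <= D):
            kept += 1
            hits += a[statement]
    return hits / kept if kept else None

print(P_D("s", 1), P_D("s", 5), P_D("s", 10))  # ~0.5, ~0.5, ~0.33: the second constraint only kicks in at D >= 7
```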

Remarks

  • Clearly, if s is provable in F then for D >> 0, P_D(s) = 1. Similarly, if “not s” is provable in F then for D >> 0, P_D(s) = 0.

  • If each digit of X is decidable in F then lim_{D → ∞} E_D(X) exists and equals the value of X according to F.

  • For s of length > D, P_D(s) = 1/2, since no contradiction of length ≤ D can involve s.

  • It is an interesting question whether lim_{D → ∞} P_D(s) exists for every s. It seems false that this limit always exists and equals 0 or 1, i.e. this formalism is not a loophole in Goedel incompleteness: to see this, consider statements that require a halting oracle of high order in the arithmetical hierarchy to decide.

  • In computational terms, D corresponds to non-deterministic space complexity. It is a space bound since we assign truth values simultaneously to all statements, so in any given contradiction it is enough to retain the “thickest” step. It is non-deterministic since it is enough for a contradiction to exist; we don’t need an actual computation which produces it. I suspect this can be made more formal using the Curry-Howard isomorphism; unfortunately, I don’t understand the latter yet.

Non-Constructive UDT

Consider a decision algorithm A for optimizing a utility U, producing an output (a “decision”) which is an element of C. Here U is just a constant defined in F. We define the U-value of c in C for A at depth of analysis D to be
V_D(c, A; U) := E_D(U | “A produces c” is true). This is only well defined as long as “A doesn’t produce c” cannot be proved at depth of analysis D, i.e. P_D(“A produces c”) > 0. We define the absolute U-value of c for A to be
V(c, A; U) := E_{D(c, A)}(U | “A produces c” is true), where D(c, A) := max {D | P_D(“A produces c”) > 0}. Of course, D(c, A) can be infinite, in which case E_∞(...) is understood to mean lim_{D → ∞} E_D(...).

For example, V(c, A; U) yields the natural values when A is an ambient control algorithm applied to, e.g., a simple model of Newcomb’s problem. To see this, note that given A’s output the value of U can be determined at a low depth of analysis, whereas the output of A itself requires a very high depth of analysis to determine.
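
To make this concrete, here is a minimal toy sketch of the Newcomb computation. It is not the actual depth-of-analysis machinery: the logical conditional E_D(U | “A produces c”) is collapsed to the single fact that, at low depth, F already proves that the predictor’s simulation of A outputs whatever A outputs, so conditioning on “A produces c” pins the prediction to c as well. The payoff table uses the standard Newcomb numbers rather than a utility rescaled to [0, 1].

```python
# Toy sketch of V(c, A; U) for Newcomb's problem (payoffs and names are illustrative).
PAYOFF = {
    # (A's action, predictor's prediction) -> payoff
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

def V(c):
    """U-value of decision c under the toy logical counterfactual 'A produces c'."""
    prediction = c  # the short proof "the predictor's simulation outputs what A outputs" survives the conditioning
    return PAYOFF[(c, prediction)]

for c in ("one-box", "two-box"):
    print(c, V(c))  # one-box 1000000, two-box 1000 -- one-boxing gets the higher value
```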

Naturalized Induction

Our starting point is the “innate model” N: a certain a priori model of the universe, including the agent G. This model encodes the universe as a sequence of natural numbers Y = (y_k) which obeys either specific deterministic or non-deterministic dynamics, or at least some constraints on the possible histories. It may or may not include information on the initial conditions. For example, N can describe the universe as a universal Turing machine M (representing G) with special “sensory” registers e. N constrains the dynamics to be compatible with the rules of the Turing machine but leaves the behavior of e unspecified. Alternatively, N can contain, in addition to M, a non-trivial model of the environment. Or N can be a cellular automaton with the agent corresponding to a certain collection of cells.
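
As a purely illustrative example of N as “constraints on the possible histories”, here is a toy predicate on history prefixes; the dynamics (a counter that increments by 1 unless overridden by an unconstrained “sensory” value, arbitrarily taken to be any number ≥ 1000) are invented for this sketch and are not part of the formalism.

```python
def N_compatible(Y):
    """True iff the whole prefix Y obeys the toy innate model N."""
    return all(b == a + 1 or b >= 1000 for a, b in zip(Y, Y[1:]))

def steps_compatible(Y):
    """Number of steps during which the history stays compatible with N
    (this is the quantity t(H) used in the next paragraph)."""
    for k in range(1, len(Y)):
        if not N_compatible(Y[:k + 1]):
            return k
    return len(Y)

print(steps_compatible([0, 1, 2, 3, 1000, 7]))  # -> 5: the transition 1000 -> 7 is the first violation of N
```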

However, G’s confidence in N is limited: otherwise it wouldn’t need induction. We cannot start with zero confidence either: it’s impossible to program a machine if you don’t have even a guess of how it works. Instead, we introduce a positive real number t which represents the timescale over which N is expected to hold. We then assign to each hypothesis H about Y (you can think of them as programs which compute y_k given the y_j for j < k; more on that later) the weight QS(H) := 2^{-L(H)} (1 - e^{-t(H)/t}). Here L(H) is the length of H’s encoding in bits and t(H) is the time during which H remains compatible with N. This is defined for N of deterministic/constraint type but can be generalized to stochastic N.

The weights QS(H), after normalization, define a probability measure on the space of hypotheses, which induces a probability measure on the space of histories Y. Thus we get an alternative to Solomonoff induction which allows G to be a mechanistic part of the universe, at the price of introducing N and t.
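
A minimal numerical sketch of these weights, with invented hypothesis names and invented values of L(H) and t(H):

```python
import math

def qs_weight(L, t_H, t):
    """QS(H) = 2^{-L(H)} * (1 - e^{-t(H)/t}); L = encoding length in bits, t_H = t(H)."""
    return 2.0 ** (-L) * (1.0 - math.exp(-t_H / t))

hypotheses = {
    "H1: short, violates N early": (10, 5.0),
    "H2: short, respects N long":  (10, 500.0),
    "H3: long, respects N long":   (25, 500.0),
}

t = 100.0  # timescale over which the innate model N is trusted
raw = {name: qs_weight(L, t_H, t) for name, (L, t_H) in hypotheses.items()}
Z = sum(raw.values())  # normalization turns the weights into the induced probability measure
for name, w in raw.items():
    print(f"{name}: QS = {w:.3e}, normalized = {w / Z:.3f}")
```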

Remarks

  • Note that time is discrete in this formalism but t is continuous.

  • Since we’re later going to use logical uncertainties wrt the formal system F, it is tempting to construct the hypothesis space out of predicates in F rather than programs.

Intelligence Metric

To assign intelligence to agents we need to add two ingredients:

  • The decoding Q: {Y} → {bit-string} of the agent G from the universe Y. For example Q can read off the program loaded into M at time k=0.

  • A utility function U: {Y} → [0, 1] representing G’s preferences. U has to be given by a definition in F. Note that N provides the ontology wrt which U is defined.

It seems tempting to define the intelligence to be E_QS(U | Q), the conditional expectation value of U for a given value of Q in the quasi-Solomonoff measure. However, this is wrong for roughly the same reasons EDT is wrong (see previous post for details).

Instead, we define I(Q_0) := E_QS(E_max(U(Y(H)) | “Q(Y(H)) = Q_0” is true)). Here the subscript “max” stands for the maximal depth of analysis, as in the construction of the absolute UDT value above.
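
Here is a structural sketch of this definition, assuming an oracle for the inner logical conditional expectation. In this toy version each hypothesis simply carries a hand-written table of counterfactual utilities; that table, the values of L(H) and t(H), and the program names are all invented, and the oracle is only a stand-in for the depth-of-analysis construction above, not part of the actual proposal.

```python
import math

def qs_weight(L, t_H, t):
    """QS(H) = 2^{-L(H)} * (1 - e^{-t(H)/t})."""
    return 2.0 ** (-L) * (1.0 - math.exp(-t_H / t))

def cond_exp_U(H, Q0):
    """Toy oracle: each hypothesis carries a table 'if the decoded program were Q0, U would be u'."""
    return H["counterfactual_U"][Q0]

def intelligence(Q0, hypotheses, t):
    """I(Q0): QS-weighted average over hypotheses of the (toy) logical conditional expectation of U."""
    total = norm = 0.0
    for H in hypotheses:
        w = qs_weight(H["L"], H["t_H"], t)
        norm += w
        total += w * cond_exp_U(H, Q0)
    return total / norm

hypotheses = [
    {"L": 12, "t_H": 300.0, "counterfactual_U": {"Q_smart": 0.9, "Q_dumb": 0.2}},
    {"L": 18, "t_H": 300.0, "counterfactual_U": {"Q_smart": 0.7, "Q_dumb": 0.3}},
]

for Q0 in ("Q_smart", "Q_dumb"):
    print(Q0, round(intelligence(Q0, hypotheses, t=100.0), 3))
```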

Remarks

  • IMO the correct way to look at this is: intelligence metric = value of decision for the decision problem “what should I program into my robot?”. If N is a highly detailed model including “me” (the programmer of the AI), this literally becomes the case. However, for theoretical analysis it is likely to be more convenient to work with a simple N (also, conceptually, it leaves room for a “purist” notion of an agent’s intelligence, decoupled from the fine details of its creator).

    • As opposed to usual UDT, the algorithm (H) making the decision (Q) is not known with certainty. I think this represents a real uncertainty that has to be taken into account in decision problems in general: the decision-maker doesn’t know her own algorithm. Since this “introspective uncertainty” is highly correlated with “indexical” uncertainty (uncertainty about the universe), it prevents us from absorbing the latter into the utility function as proposed by Coscott.

  • For high values of t, G can improve its understanding of the universe by bootstrapping the knowledge it already has. This is not possible for low values of t. In other words, if I cannot trust my mind at all, I cannot deduce anything. This leads me to an interesting conjecture: there is a critical value t* of t above which this bootstrapping becomes possible (the positive feedback loop of knowledge becomes critical), and I(Q) is non-smooth at t* (a phase transition).

  • If we wish to understand intelligence, it might be beneficial to decouple it from the choice of preferences. To achieve this we can introduce the preference formula as an unknown parameter in N. For example, if G is realized by a machine M, we can connect M to a data storage E whose content is left undetermined by N. We can then take U to be given by the formula encoded in E at time k=0. This makes I(Q) a sort of “general-purpose” intelligence while avoiding the problems associated with reinforcement learning.

  • As opposed to Legg-Hutter intelligence, there appears to be no simple explicit description of the Q* maximizing I(Q) (e.g. among all programs of a given length). This is not surprising, since computational cost considerations come into play. In this framework it appears to be inherently impossible to decouple computational cost considerations: G’s computations have to be realized mechanistically and therefore cannot be free of time costs and side-effects.

  • Ceteris paribus, Q* deals efficiently with problems like counterfactual mugging. The “ceteris paribus” qualifier is necessary here since, because of the cost and side-effects of computations, it is difficult to make absolute claims. However, it doesn’t deal efficiently with counterfactual mugging in which G doesn’t exist in the “other universe”. This is because the ontology used for defining U (which is given by N) assumes G does exist. At least this is the case for simple ontologies like those described above; possibly we can construct an N in which G might or might not exist. Also, if G uses a quantum ontology (i.e. N describes the universe in terms of a wavefunction and U computes the quantum expectation value of an operator) then it does take into account other Everett universes in which G doesn’t exist.

  • For many choices of N (for example, if G is realized by a machine M), QS-induction assigns well-defined probabilities to subjective expectations, contrary to what is expected from UDT. However:

    • This is not the case for all N. In particular, if N admits destruction of M then M’s sensations after the point of destruction are not well-defined. Indeed, we had better allow for destruction of M if we want G’s preferences to behave properly in such an event. That is, if we don’t allow it we get a “weak anvil problem”, in the sense that G experiences an ontological crisis when discovering its own mortality, and the outcome of this crisis is not obvious. Note, though, that it is not the same as the original (“strong”) anvil problem; for example, G might come to the conclusion that the dynamics of “M’s ghost” will be random in some sense.

    • These probabilities probably depend significantly on N and don’t amount to an elegant universal law for solving the anthropic trilemma.

    • Indeed, this framework is not completely “updateless”; it is “partially updated” by the introduction of N and t. This suggests we might want the updates to be minimal in some sense; in particular, t should be t*.

  • The framework suggests there is no conceptual problem with cosmologies in which Boltzmann brains are abundant. Q* wouldn’t think it is a Boltzmann brain, since the long address of Boltzmann brains within the universe makes the respective hypotheses complex, thus suppressing them, even disregarding the suppression associated with N. I doubt this argument is original, but I feel the framework validates it to some extent.