Formalizing Two Problems of Realistic World Models

I’m pleased to announce a new paper from MIRI: Formalizing Two Problems of Realistic World Models.

Abstract:

An intelligent agent embedded within the real world must reason about an environment which is larger than the agent, and learn how to achieve goals in that environment. We discuss attempts to formalize two problems: one of induction, where an agent must use sensory data to infer a universe which embeds (and computes) the agent, and one of interaction, where an agent must learn to achieve complex goals in the universe. We review related problems formalized by Solomonoff and Hutter, and explore challenges that arise when attempting to formalize analogous problems in a setting where the agent is embedded within the environment.

This is the fifth of six papers discussing active research topics that we’ve been looking into at MIRI. It discusses a few difficulties that arise when attempting to formalize problems of induction and evaluation in settings where an agent must learn about (and act upon) a universe from within. These problems have been much discussed on LessWrong; for further reading, see the links below. This paper is intended to better introduce the topic and to motivate it as relevant to FAI research.

  1. Intelligence Metrics with Naturalized Induction using UDT

  2. Building Phenomenological Bridges

  3. Failures of an Embodied AIXI

  4. The Naturalized Induction wiki page

The (rather short) introduction to the paper is reproduced below.

An intelligent agent embedded in the real world faces an induction problem: how can it learn about the environment in which it is embedded, about the universe which computes it? Solomonoff (1964) formalized an induction problem faced by agents that must learn to predict an environment that does not contain them, and this formalism has inspired the development of many useful tools, including Kolmogorov complexity and Hutter’s AIXI. However, a number of new difficulties arise when the agent must learn about the environment in which it is embedded.

An agent embedded in the world also faces an interaction problem: how can an agent learn to achieve a complex set of goals within its own universe? Legg and Hutter (2007) have formalized an “intelligence measure” which scores the performance of agents that learn about and act upon an environment that does not contain the agent, but again, new difficulties arise when attempting to do the same in a naturalized setting.

This paper examines both problems. Section 2 introduces Solomonoff’s formalization of an induction problem where the agent is separate from the environment, and Section 3 discusses difficulties that arise when attempting to formalize the analogous naturalized induction problem. Section 4 discusses Hutter’s interaction problem, and Section 5 discusses an open problem related to formalizing an analogous naturalized interaction problem.

Formalizing these problems is important in order to fully understand the challenge faced by an intelligent agent embedded within the universe: a general artificial intelligence must be able to learn about the environment which computes it, and learn how to achieve its goals from inside its universe. Section 6 concludes with a discussion of why a theoretical understanding of agents interacting with their own environment seems necessary in order to construct highly reliable smarter-than-human systems.
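For readers who want the classical starting points in symbols, the two non-embedded formalisms referenced above are standardly written roughly as follows. This is textbook notation (in the style of Li–Vitányi and Legg–Hutter), not necessarily the notation used in the paper itself:

```latex
% Solomonoff's prior (monotone-machine form): the weight assigned to a
% finite bit string x is the total measure of the (minimal) programs p
% on which a universal monotone machine U produces output beginning
% with x, written U(p) = x*.
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]

% Legg-Hutter universal intelligence: a policy \pi is scored by its
% expected cumulative reward V^{\pi}_{\mu} in each computable
% environment \mu from the class E, weighted by that environment's
% simplicity 2^{-K(\mu)}, where K is Kolmogorov complexity.
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
```

In both cases the agent sits outside the environment it predicts or acts upon; the naturalized versions of these problems discussed in the paper are what you get when that assumption is dropped.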