Building Phenomenological Bridges

Naturalized induction is an open problem in Friendly Artificial Intelligence (an OPFAI). The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.

The problem’s roots lie in algorithmic information theory and formal epistemology, but finding answers will require us to wade into debates on everything from theoretical physics to anthropic reasoning and self-reference. This post will lay the groundwork for a sequence of posts (titled ‘Artificial Naturalism’) introducing different aspects of this OPFAI.

AI perception and belief: A toy model

A more concrete problem: Construct an algorithm that, given a sequence of the colors cyan, magenta, and yellow, predicts the next colored field.

Colors: CYYM CYYY CYCM CYYY ????

This is an instance of the general problem ‘From an incomplete data series, how can a reasoner best make predictions about future data?’. In practice, any agent that acquires information from its environment and makes predictions about what’s coming next will need to have two map-like1 subprocesses:

1. Something that generates the agent’s predictions, its expectations. By analogy with human scientists, we can call this prediction-generator the agent’s hypotheses or beliefs.

2. Something that transmits new information to the agent’s prediction-generator so that its hypotheses can be updated. Employing another anthropomorphic analogy, we can call this process the agent’s data or perceptions.

Here’s an example of a hypothesis an agent could use to try to predict the next color field. I’ll call the imaginary agent ‘Cai’. Any reasoner will need to begin with some (perhaps provisional) assumptions about the world.2 Cai begins with the belief3 that its environment behaves like a cellular automaton: the world is a grid whose tiles change over time based on a set of stable laws. The laws are local in time and space, meaning that you can perfectly predict a tile’s state based on the states of the tiles next to it a moment prior — if you know which laws are in force.

Cai believes that it lives in a closed 3x3 grid where tiles have no diagonal effects. Each tile can occupy one of three states. We might call the states ‘0’, ‘1’, and ‘2’, or, to make visualization easier, ‘white’, ‘black’, and ‘gray’. So, on Cai’s view, the world as it changes looks something like this:

An example of the world’s state at one moment, and its state a moment later.
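Cai’s assumed ontology can be sketched in a few lines of code. This is only an illustration of the locality assumption — a made-up transition rule stands in for the real laws Cai is trying to infer:

```python
# A minimal sketch of Cai's hypothesized world: a closed 3x3 cellular automaton
# with three tile states and no diagonal influences (orthogonal neighbors only).
# The transition rule below is an invented placeholder, not Hypothesis A.

WHITE, BLACK, GRAY = 0, 1, 2

def neighbors(grid, r, c):
    """States of the tiles directly above, below, left, and right (closed grid)."""
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            out.append(grid[nr][nc])
    return out

def step(grid, rule):
    """Synchronous update: each tile's next state depends only on itself and
    its orthogonal neighbors, as Cai's locality assumption requires."""
    return [[rule(grid[r][c], neighbors(grid, r, c)) for c in range(3)]
            for r in range(3)]

# Illustrative rule only: a tile goes black if any neighbor is gray.
example_rule = lambda state, ns: BLACK if GRAY in ns else state

world = [[WHITE, WHITE, GRAY],
         [WHITE, BLACK, WHITE],
         [BLACK, BLACK, BLACK]]
print(step(world, example_rule))   # → [[0, 1, 2], [0, 1, 1], [1, 1, 1]]
```

A real physical hypothesis, like the rules Cai entertains later in the post, would slot in where `example_rule` sits; the point is just that locality makes the world’s dynamics a function of each tile’s orthogonal neighborhood.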

Cai also has beliefs about its own location in the cellular automaton. Cai believes that it is a black tile at the center of the grid. Since there are no diagonal laws of physics in this world, Cai can only directly interact with the four tiles directly above, below, to the left, and to the right. As such, any perceptual data Cai acquires will need to come from those four tiles; anything else about Cai’s universe will be known only by inference.

Cai perceives stimuli in four directions. Unobservable tiles fall outside the cross.

How does all this bear on the color-predicting problem? Cai hypothesizes that the sequence of colors is sensory — it’s an experience within Cai, triggered by environmental changes. Cai conjectures that since its visual field arrives as groups of four colored patches, the quadrants of its visual field probably represent its four adjacent tiles. The leftmost color comes from a southern stimulus, the next one to the right from a western stimulus, then a northern one, then an eastern one. And the south, west, north, east cycle repeats again and again.

Cai’s visual experiences break down into quadrants, corresponding to four directions.
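Cai’s conjectured parsing of the color stream can be written out directly, assuming the south-west-north-east cycle it hypothesizes:

```python
# A sketch of Cai's perceptual parsing: the color stream is read as a repeating
# south, west, north, east cycle, one color per adjacent tile. The pairing of
# stream position with direction is Cai's conjecture, not a given.

stream = "CYYMCYYYCYCMCYYY"          # C = cyan, Y = yellow, M = magenta
directions = ("S", "W", "N", "E")    # the hypothesized cycle

def decode(stream):
    """Group the stream into per-moment snapshots of the four adjacent tiles."""
    return [dict(zip(directions, stream[i:i + 4]))
            for i in range(0, len(stream), 4)]

snapshots = decode(stream)
print(snapshots[0])   # → {'S': 'C', 'W': 'Y', 'N': 'Y', 'E': 'M'}
```

Under this parsing, the sixteen observed colors become four successive snapshots of the tiles surrounding Cai — exactly the kind of structured data a physical hypothesis can get a grip on.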

On this model, the way Cai’s senses organize the data isn’t wholly veridical; the four patches of color aren’t perfectly shaped like Cai’s environment. But the organization of Cai’s sensory apparatus and the organization of the world around Cai are similar enough that Cai can reconstruct many features of its world.

By linking its visual patterns to patterns of changing tiles, Cai can hypothesize laws that guide the world’s changes and explain Cai’s sensory experiences. Here’s one possibility, Hypothesis A:

  • Black corresponds to cyan, white to yellow, and gray to magenta.

  • At present, the top two rows are white and the bottom row is black, except for the upper-right tile (which is gray) and Cai itself, a black middle tile.

  • Adjacent gray and white tiles exchange shades. Exception: When a white tile is pinned by a white and gray tile on either side, it turns black.

  • Black tiles pinned by white ones on either side turn white. Exception: When the black tile is adjacent to a third white tile, it remains black.

Hypothesis A’s physical content. On the left: Cai’s belief about the world’s present state. On the right: Cai’s belief about the rules by which the world changes over time. The rules are symmetric under rotation and reflection.

Bridging stimulus and experience

So that’s one way of modeling Cai’s world; and it will yield a prediction about the cellular automaton’s next state, and therefore about Cai’s next visual experience. It will also yield retrodictions of the cellular automaton’s state during Cai’s three past sensory experiences.

Hypothesis A asserts that tiles below Cai, to Cai’s left, above, and to Cai’s right relate to Cai’s color experiences via the rule {black ↔ cyan, white ↔ yellow, gray ↔ magenta}. Corner tiles, and future world-states and experiences, can be inferred from Hypothesis A’s cell transition rules.
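Hypothesis A’s perception-to-environment link is simple enough to write out as a lookup table. The dictionary encoding here is mine; the correspondences are Hypothesis A’s:

```python
# Hypothesis A's bridge rule as an explicit, invertible mapping between
# hypothesized tile shades and perceived colors.

bridge_a = {"black": "cyan", "white": "yellow", "gray": "magenta"}
inverse_a = {color: shade for shade, color in bridge_a.items()}

# Retrodicting the four adjacent tile states from one four-color percept:
percept = {"S": "cyan", "W": "yellow", "N": "yellow", "E": "magenta"}
tiles = {direction: inverse_a[color] for direction, color in percept.items()}
print(tiles)   # → {'S': 'black', 'W': 'white', 'N': 'white', 'E': 'gray'}
```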

Are there other, similar hypotheses that can explain the same data? Here’s one, Hypothesis B:

  • Normally, the correspondences between experienced colors and neighboring tile states are {black ↔ cyan, white ↔ yellow, gray ↔ magenta}, as in Hypothesis A. But northern grays are perceived as though they were black, helping explain irregularities in the distribution of cyan.

  • Hypothesis B’s cellular automaton presently looks similar to Hypothesis A’s, but with a gray tile in the upper-left corner.

  • Adjacent gray and white tiles exchange shades. Nothing else changes.

The added complexity in the perception-to-environment link allows Hypothesis B to do away with most of the complexity in Hypothesis A’s physical laws. Breaking down Hypotheses A and B into their respective physical and perception-to-environment components makes it more obvious how the two differ:

A has the simpler bridge hypothesis, while B has the simpler physical hypothesis.

Though they have a lot in common, and both account for Cai’s experiences to date, these two hypotheses diverge substantially in the cellular automaton states and future experiences they predict:

The two hypotheses infer different distributions and dynamical rules for the tile shades from the same perceptual data. These worldly differences then diverge in the future experiences they predict.

Hypotheses linking observations to theorized entities appear to be quite different from hypotheses that just describe the theorized entities in their own right. In Cai’s case, the latter hypotheses look like pictures of physical worlds, while the former are ties between different kinds of representation. But in both cases it’s useful to treat these processes in humans or machines as beliefs, since they can be assigned weights of expectation and updated.

‘Phenomenology’ is a general term for an agent’s models of its own introspected experiences. As such, we can call these hypotheses linking experienced data to theorized processes phenomenological bridge hypotheses. Or just ‘bridge hypotheses’, for short.

If we want to build an agent that tries to evaluate the accuracy of a model based on the accuracy of its predictions, we need some scheme to compare thingies in the model (like tiles) and thingies in the sensory stream (like colors). Thus a bridge rule appears to be necessary to talk about induction over models of the world. And bridge hypotheses are just bridge rules treated as probabilistic, updatable beliefs.
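Treating a bridge rule as an updatable belief can be pictured as a toy Bayesian update over candidate bridges, holding the physical model fixed. The probabilities below are invented for illustration, and bridge “B” is a direction-blind stand-in for Hypothesis B’s “northern grays look black, i.e., cyan”:

```python
# Toy sketch: bridge rules as probabilistic beliefs. Each candidate bridge
# maps a hypothesized tile shade to a distribution over perceived colors;
# we update the bridges' weights on an observed percept. Numbers are invented.

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

bridges = {
    "A": {"gray": {"magenta": 1.0}},               # gray always looks magenta
    "B": {"gray": {"magenta": 0.5, "cyan": 0.5}},  # grays sometimes look cyan
}
prior = {"A": 0.5, "B": 0.5}

def update(prior, tile, percept):
    """Bayes: posterior(bridge) ∝ prior(bridge) * P(percept | tile, bridge)."""
    return normalize({name: prior[name] * bridges[name][tile].get(percept, 0.0)
                      for name in prior})

posterior = update(prior, "gray", "cyan")
print(posterior)   # → {'A': 0.0, 'B': 1.0}
```

A percept that bridge A assigns zero probability drives all the weight onto bridge B — the same inference machinery that handles physical hypotheses handles bridge hypotheses.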

As the last figure above illustrates, bridge hypotheses can make a big difference for one’s scientific beliefs and expectations. And bridge hypotheses aren’t a free lunch; it would be a mistake to shunt all complexity onto them in order to simplify your physical hypotheses. Allow your bridge hypotheses to get too complicated, and you’ll be able to justify mad world-models, e.g., ones where the universe consists of a single apricot whose individual atoms each get a separate bridge to some complex experience. At the same time, if you demand too much simplicity from your bridge hypotheses, you’ll end up concluding that the physical world consists of a series of objects shaped just like your mental states. That way you can get away with a comically simple bridge rule like {exists(x) ↔ experiences(y,x)}.
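The tradeoff can be made vivid with an MDL-flavored toy score that charges each hypothesis for the combined description length of its physics and its bridge. The byte counts are invented stand-ins for real program lengths:

```python
# Toy sketch of the complexity tradeoff: score each hypothesis by the total
# description length of its physical component plus its bridge component.
# All byte counts are invented for illustration.

hypotheses = {
    "A": {"physics": 120, "bridge": 20},         # complex laws, simple bridge
    "B": {"physics": 60, "bridge": 45},          # simple laws, busier bridge
    "apricot": {"physics": 5, "bridge": 10**6},  # one fruit, a mad bridge
}

def total_complexity(h):
    return h["physics"] + h["bridge"]

best = min(hypotheses, key=lambda name: total_complexity(hypotheses[name]))
print(best)   # prints "B"
```

Scoring only the physical component would have crowned the apricot world; charging both components keeps the bridge from becoming a complexity dumping ground.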

In the absence of further information, it may not be possible to rule out Hypothesis A or Hypothesis B. The takeaway is that tradeoffs between the complexity of bridging hypotheses and the complexity of physical hypotheses do occur, and do matter. Any artificial agent needs some way of formulating good hypotheses of this type in order to be able to understand the universe at all, whether or not it finds itself in doubt after it has done so.

Generalizing bridge rules and data

Reasoners — both human and artificial — don’t begin with perfect knowledge of their own design. When they have working self-models at all, these self-models are fallible. Aristotle thought the brain was an organ for cooling the blood. We had to find out about neurons by opening up the heads of people who looked like us, putting the big corrugated gray organ under a microscope, seeing (with our eyes, our visual cortex, our senses) that the microscope (which we’d previously generalized to show us tiny things as if they were large) revealed this incredibly fine mesh of connected blobs, and realizing, “Hey, I bet this does information processing and that’s what I am! The big gray corrugated organ that’s inside my own head is me!”

The bridge hypotheses in Hypotheses A and B are about linking an agent’s environment-triggered experiences to environmental causes. But in fact bridge hypotheses are more general than that.

1. An agent’s experiences needn’t all have environmental causes. They can be caused by something inside the agent.

2. The cause-effect relation we’re bridging can go the other way. E.g., a bridge hypothesis can link an experienced decision to a behavioral consequence, or to an expected outcome of the behavior.

3. The bridge hypothesis needn’t link causes to effects at all. E.g., it can assert that the agent’s experienced sensations or decisions just are a certain physical state. Or it can assert neutral correlations.

Phenomenological bridge hypotheses, then, can relate theoretical posits to any sort of experiential data. Experiential data are internally evident facts that get compared to hypotheses and cause updates — the kind of data of direct epistemic relevance to individual scientists updating their personal beliefs. Light shines on your retina, gets transduced to neural firings, gets reconstructed in your visual cortex and then — this is the key part — that internal fact gets used to decide what sort of universe you’re probably in.

The data from an AI’s environment is just one of many kinds of information it can use to update its probability distributions. In addition to ordinary sensory content such as vision and smell, update-triggering data could include things like how much RAM is being used. This is because an inner RAM sense can tell you that the universe is such as to include a copy of you with at least that much RAM.
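As a toy illustration of introspective evidence, an agent could read a crude proxy for its own memory footprint and rule out hypothesized worlds too small to contain it. The world capacities below are invented:

```python
# Toy sketch: introspective data as evidence about the world. The agent reads
# a crude proxy for its own memory footprint and eliminates world-models that
# lack room for a copy of it. Capacities are invented for illustration.

import sys

world_capacities = {      # hypothetical bytes available for agents in each world
    "tiny-world": 1_000,
    "mid-world": 10**7,
    "big-world": 10**12,
}

my_footprint = sys.getsizeof(list(range(1000)))  # stand-in for an inner RAM sense

consistent = sorted(w for w, cap in world_capacities.items()
                    if cap >= my_footprint)
print(consistent)   # → ['big-world', 'mid-world']
```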

We normally think of science as reliant mainly on sensory faculties, not introspective ones. Arriving at conclusions just by examining your own intuitions and imaginings sounds more like math or philosophy. But for present purposes the distinction isn’t important. What matters is just whether the AGI forms accurate beliefs and makes good decisions. Prototypical scientists may shun introspectionism because humans do a better job of directly apprehending and communicating facts about their environments than facts about their own inner lives, but AGIs can have a very different set of strengths and weaknesses. Although introspection, like sensation, is fallible, introspective self-representations sometimes empirically correlate with world-states.4 And that’s all it takes for them to constitute Bayesian evidence.

Bridging hardware and experience

In my above discussion, all of Cai’s world-models included representations of Cai itself. However, these representations were very simple — no more than a black tile in a specific environment. Since Cai’s own computations are complex, it must be the case that either they are occurring outside the universe depicted (as though Cai is plugged into a cellular automaton Matrix), or the universe depicted is much more complex than Cai thinks.5 Perhaps its model is wildly mistaken, or perhaps the high-level cellular patterns it’s hypothesized arise from other, smaller-scale regularities.

Regardless, Cai’s computations must be embodied in some causal pattern. Cai will eventually need to construct bridge hypotheses between its experiences and their physical substrate if it is to make reliable predictions about its own behavior and about its relationship with its surroundings.

Visualize the epistemic problem that an agent needs to solve. Cai has access to a series of sensory impressions. In principle we could also add introspective data to that. But you’ll still get a series of (presumably time-indexed) facts in some native format of that mind. Those facts very likely won’t be structured exactly like any ontologically basic feature of the universe in which the mind lives. They won’t be a precise position of a Newtonian particle, for example. And even if we were dealing with sense data shaped just like ontologically basic facts, a rational agent could never know for certain that they were ontologically basic, so it would still have to consider hypotheses about even more basic particles.

When humans or AGIs try to match up hypotheses about universes to sensory experiences, there will be a type error. Our representation of the universe will be in hypothetical atoms or quantum fields, while our representation of sensory experiences will be in a native format like ‘red-green’.6 This is where bridge rules like Cai’s color conversions come in — bridges that relate our experiences to environmental stimuli, as well as ones that relate our experiences to the hardware that runs us.

Cai can form physical hypotheses about its own internal state, in addition to ones about its environment. This means it can form bridge hypotheses between its experiences and its own hardware, in addition to ones between its experiences and environment.

If you were an AI, you might be able to decode your red-green visual field into binary data — on-vs.-off — and make very simple hypotheses about how that corresponded to transistors making you up. Once you used a microscope on yourself to see the transistors, you’d see that they had binary states of positive and negative voltage, and all that would be left would be a hypothesis about whether the positive (or negative) voltage corresponded to an introspected 1 (or 0).
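That residual hypothesis is easy to picture as a likelihood comparison: which voltage-to-bit bridge better explains the joint record of microscope readings and introspected bits? The readings below are invented for illustration:

```python
# Sketch: an AI testing two residual bridge hypotheses about its own hardware,
# "positive voltage encodes my introspected 1" vs. "negative voltage encodes 1".
# The voltage readings and introspected bits are invented for illustration.

import math

voltages = ["+", "-", "+", "+", "-"]   # microscope readings of own transistors
introspected = [1, 0, 1, 1, 0]         # simultaneously introspected bits

def log_likelihood(positive_is_one, noise=0.01):
    """Log P(introspected bits | voltage readings, bridge), with a small
    noise rate so neither bridge is ruled out with certainty."""
    total = 0.0
    for v, b in zip(voltages, introspected):
        predicted = 1 if (v == "+") == positive_is_one else 0
        total += math.log(1 - noise if b == predicted else noise)
    return total

print(log_likelihood(True) > log_likelihood(False))   # → True
```

Here the data strongly favor the “positive voltage means 1” bridge, but both bridges remain ordinary uncertain hypotheses, updated rather than stipulated.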

But even then, I don’t quite see how you could do without the bridge rules — there has to be some way to go from internal sensory types to the types featured in your hypotheses about physical laws.

Our sensory experience of red, green, blue is certain neurons firing in the visual cortex, and these neurons are in turn made from atoms. But internally, so far as information processing goes, we just know about the red, the green, the blue. This is what you’d expect an agent made of atoms to feel like from the inside. Our native representation of a pixel field won’t come with a little tag telling us with infallible transparency about the underlying quantum mechanics.

But this means that when we’re done positing a physical universe in all its detail, we also need one last (hopefully simple!) step that connects hypotheses about ‘a brain that processes visual information’ to ‘I see blue’.

One way to avoid worrying about bridge hypotheses would be to instead code the AI to accept bridge axioms, bridge rules with no degrees of freedom and no uncertainty. But the AI’s designers are not in fact infinitely confident about how the AI’s perceptual states emerge from the physical world — that, say, quantum field theory is the One True Answer, and shall be so from now until the end of time. Nor can they transmit infinite rational confidence to the AI merely by making it more stubbornly convinced of the view. If you pretend to know more than you do, the world will still bite back. As an agent in the world, you really do have to think about and test a variety of different uncertain hypotheses about what hardware you’re running on, what kinds of environmental triggers produce such-and-such experiences, and so on. This is particularly true if your hardware is likely to undergo substantial changes over time.

If you don’t allow the AI to form probabilistic, updatable hypotheses about the relation between its phenomenology and the physical world, the AI will either be unable to reason at all, or it will reason its way off a cliff. In my next post, Bridge Collapse, I’ll begin discussing how the latter problem sinks an otherwise extremely promising approach to formalizing ideal AGI reasoning: Solomonoff induction.


1 By ‘map-like’, I mean that the processes look similar to the representational processes in human thought. They systematically correlate with external events, within a pattern-tracking system that can readily propagate and exploit the correlation.

2 Agents need initial assumptions, built-in prior information. The prior is defined by whatever algorithm the reasoner follows in making its very first updates.

If I leave an agent’s priors undefined, no ghost of reasonableness will intervene to give the agent a ‘default’ prior. For example, it won’t default to a uniform prior over possible coinflip outcomes in the absence of relevant evidence. Rather, without something that acts like a prior, the agent just won’t work — in the same way that a calculator won’t work if you grant it the freedom to do math however it wishes. A frequentist AI might refuse to talk about priors, but it would still need to act like it has priors, else break.

3 This talk of ‘belief’ and ‘assumption’ and ‘perception’ is anthropomorphizing, and the analogies to human psychology won’t be perfect. This is important to keep in view, though there’s only so much we can do to avoid vagueness and analogical reasoning when the architecture of AGIs remains unknown. In particular, I’m not assuming that every artificial scientist is particularly intelligent. Or particularly conscious.

What I mean with all this ‘Cai believes...’ talk is that Cai weights predictions and selects actions just as though it believed itself to be in a cellular automaton world. One can treat Cai’s automaton-theoretic model as just a bookkeeping device for assigning Cox’s-theorem-following real numbers to encoded images of color fields. But one can also treat Cai’s model as a psychological expectation, to the extent it functionally resembles the corresponding human mental states. Words like ‘assumption’ and ‘thinks’ here needn’t mean that the agent thinks in the same fashion humans think; what we’re interested in are the broad class of information-processing algorithms that yield similar behaviors.

4 To illustrate: In principle, even a human pining to become a parent could, by introspection alone, infer that they might be an evolved mind (since they are experiencing a desire to self-replicate) and embedded in a universe which had evolved minds with evolutionary histories. An AGI with more reliable internal monitors could learn a great deal about the rest of the universe just by investigating itself.

5 In either case, we shouldn’t be surprised to see Cai failing to fully represent its own inner workings. An agent cannot explicitly represent itself in its totality, since it would then need to represent itself representing itself representing itself … ad infinitum. Environmental phenomena, too, must usually be compressed.

6 One response would be to place the blame on Cai’s positing white, gray, and black for its world-models, rather than sticking with cyan, yellow, and magenta. But there will still be a type error when one tries to compare perceived cyan/yellow/magenta with hypothesized (but perceptually invisible) cyan/yellow/magenta. Explicitly introducing separate words for hypothesized vs. perceived colors doesn’t produce the distinction; it just makes it easier to keep track of a distinction that was already present.