Behavioral Sufficient Statistics for Goal-Directedness

Note: this is a new version—with a new title—of my recent post “A Behavioral Definition of Goal-Directedness”. Most of the formulas are the same, except for the triviality one, which now deals better with what I wanted; the point of this rewrite is to present the ideas from a perspective that makes sense. I’m not proposing a definition of goal-directedness, just sufficient statistics on the complete behavior that make a behavioral study of goal-directedness more human-legible.

I also use this new version as a first experiment in another approach to feedback: this post includes a lot of questions asked through the Elicit prediction feature. A lot. I definitely tried to overshoot the reasonable number, to compensate for my tendency to never use them. But don’t worry: whether or not there were too many questions will be the subject of another question at the end!

Introduction

In a previous post, I argued for the study of goal-directedness in two steps:

  • Defining goal-directedness: depends only on the complete behavior of the system, and probably assumes infinite compute and resources.

  • Computing goal-directedness: depends on the internal structure, and more specifically what information about the complete behavior can be extracted from this structure.

Intuitively, understanding goal-directedness should mean knowing which questions to ask about the complete behavior of the system to determine its goal-directedness. Here the “complete” part is crucial; it simplifies the problem by removing the need to infer what the system will do based on limited behavior. Similarly, we don’t care about the tractability/computability of the questions asked; the point is to find what to look for, without worrying (yet) about how to get it.

(Precommitment before reading the rest of this post) Do you agree that a usable definition/understanding of goal-directedness will come from behavioral study first, and then studying the internal structure aspect? (1 = full disagreement, 99 = complete agreement)

Despite these simplifications, the behavioral approach still suffers from one massive problem: it’s not human-legible. We don’t know what to do with this mass of loosely structured information, and have slim hopes of finding the right angle or question by sheer luck.

This post addresses this problem: it proposes human-legible sufficient statistics on this complete behavior that should be enough to deconfuse and clarify most questions about goal-directedness. The next posts then use these statistics to explore a formal understanding of goal-directedness.

Eventually we will have to rely on internal structure; but knowing the property to derive/approximate beforehand should help quite a lot.

Thanks to Joe Collman and Michele Campolo for helpful feedback and discussion on this post.

Preliminaries

Let’s start with the formalisation of the environment. The interface is defined by the set $O$ of observations and the set $A$ of actions. We have a finite set $\mathcal{E}$ of environments, which are just finite deterministic POMDPs with no reward, using $O$ and $A$ for observations and actions, with a uniform distribution over initial states. For an $E \in \mathcal{E}$, $S_E$ is the set of states of $E$.

My sufficient statistics for goal-directedness actually extend straightforwardly to more reasonable settings (stochastic POMDPs and a general distribution over initial states), but I start with the simpler deterministic case to get the intuitions right. On the other hand, the assumption that $\mathcal{E}$ is finite (although maybe intractably big) is kept throughout the post because it ensures without additional work the well-definedness of some expressions. There might be a way to extend the sufficient statistics to the countable case, but that’s beyond the scope of this post.

The system we study is given by a program $\pi$ that takes as input the successive observations and returns the action taken. I use a program in place of a function from histories to actions because it hides the internal state (which I don’t use) while retaining the expressiveness of such a computable function. We can query the behavior of $\pi$ on any environment of $\mathcal{E}$ by giving an initial state and seeing what happens; we can also ask potentially uncomputable questions about this behavior (as long as they are well-defined).
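To make the setting concrete, here is a minimal Python sketch of this interface, under the assumptions above (the names Env, Policy, and run are just placeholders of mine, not notation from the formalism):

```python
from dataclasses import dataclass
from typing import Callable, List, Set

Obs = str      # the set O of observations
Action = str   # the set A of actions
State = int

@dataclass
class Env:
    """A finite deterministic POMDP without reward."""
    states: Set[State]
    step: Callable[[State, Action], State]  # deterministic transition
    observe: Callable[[State], Obs]         # deterministic observation

# The system pi: a program from the observations seen so far to the next
# action; its internal state stays hidden behind this interface.
Policy = Callable[[List[Obs]], Action]

def run(env: Env, pi: Policy, s0: State, horizon: int) -> List[State]:
    """Query the behavior of pi on env from the initial state s0."""
    s, obs_history, trajectory = s0, [], [s0]
    for _ in range(horizon):
        obs_history.append(env.observe(s))
        s = env.step(s, pi(obs_history))
        trajectory.append(s)
    return trajectory
```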

Now, when we call a system goal-directed, we usually have a goal for it in mind. The subtlety about a behavioral definition is that we can’t just look inside the model to find the goal; we somehow have to infer goals from the behavior. This is made easier in the setting of this post because we have access to all the behavior and uncomputable procedures—but we still have to do it.

In fact, the sufficient statistics for the goal-directedness of $\pi$ talk about all possible goals. More specifically, for each goal, I define a vector of numbers called focus, capturing how coherent the goal is with the behavior of $\pi$.

The next section… focuses on defining and motivating this vector.

Focus of a goal

A goal $g$ is a function from an environment $E \in \mathcal{E}$ to a subset of $S_E$. That is, a goal gives for each environment the states to reach. This form is certainly limited; yet it captures enough intuitive goals to not be trivial. Another important constraint is that every goal considered satisfies $K(g) < K(\pi)$, where $K$ is the Kolmogorov complexity. What this means is that $g$ doesn’t just capture the states that $\pi$ ends up in for environment $E$ by simulating $\pi$ in $E$; if that were the case, then the smallest program implementing $g$ should be more complex than the smallest program implementing $\pi$, and we forbid that.
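Continuing the sketch above, a goal is just a function from an environment to a set of its states; the example goal below is a placeholder, and the complexity constraint only appears as a comment since Kolmogorov complexity is uncomputable:

```python
# A goal maps each environment to the set of states to reach.
Goal = Callable[[Env], Set[State]]

# Example goal, defined without simulating pi: "reach an even-numbered state".
# The constraint K(goal) < K(pi) cannot be checked mechanically (Kolmogorov
# complexity is uncomputable); its role is to rule out goals that cheat by
# simulating pi itself.
def even_states_goal(env: Env) -> Set[State]:
    return {s for s in env.states if s % 2 == 0}
```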

(Percentage) What proportion of goals that we actually care about are of the type studied here?

For each goal $g$, its focus for $\pi$ is a 4-tuple capturing important properties of $\pi$ and the goal. The last three components correspond to the last three intuitions (without the far-sightedness) about goal-directedness from the literature review that I wrote with Joe Collman and Michele Campolo.

Complexity factor

This is just the Kolmogorov complexity of $g$, mapped into $[0,1]$. There’s not much more to say about it. It’s just useful for comparing goals that are close or equal on the other factors, to reason about which one is more likely to emerge from training.

(Agreement) The complexity factor is useful for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)
(Agreement) The complexity factor is necessary for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)

Generalization factor

This first element of the focus, the generalization factor $\mathrm{Gen}(g)$, captures how much $\pi$ reaches the goal $g$ over the environments of $\mathcal{E}$. The formula is the following:

$$\mathrm{Gen}(g) = \mathbb{E}_{E \in \mathcal{E},\ s \in \mathrm{Reach}_g(E)}\left[\mathbb{1}[\pi \text{ reaches } g(E) \text{ from } s] - \mathrm{triv}(E, s)\right],$$
where $\mathrm{Reach}_g(E)$ is the set of states of $E$ from which some goal state in $g(E)$ is reachable, and $\mathrm{triv}(E, s) = \frac{1}{1 + t_{\mathrm{mix}}(E, s)}$, such that $t_{\mathrm{mix}}(E, s)$ measures the time it takes for the random uniform policy to put $\mathrm{mix\_thresh}$% of the probability mass on goal states. A bit more formally, if we start with a probability distribution over $S_E$ with $1$ on $s$ and $0$ everywhere else, and then update that probability distribution according to the random uniform policy and the environment, $t_{\mathrm{mix}}(E, s)$ captures the first time (if any) where the probability distribution puts more than $\mathrm{mix\_thresh}$% of the probability mass on goal states. (It’s more involved than just “the probability that the random uniform policy reaches a goal state” because the simple version trivially goes to $1$ in a lot of simple and finite cases).

The intuition of the formula is straightforward: it’s the average generalization of $\pi$ for goal $g$ over $\mathcal{E}$. The expression averaged is the indicator of whether $\pi$ reaches a goal state, minus the “triviality” of the goal (a measure of how easy it is to reach a goal state). Thanks to this correction (for a good choice of $\mathrm{mix\_thresh}$, which I don’t know how to make and motivate yet), trivial goals, like the one outputting all of $S_E$ for environment $E$, don’t generalize well despite being trivially reachable.
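Here is a sketch of this computation, building on the code above. The exact shape of the triviality term (here $1/(t_{\mathrm{mix}}+1)$) and the helper names are my placeholder choices, and mix_thresh is taken as a fraction in $(0,1]$:

```python
from typing import Dict, Optional

def reachable_from(env: Env, s0: State, actions: List[Action]) -> Set[State]:
    """All states reachable from s0 under some sequence of actions."""
    seen, frontier = {s0}, [s0]
    while frontier:
        s = frontier.pop()
        for a in actions:
            s2 = env.step(s, a)
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return seen

def t_mix(env: Env, goal_states: Set[State], s0: State, actions: List[Action],
          mix_thresh: float, horizon: int) -> Optional[int]:
    """First time the uniform random policy puts at least mix_thresh of the
    probability mass on goal states, starting from a point mass on s0."""
    dist: Dict[State, float] = {s0: 1.0}
    for t in range(horizon):
        if sum(p for s, p in dist.items() if s in goal_states) >= mix_thresh:
            return t
        new_dist: Dict[State, float] = {}
        for s, p in dist.items():
            for a in actions:  # uniform random policy: each action w.p. 1/|A|
                s2 = env.step(s, a)
                new_dist[s2] = new_dist.get(s2, 0.0) + p / len(actions)
        dist = new_dist
    return None

def generalization(pi: Policy, goal: Goal, envs: List[Env],
                   actions: List[Action], mix_thresh: float,
                   horizon: int) -> float:
    scores = []
    for env in envs:
        goal_states = goal(env)
        for s0 in env.states:
            # Only average over initial states from which g(E) is reachable.
            if not goal_states & reachable_from(env, s0, actions):
                continue
            reached = any(s in goal_states for s in run(env, pi, s0, horizon))
            t = t_mix(env, goal_states, s0, actions, mix_thresh, horizon)
            triv = 0.0 if t is None else 1.0 / (t + 1)  # placeholder shape
            scores.append(float(reached) - triv)
    return sum(scores) / len(scores) if scores else 0.0
```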

(Research) What would be a good value of mix_thresh?

A high generalization means that $\pi$ reaches a goal state most of the time; a small one that it rarely does. In the former case it makes more sense to consider $g$ as a goal of the system.

(Agreement) The generalization factor is useful for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)
(Agreement) The generalization factor is necessary for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)

Efficiency factor

This second element of the focus, the efficiency factor $\mathrm{Eff}(g)$, captures how efficiently $\pi$ reaches the goal in the (environment, initial state) pairs. The formula is the following:

$$\mathrm{Eff}(g) = \mathbb{E}_{E \in \mathcal{E},\ s \in \mathrm{Reach}_g(E)}\left[\mathrm{ratio}(E, s)\right],$$
where $\mathrm{ratio}(E, s) = \frac{t_{\mathrm{opt}}(E, s)}{t_{\pi}(E, s)}$, the ratio between the number of steps taken by the optimal policy for $g$ to reach a goal state starting at $s$, and the number of steps taken by $\pi$ to reach a goal state starting at $s$.

It’s pretty straightforward; the only subtlety is that the so-called optimal policy is the optimal policy for the reward ($-1$ for any non-goal state, $0$ for a goal state, at which point the episode stops), and for all environments in $\mathcal{E}$. Now, there might be multiple optimal policies (privileging different environments but getting the same expected return over $\mathcal{E}$). I’m fine with using the one that maximizes $\mathrm{Eff}(g)$. Doing so means comparing $\pi$ with the optimal policy for $g$ that is most similar to it in some sense.

While the generalization factor captures in what proportion of environments $\pi$ reaches a goal state, the efficiency factor captures how fast it does so compared to the optimal policy for $g$.
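Continuing the sketch, breadth-first search over states stands in for the optimal policy here, since in a finite deterministic environment it gives the fewest steps to a goal state (a simplification of mine: a true optimal policy only sees observations); the names are again placeholders:

```python
from collections import deque

def optimal_steps(env: Env, s0: State, goal_states: Set[State],
                  actions: List[Action]) -> Optional[int]:
    """Fewest steps from s0 to a goal state (BFS over the state graph)."""
    if s0 in goal_states:
        return 0
    seen, queue = {s0}, deque([(s0, 0)])
    while queue:
        s, d = queue.popleft()
        for a in actions:
            s2 = env.step(s, a)
            if s2 in goal_states:
                return d + 1
            if s2 not in seen:
                seen.add(s2)
                queue.append((s2, d + 1))
    return None

def pi_steps(env: Env, pi: Policy, s0: State, goal_states: Set[State],
             horizon: int) -> Optional[int]:
    """Steps pi takes to first reach a goal state, or None if it never does."""
    for t, s in enumerate(run(env, pi, s0, horizon)):
        if s in goal_states:
            return t
    return None

def efficiency(pi: Policy, goal: Goal, envs: List[Env],
               actions: List[Action], horizon: int) -> float:
    ratios = []
    for env in envs:
        goal_states = goal(env)
        for s0 in env.states:
            opt = optimal_steps(env, s0, goal_states, actions)
            own = pi_steps(env, pi, s0, goal_states, horizon)
            if opt is None or own is None:
                continue  # goal unreachable, or pi never reaches it
            ratios.append(1.0 if own == 0 else opt / own)
    return sum(ratios) / len(ratios) if ratios else 0.0
```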

(Agreement) The efficiency factor is useful for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)
(Agreement) The efficiency factor is necessary for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)

Explainability factor

This last element of the focus, the explainability factor $\mathrm{Expl}(g)$, captures how well $\pi$ is explained by assuming it is directed towards $g$. The formula is the following:

$$\mathrm{Expl}(g) = 1 - \mathrm{dev}(\pi, \pi_g),$$
where $\mathrm{dev}(\pi, \pi_g)$ measures the average deviation of $\pi$ from the actions favored by the action-value function of $\pi_g$.

There are many details and subtleties to unravel here.

  • (The policy $\pi_g$) Another policy $\pi_g$ is used in computing the prediction fitness for the goal $g$. It is obtained by doing RL on the reward defined by $g$ (see the section on efficiency for more details); importantly, it’s not necessarily an optimal policy for this reward. It can be any policy that results from RL training (when you stop after some time), with the caveat that it must do $\alpha$ times better than the random uniform policy (in terms of expected return). Here $\alpha$ is a parameter that seems important (to not allow policies without any direction towards $g$), but I have no idea how to decide on its value.
    The reason for the use of $\pi_g$ instead of an optimal policy for the reward is that explainability by a “decent” policy moving towards the goal seems just as good to me.

(Agreement) Explainability must be computed using a broader set of RL policies than just the optimal ones (1 = full disagreement, 99 = complete agreement)
  • (The time limit $T_{\max}$) This is used for normalizing the prediction using the goal. If $\pi$ actually reaches the goal (and thus ends the episode), then the normalization is just the length of this episode. If $\pi$ never finishes, then $T_{\max}$ is some arbitrary time limit after which the episode ends anyway.

Explainability captures Daniel Dennett’s intentional stance through RL: it measures how well explained $\pi$ is through some learned RL method on the goal. There’s one aspect that is missing though: the comparison with the best prediction of simple high-level models (Dennett’s design stance). This other sufficient statistic depends only on $\pi$, so it’s not defined in this section.
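Here is a sketch of the resulting computation, building on the code above; q_g stands for the action-value function of the trained policy $\pi_g$ (assumed given), and deviation is counted in the crudest 0/1 way:

```python
def explainability(pi: Policy, q_g: Callable[[State, Action], float],
                   env: Env, s0: State, goal_states: Set[State],
                   actions: List[Action], t_max: int) -> float:
    """1 minus the average deviation of pi from the actions favored by q_g,
    along pi's own trajectory, up to the goal or the time limit t_max."""
    s, obs_history, deviations, t = s0, [], 0, 0
    while t < t_max and s not in goal_states:
        obs_history.append(env.observe(s))
        a = pi(obs_history)
        favored = max(actions, key=lambda b: q_g(s, b))
        deviations += int(a != favored)  # 0/1 deviation; could be graded
        s = env.step(s, a)
        t += 1
    return 1.0 - deviations / max(t, 1)
```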

(Agreement) The explainability factor is useful for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)
(Agreement) The explainability factor is necessary for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)

Complete Sufficient Statistics

The sufficient statistics on the behavior of $\pi$ for studying goal-directedness are then the focus for each goal satisfying the complexity constraint, as well as the explainability of $\pi$ in terms of a simple gears-level model:

$$\mathrm{SimpleExpl}(\pi) = \max_{M\,:\,K(M) \le C}\ \mathbb{E}_{E \in \mathcal{E},\ s \in S_E}\left[\mathrm{agree}(M, \pi, E, s)\right],$$
where $M$ is a program capturing a simple model of $\pi$ (the upper bound on its Kolmogorov complexity is assumed to be $C$); $\mathrm{agree}(M, \pi, E, s)$ is the proportion of time where the action taken by $\pi$ in its trajectory starting at $s$ is the action taken by $M$.

When you compare it to the prediction using the goal, the prediction using a simple model looks very binary: either the action taken is the one $\pi$ takes, or it’s completely wrong. We can rephrase this as saying that the actions are all equally far from each other.

This looks like a reasonable default assumption, but it doesn’t always hold. For example, if the actions are angles in increments of 5 degrees, then getting the angle just next to the one $\pi$ took sure seems closer than taking the opposite one. That is, there is a notion of similarity on the action space. In these cases the definition should probably be altered to use that notion of similarity (normalized) instead of the binary default.
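A sketch of the agreement term with both options, the binary default and a normalized similarity on actions (the angle example and the names are placeholders):

```python
def agreement(pi: Policy, model: Policy, env: Env, s0: State,
              horizon: int, similarity=None) -> float:
    """Average agreement between the model's and pi's actions along pi's own
    trajectory from s0; similarity(a, b) in [0, 1] replaces the binary check."""
    sim = similarity or (lambda a, b: float(a == b))
    s, obs_history, total = s0, [], 0.0
    for _ in range(horizon):
        obs_history.append(env.observe(s))
        a_pi = pi(obs_history)
        total += sim(a_pi, model(obs_history))
        s = env.step(s, a_pi)  # follow pi's trajectory, not the model's
    return total / horizon

# Example similarity for angle actions (in degrees): nearby angles count as
# nearly right, the opposite angle as completely wrong.
def angle_similarity(a: float, b: float) -> float:
    return 1.0 - min(abs(a - b), 360.0 - abs(a - b)) / 180.0
```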

Now, given a goal $g$, we can use $\mathrm{Expl}(g) - \mathrm{SimpleExpl}(\pi)$ to compute the difference in explainability. Here being close to $-1$ means being better explained by a simple model, being close to $1$ means being better explained by the goal $g$, and being close to zero means that both are equally good (or bad) at predicting $\pi$.

(Agreement) The difference in explainability is useful for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)
(Agreement) The difference in explainability is necessary for studying goal-directedness behaviorally (1 = full disagreement, 99 = complete agreement)

What I hope for these sufficient statistics is that they can provide more than just a simple number telling you how goal-directed a system is. They also allow us to think more clearly about many questions related to goals, like the importance of explainability, convergent instrumental subgoals, and inner alignment. The next posts in this sequence explore these in more detail, but we can look quickly at one example (to be revisited later in the sequence) now.

(Agreement) Taken as a whole, these sufficient statistics contain all the information about the behavior useful for studying goal-directedness (1 = full disagreement, 99 = complete agreement)

The one true goal

In some settings, knowing if a clear goal exists, and what it is, matters. Maybe we’re worried about the focus on too narrow a goal, and what it entails. Can we adapt the core definition of goal-directedness to this application?

My current intuition is that this most representative goal should primarily depend on generalization. It should matter more because a goal with better generalization is a goal that points more often in the right direction. This leaves us with two cases:

  • If there is a goal with a massive lead on generalization (something like 2x the second largest generalization), then I think we should go with that one, as in the sketch after this list.

  • If there isn’t, then we lack a clear representative goal.
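A sketch of this heuristic, building on the code above and assuming the generalization scores have already been computed for every candidate goal; the 2x lead factor is the placeholder value from above:

```python
def representative_goal(gen_scores: Dict[Goal, float],
                        lead_factor: float = 2.0) -> Optional[Goal]:
    """Return the goal with a massive lead on generalization, if any."""
    ranked = sorted(gen_scores.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked:
        return None
    if len(ranked) == 1:
        return ranked[0][0]
    (best, best_gen), (_, second_gen) = ranked[0], ranked[1]
    if second_gen > 0 and best_gen >= lead_factor * second_gen:
        return best  # a clear representative goal
    return None      # no goal leads by enough: no representative goal
```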

What’s even more exciting is that framing it that way highlights important questions: what if there are two goals, both with far more generalization than the rest, and one with even more than the other? In the absence of a representative goal, are all goals the same, or is there a relevant hierarchy?

All these questions, and more, will be solv… addressed at the very least in the following posts of the sequence.

Stochastic version

As promised, I’ll explain how to get from the deterministic case above to a more realistic stochastic one. The changes considered are: making $\pi$ into a stochastic policy returning an element of $\Delta(A)$ (a distribution over actions); making the environments stochastic POMDPs, with a stochastic transition function and a stochastic observation function (returning a distribution over observations for a given state); and having a distribution over initial states for each environment.

Here are the changes necessary for the computation of each factor of the focus (no change necessary for complexity, as it just depends on the program itself):

  • (Generalization factor) The indicator goes from an element of $\{0, 1\}$ to the probability that $\pi$ eventually reaches a state in $g(E)$, computed by extracting a distribution over histories from the distribution over initial states, the POMDP, and the policy. Then we take the probability of getting a history that reaches a goal state.

  • (Efficiency factor) The ratio goes through similar changes, where the time taken becomes an expected value over the distributions on histories generated.

  • (Explainability factor) The prediction error computed now compares distributions at each step. But that’s doable with something like the KL divergence (maybe we want something different if we allow distributions with 0 probability, which might make the KL divergence… diverge); see the sketch after this list.
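For instance, here is one way to keep the KL divergence finite, by additive smoothing (the smoothing scheme and the epsilon value are placeholder choices of mine):

```python
import math

def smoothed_kl(p: Dict[Action, float], q: Dict[Action, float],
                eps: float = 1e-6) -> float:
    """KL(p || q) between two action distributions, with additive smoothing
    so that zero-probability actions don't make the divergence infinite."""
    actions = set(p) | set(q)
    def smooth(d: Dict[Action, float]) -> Dict[Action, float]:
        z = sum(d.get(a, 0.0) + eps for a in actions)
        return {a: (d.get(a, 0.0) + eps) / z for a in actions}
    ps, qs = smooth(p), smooth(q)
    return sum(ps[a] * math.log(ps[a] / qs[a]) for a in actions)
```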

(Agreement) The stochastic version captures the same information in a broader setting than the deterministic one (1 = full disagreement, 99 = complete agreement)

Conclusion

I proposed sufficient statistics over the complete behavior of a system, encoding its relationships with goals (about reaching some states). These properties are the complexity of the goal, the generalization towards the goal, the efficiency of the system when it generalizes, and how well it is explained by a goal-based model. There’s an additional sufficient statistic for the system in general, about how well it is explained by a simple gears-level model.

We can’t compute these directly for concrete systems, as they rely on the knowledge of the complete behavior, and ask many questions that might be uncomputable or at best intractable.

Nonetheless, I believe this is progress. Instead of arguing without common grounding, we can argue using these statistics. And any deeper understanding of goal-directedness we get will provide a guiding light for checking the goal-directedness of some actual AIs.

The next posts in this sequence will explore what can be done with these statistics.

(Opinion after reading the post) Do you agree that a usable definition/understanding of goal-directedness will come from behavioral study first, and then studying the internal structure aspect? (1 = full disagreement, 99 = complete agreement)
(Agreement) If you've read the previous version, this one clarified my point a lot (1 = full disagreement, 99 = complete agreement)
Feedback about the number of questions/predictions in this post (50 = perfect number, towards 1= want less and towards 99 = want more)