Intertheoretic utility comparison: outcomes, strategies and utilities

This post seeks to clarify the question of what utility functions should be defined over, in the situation where an agent is uncertain about its own utility function.

Outcomes or worlds

The most intuitively natural thing to define utility functions over is outcomes or worlds. The problem with this approach is that it results in an excessive number of possible utility functions.

For instance, imagine the agent has two actions, $a$ and $b$. The action $a$ maps straight to the world $w_a$, while $b$ maps to a complicated probability distribution over a large set of worlds $W_b$. The agent gets no further actions.

If we see the set of worlds as $W = \{w_a\} \cup W_b$, then we have a vast number of possible utility functions. Whereas in practice only two parts of the utility function matter: its value on $w_a$, and its expected value on $W_b$.

An agent that knows those values (technically, that knows their ordering) for a single utility function can reach a decision. It would therefore seem perverse for this to change when the agent starts to have uncertainty over its utility.

What would make sense is to have utilities defined over expected outcomes, i.e. over $w_a$ and the expectation over $W_b$, but that definition doesn't seem to work in more general situations.
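
As a minimal Python sketch of this point (all numbers here are made up): however many worlds $W_b$ contains, the choice between $a$ and $b$ comes down to comparing just two quantities.

    import random

    random.seed(0)

    # Hypothetical utility function: a value for w_a plus values on 1000 worlds in W_b.
    u = {f"w{i}": random.random() for i in range(1000)}  # values on W_b
    u["w_a"] = 0.7                                       # value on w_a

    # Hypothetical distribution over W_b induced by action b.
    p_b = {w: 1 / 1000 for w in u if w != "w_a"}

    value_a = u["w_a"]                                # E(u|a): just u's value on w_a
    value_b = sum(p * u[w] for w, p in p_b.items())   # E(u|b): expected value on W_b

    print("choose a" if value_a >= value_b else "choose b")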

Histories

Model the agent as interacting $n$ times with the world. It gets an observation $o_1$, follows that with an action $a_1$, gets $o_2$, responds with $a_2$, and so on until it takes its last action $a_n$. Call a sequence of alternating observations and actions $h = o_1 a_1 o_2 a_2 \ldots o_n a_n$ a history, and let $H$ be the set of histories. We can also define partial histories, sequences $h_m = o_1 a_1 \ldots a_{m-1} o_m$ of length $2m-1$.

Note this inverts the usual order, by starting with observations and following with actions. This is because a history ends with an action: since the agent can't affect anything more after that, subsequent observations are meaningless. Partial histories, though, end with observations, not actions (this is for subsequent ease of use).

Any utility function $u$ can be seen as a function from $H$ to $\mathbb{R}$, by mapping the history $h$ to $\mathbb{E}(u \mid h)$, the expected utility of that history.

So a more natural choice would be to define utilities over $H$, the set of possible histories. Note that actions are included in histories, and histories with the same observations but different actions can have different utility values. This is because, e.g., the observation of someone saying 'You succeed in your task!' will have different implications depending on the action taken just prior.

So define $U = \mathbb{R}^H$ as the set of utility functions (not equivalence classes of utility functions) over $H$.

This is better than defining utilities over outcomes, but still somewhat unsatisfactory, as there can be a large number of incredibly unlikely histories, and these should not have much effect on the normalisation procedure.

So what is really wanted is a probability distribution over $H$. But that makes no sense on its own, as we don't know what the actions will be. So we need to look at how the agent chooses its actions: at its set of strategies.

Strategies

The set $S$ of deterministic strategies is the set of ways an agent can pick its actions, given its past history. It would be tempting to define utilities over $S$ directly, but there are subtleties to deal with first.

Isomorphic strategies

Suppose the agent can choose action $a$ or $b$ at the first step, then $c$ or $d$ at the second step. The observations are $o_1$ (before the first action) and $o_2$ (before the second), independently of the actions chosen.

Then consider the following strategies:

  • $s$: $s(o_1) = a$, $s(o_1 a o_2) = c$, $s(o_1 b o_2) = c$.

  • $s'$: $s'(o_1) = a$, $s'(o_1 a o_2) = c$, $s'(o_1 b o_2) = d$.

These strategies seem to differ: they give different behaviours on the partial history $o_1 b o_2$. However, because both choose $a$ after $o_1$, the history $o_1 b o_2$ will never come up. Thus we will define $S$ as consisting of equivalence classes of strategies that will always choose the same action in any situation that actually arises.
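
Here is a small Python sketch of that collapse, using the example's names: the eight strategy function tables reduce to four equivalence classes.

    from itertools import product

    # A full strategy table: a first action, plus a second action for each of
    # the partial histories (o1 a o2) and (o1 b o2).
    tables = list(product("ab", "cd", "cd"))  # 8 tables in all

    def reachable(table):
        """The part of the table that can actually be observed in play."""
        first, after_a, after_b = table
        return (first, after_a if first == "a" else after_b)

    classes = {reachable(t) for t in tables}
    print(len(tables), len(classes))  # 8 function tables, only 4 classes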

Stochastic strategies

We should briefly note that we can form stochastic strategies by setting a probability distribution over $S$ (this defines a simplex $\Delta(S)$ with the 'pure' elements of $S$ as the vertices). This simplex includes all possible stochastic strategies.
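
As a quick illustration (with hypothetical numbers), a stochastic strategy is a point of this simplex, and its expected utility is the matching convex combination of the pure strategies' expectations:

    import numpy as np

    # Hypothetical E(u|s) values for the four pure classes from the example above.
    E_pure = np.array([0.2, 0.9, 0.4, 0.4])

    p = np.array([0.5, 0.25, 0.25, 0.0])  # a point of the simplex over S
    assert p.min() >= 0 and abs(p.sum() - 1) < 1e-12
    print(p @ E_pure)                     # expectation under the mixed strategy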

Utilities and strategies

This section presents a way of connecting strategies and utilities, starting with a map taking utilities over histories to utilities over strategies:

Theorem 1: There is a linear map $\sigma$ from $U$ to $\mathbb{R}^S$. This map need be neither injective nor surjective.

Proof: The set $\mathbb{R}^S$ is naturally identified with the set of functions $S \to \mathbb{R}$.

Each element $s$ of $S$ defines a probability distribution on $H$, and hence an expectation for any utility function $u \in U$. Write this expectation as $\mathbb{E}(u \mid s)$. Then the values of $\mathbb{E}(u \mid s)$ for the different $s$ define the element $\sigma(u) \in \mathbb{R}^S$.

The linearity of $\sigma$ comes from the fact that expectations are linear in the utility function: $\mathbb{E}(\alpha u + \beta v \mid s) = \alpha\,\mathbb{E}(u \mid s) + \beta\,\mathbb{E}(v \mid s)$.

To show that $\sigma$ need not be injective, consider the following situation: the initial observation is $o$ or $o'$, each with probability $1/2$; after $o$ the agent has a choice between actions $a$ and $b$, while after $o'$ its only available action is $c$.

Then there are two strategies (the one that picks $a$ and the one that picks $b$) but three worlds (the histories ending in $a$, $b$, and $c$), so $\sigma$ maps a three-dimensional space into a two-dimensional one, and cannot be injective.

There are utilities in $U$ whose expectation is the same under every strategy; the kernel of $\sigma$ is the subspace of those whose expectation is always $0$.
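
Concretely, writing $\sigma$ as a matrix for this example (a numpy sketch, assuming the setup just described, with histories ordered as those ending in $a$, $b$, $c$):

    import numpy as np

    # E(u|s) = 0.5 * u(chosen history after o) + 0.5 * u(history ending in c).
    sigma = np.array([
        [0.5, 0.0, 0.5],   # the strategy picking a
        [0.0, 0.5, 0.5],   # the strategy picking b
    ])

    print(np.linalg.matrix_rank(sigma))  # 2: a 3d space maps into a 2d one

    # A kernel element: +1 on the histories ending in a and b, -1 on the one
    # ending in c; its expectation is 0 under both strategies.
    u0 = np.array([1.0, 1.0, -1.0])
    print(sigma @ u0)                    # [0. 0.]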

To show that $\sigma$ need not be surjective, add actions $d$ and $e$ (after a further observation) following both $a$ and $c$ above.

Then the set of strategies has six elements (each strategy chooses one of three options on the $o$ branch: $a$ then $d$, $a$ then $e$, or $b$; and one of two options, $d$ or $e$, on the $o'$ branch), while the set of histories has five elements, corresponding to the five possible final actions. Hence $\sigma$ cannot be surjective from the five-dimensional $U$ onto the six-dimensional $\mathbb{R}^S$.
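
The same counting as a numpy sketch (again assuming the reconstruction of the example above):

    import numpy as np

    # Rows: the six strategies (o-branch choice, o'-branch choice).
    # Columns: the five histories, ending in d, e (after a), b, d, e (after c).
    M = np.array([
        [0.5, 0.0, 0.0, 0.5, 0.0],   # (a then d, c then d)
        [0.5, 0.0, 0.0, 0.0, 0.5],   # (a then d, c then e)
        [0.0, 0.5, 0.0, 0.5, 0.0],   # (a then e, c then d)
        [0.0, 0.5, 0.0, 0.0, 0.5],   # (a then e, c then e)
        [0.0, 0.0, 0.5, 0.5, 0.0],   # (b,        c then d)
        [0.0, 0.0, 0.5, 0.0, 0.5],   # (b,        c then e)
    ])
    print(np.linalg.matrix_rank(M))  # 4: the image is a strict subspace of R^6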

Given this $\sigma$, we can consider the true space of utility functions that we're interested in: the image of $U$ under $\sigma$, divided out by the equivalence relations (translations and positive scalings). Thus the space of relevant utility classes is:

  • $\mathcal{U} = \sigma(U)\,/\,(\text{translations and positive scalings})$.

Note that we can translate any element of $\sigma(U)$ so that its coordinates sum to zero, and then divide by positive scalings. This means that $\mathcal{U} \cong S^d \cup \{0\}$, where $S^d$ is a sphere (of dimension two less than that of $\sigma(U)$) and $0$ corresponds to the utilities that are constant across all strategies. The topology of $\mathcal{U}$ is the natural topology on $S^d$, combined with the fact that the only open set containing $\{0\}$ is $\mathcal{U}$ itself.
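
In code, the quotient map is just 'subtract the mean, then normalise'; a minimal sketch:

    import numpy as np

    def utility_class(v, eps=1e-12):
        """Map a vector of E(u|s) values to a representative of its class."""
        v = np.asarray(v, dtype=float)
        v = v - v.mean()                      # quotient by translations
        n = np.linalg.norm(v)
        return v * 0 if n < eps else v / n    # quotient by positive scalings

    print(utility_class([3.0, 1.0, 2.0]))  # a point of the sphere
    print(utility_class([5.0, 5.0, 5.0]))  # constant utility: the point 0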

Stable permutations

We want to be able to interchange strategies and still get the same result: to have some sort of symmetry on the normalisation process. However, since $\sigma(U)$ can be a strict subset of $\mathbb{R}^S$, we can't just take arbitrary permutations of $S$. This section will instead define the subset of stable permutations. Note that any permutation of $S$ extends to an invertible linear transformation of $\mathbb{R}^S$ by permuting the basis elements.

Definition: Stable permutations. A stable permutation is a permutation of $S$ whose extension to a linear map on $\mathbb{R}^S$ preserves $\sigma(U)$.

Since stable permutations are closed under composition and inversion, they form a group; we call this group of stable permutations $\Sigma$.

How common are stable permutations? In general, not too uncommon and not too common, as the following two propositions will show:

Proposition 2: If $S$ is non-trivial, then so is $\Sigma$.

Proof: Consider the set of actions and possible observations as a tree. If $S$ contains more than one element, then there is a branching point where the agent can choose between more than one action. Since the length of the branches is at most $2n$, and hence finite, there exists at least one branching point with no subsequent branching points: after it, the agent has no further decisions to make.

Let $h_m$ be the partial history that corresponds to that final branching point, and let $a$ and $b$ be possible actions at $h_m$. Then a transposition of $a$ and $b$ will also permute any strategies for which $h_m$ is a possible history.

But because $h_m$ is a final branching point, the expected utilities $\mathbb{E}(u \mid h_m a)$ and $\mathbb{E}(u \mid h_m b)$ are defined for any $u \in U$, independently of any strategy. Thus transposing $a$ and $b$ defines a map $\tau$ on $U$, with $\mathbb{E}(\tau(u) \mid h_m a) = \mathbb{E}(u \mid h_m b)$ and $\mathbb{E}(\tau(u) \mid h_m b) = \mathbb{E}(u \mid h_m a)$, and with $\tau(u)$ unchanged on any history that doesn't start with $h_m a$ or $h_m b$.

Hence the permutation of $S$ corresponds to a linear automorphism of $\mathbb{R}^S$ that preserves the image of $U$ under $\sigma$, which means that the permutation is in $\Sigma$.

In general, any tree isomorphism will correspond to an element of $\Sigma$, and one can delete (or add) nodes with a single action before doing such an isomorphism. However, in the general case, the set of stable permutations is not that large:

Proposition 3: The set $\Sigma$ can be a strict subset of the set of permutations of $S$. Moreover, $\Sigma$ need not be transitive, and may even have fixed points: there may exist an $s \in S$ that is mapped to itself by every element of $\Sigma$.

Proof: Consider the situation where the agent first chooses between actions $a$ and $b$, with no choices after $b$; after $a$, it observes $o_1$ or $o_2$ (each with probability $1/2$) and then chooses between actions $c$ and $d$.

Let $s_b$ be the strategy that chooses $b$ at the first step, and let $u_b$ be the utility function that maps the history ending in $b$ to $1$ and everything else to $0$; then $\sigma(u_b)$ takes value $1$ on $s_b$ and $0$ elsewhere. Now assume that some $\pi \in \Sigma$ exchanges $s_b$ with some $s \neq s_b$. This means that $\pi(\sigma(u_b))$ is a function in $\sigma(U)$ that takes value $1$ on a single policy $s$ that chooses $a$ at the first step (as all policies but $s_b$ do), and $0$ everywhere else.

Without loss of generality, let $s$ choose $c$ after both $o_1$ and $o_2$. Since the expectation of any utility decomposes across the two observation branches, changing $s$'s choice after $o_1$ from $c$ to $d$ must reduce the expectation of any $v$ with $\sigma(v) = \pi(\sigma(u_b))$ by $1$ (from $1$ to $0$), and similarly for changing the choice after $o_2$. Making both changes simultaneously must then, by linearity, set the expected utility of the resulting policy to $-1$ rather than $0$. Consequently, $\pi(\sigma(u_b))$ cannot be an element of $\sigma(U)$, and hence $\pi \notin \Sigma$.

Thus every element of $\Sigma$ must map $s_b$ to itself.
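
Both propositions can be checked numerically in this example. The following numpy sketch (assuming the reconstruction above, with $o_1$ and $o_2$ equally likely) tests whether permuting the five strategy coordinates preserves $\sigma(U)$ as a subspace of $\mathbb{R}^S$:

    import numpy as np

    # Strategies: s_b, s_cc, s_cd, s_dc, s_dd (subscripts: choices after o1, o2).
    # Histories:  b, (a o1 c), (a o1 d), (a o2 c), (a o2 d).
    M = np.array([
        [1.0, 0.0, 0.0, 0.0, 0.0],   # E(u|s_b) = u(b)
        [0.0, 0.5, 0.0, 0.5, 0.0],   # E(u|s_cc)
        [0.0, 0.5, 0.0, 0.0, 0.5],   # E(u|s_cd)
        [0.0, 0.0, 0.5, 0.5, 0.0],   # E(u|s_dc)
        [0.0, 0.0, 0.5, 0.0, 0.5],   # E(u|s_dd)
    ])  # sigma(U) is the column span of M: a 4d subspace of R^5

    def is_stable(perm):
        """Does this permutation of strategies preserve sigma(U)?"""
        P = np.eye(5)[list(perm)]
        rank = np.linalg.matrix_rank
        return rank(np.hstack([M, P @ M])) == rank(M)

    print(is_stable([0, 4, 3, 2, 1]))  # swapping c and d everywhere: True
    print(is_stable([1, 0, 2, 3, 4]))  # swapping s_b with s_cc: False

The first permutation comes from the tree isomorphism exchanging $c$ and $d$, as in Proposition 2; the second is exactly the kind of exchange ruled out in the proof above.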

Extra results and concepts

This section presents some miscellaneous extra results and concepts within this formalism.

Utilities and strategies

Each $s \in S$ generates the set $H_s \subseteq H$ of histories that have non-zero probability given $s$.

Lemma 4: If $s \neq s'$, then $H_s \neq H_{s'}$.

Proof: Since $s \neq s'$, there must be some earliest partial history $h_m$, of non-zero probability given both $s$ and $s'$, at which $s$ chooses an action $a$ while $s'$ chooses a different action $b$ (in the extreme case, $h_m$ may be just the initial observation). Thus $H_s$ must contain at least one history starting with $h_m a$ while $H_{s'}$ contains none (and vice versa for $h_m b$).

Proposition 5: Each $s \in S$ has some $u \in U$ for which it is the unique maximising strategy.

Proof: Define $u_s$ as mapping $H_s$ to $1$ and the rest of $H$ to $0$. Then the expectation $\mathbb{E}(u_s \mid s) = 1$, while $\mathbb{E}(u_s \mid s') < 1$ for any $s' \neq s$, since, as in the proof of Lemma 4, $s'$ has a non-zero probability of reaching a history outside $H_s$, a history on which $u_s$ is $0$. The argument extends to mixed strategies, and the relevant utility class is that of $\sigma(u_s)$.
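
In the Proposition 3 example, both Lemma 4 and Proposition 5 are easy to check numerically (a sketch reusing the matrix above):

    import numpy as np

    # The sets H_s for the five strategy classes (Lemma 4: pairwise distinct).
    H_s = {
        "s_b":  {"b"},
        "s_cc": {"a o1 c", "a o2 c"},
        "s_cd": {"a o1 c", "a o2 d"},
        "s_dc": {"a o1 d", "a o2 c"},
        "s_dd": {"a o1 d", "a o2 d"},
    }
    assert len(set(map(frozenset, H_s.values()))) == 5

    # Proposition 5: the indicator utility of H_{s_cd} singles out s_cd.
    M = np.array([
        [1.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.5, 0.0, 0.5, 0.0],
        [0.0, 0.5, 0.0, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5, 0.0],
        [0.0, 0.0, 0.5, 0.0, 0.5],
    ])
    u_cd = np.array([0.0, 1.0, 0.0, 0.0, 1.0])  # 1 on H_{s_cd}, 0 elsewhere
    print(M @ u_cd)  # [0. 0.5 1. 0. 0.5]: s_cd scores 1, all others strictly less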

Partial histories

For any partial history $h_m$ (or $h_m a$, if we want it to end with an action), define $H(h_m)$ (or $H(h_m a)$) as the set of histories that start with $h_m$ (or $h_m a$).

Let $S(h_m)$ be the set of strategies on $H(h_m)$, and let $S_{h_m} \subseteq S$ be the set of strategies that allow the history $h_m$ (that give it non-zero probability). Then there is a natural projection $S_{h_m} \to S(h_m)$, mapping each strategy to its equivalence class of strategies on $H(h_m)$. The same holds for $h_m a$.

Let $S^{h_m}$ be the set of partial strategies, defined as strategies defined on $H \setminus H(h_m)$ that allow $h_m$ to happen with non-zero probability. Then $S_{h_m} \cong S^{h_m} \times S(h_m)$.

Let $U(h_m)$ be the set of utility functions on $H(h_m)$, and let $\sigma$ be the map from Theorem 1. We can define $\sigma_{h_m} : U(h_m) \to \mathbb{R}^{S(h_m)}$ similarly to above, and hence the space of utility classes $\mathcal{U}(h_m) = \sigma_{h_m}(U(h_m))\,/\,(\text{translations and positive scalings})$.

Let $O_{h_m a}$ be the set of possible observations the agent can make just after $h_m a$. Then there is a decomposition $S(h_m a) = \prod_{o \in O_{h_m a}} S(h_m a o)$.

Proposition 6: There is a natural map from $\mathcal{U}$ to $\mathcal{U}(h_m)$.

Proof: For $u \in U$ and $s \in S_{h_m}$, define $\rho(u)(s)$ as $\mathbb{E}(u \mid s)$; via the decomposition $S_{h_m} \cong S^{h_m} \times S(h_m)$, fixing an element of $S^{h_m}$ makes this a function on $S(h_m)$.

Now replace $s$ with another $s'$, also in $S_{h_m}$, and compare, using standard conditional probability and conditional expected utility:

  • $\mathbb{E}(u \mid s) = P(h_m \mid s)\,\mathbb{E}(u \mid h_m, s) + P(\lnot h_m \mid s)\,\mathbb{E}(u \mid \lnot h_m, s)$.

  • $\mathbb{E}(u \mid s') = P(h_m \mid s')\,\mathbb{E}(u \mid h_m, s') + P(\lnot h_m \mid s')\,\mathbb{E}(u \mid \lnot h_m, s')$.

Note that the probability of $h_m$ is independent of the strategy (since any element of $S_{h_m}$ must take exactly the actions of $h_m$ along it), and, since $s$ and $s'$ are deterministic, $P(h_m \mid s) = P(h_m \mid s') = P(h_m)$. Similarly, $\mathbb{E}(u \mid h_m, s)$ and $\mathbb{E}(u \mid \lnot h_m, s)$ depend only on the $S(h_m)$ and $S^{h_m}$ components of $s$, respectively; the same goes for the expressions with $s'$.

Putting this all together gives:

  • $\mathbb{E}(u \mid s) = P(h_m)\,\mathbb{E}(u \mid h_m, s) + c$,

where $c$ is a constant depending only on $u$ and the $S^{h_m}$ component of $s$.

Consequently, the map $\rho(u)$ defined above changes by a constant when the $S^{h_m}$ component of $s$ changes. Similarly, if we change $u$ by changing its definition on $H \setminus H(h_m)$ only, then $\rho(u)$ also changes by a constant, as only $c$ changes.

Thus, up to constants, $\rho$ is a well-defined map from $U$ to $\mathbb{R}^{S(h_m)}$. Since $\rho(u)$ is, up to those constants, $P(h_m)\,\mathbb{E}(u \mid h_m, \cdot)$, it only uses the expectations of $u$ on $H(h_m)$; hence, up to constants and the positive scaling $P(h_m)$, it is $\sigma_{h_m}$ applied to the restriction of $u$, and it factors through $\sigma(U)$.

Then, if we divide by translations (and positive scalings), the constant factors drop out, so $\rho$ descends to a well-defined map from $\mathcal{U}$ to $\mathcal{U}(h_m)$.
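
To make this concrete, here is a sketch in the Proposition 3 example, with $h_m$ the partial history ending in the observation $o_1$ and with hypothetical utility values: changing the strategy off the subtree shifts all subtree expectations by the same constant, which the quotient then removes.

    # Hypothetical utility values on the four histories through action a.
    u = {("o1", "c"): 1.0, ("o1", "d"): -2.0, ("o2", "c"): 0.5, ("o2", "d"): 4.0}

    def E(sub, off):
        """E(u|s) for the strategy choosing `sub` after o1 and `off` after o2."""
        return 0.5 * u[("o1", sub)] + 0.5 * u[("o2", off)]

    for off in "cd":
        print(off, [E(sub, off) for sub in "cd"])
    # Output: c [0.75, -0.75] and d [2.5, 1.0]. The two lines differ by the
    # constant 1.75 across the subtree choices, so after quotienting by
    # translations they define the same class over S(h_m): the restriction of u.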
