In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy

Summary: I define memoryless Cartesian environments (which can model many familiar decision problems), note the similarity to memoryless POMDPs, and define a local optimality condition for policies, which can be roughly stated as “the policy is consistent with maximizing expected utility using CDT and subjective probabilities derived from SIA”. I show that this local optimality condition is necessary but not sufficient for global optimality (UDT).


Memoryless Cartesian environments

I’ll define a memoryless Cartesian environment to consist of:

  1. a set of states $\mathcal{S}$

  2. a set of actions $\mathcal{A}$

  3. a set of observations $\mathcal{O}$

  4. an initial state $s_0 \in \mathcal{S}$

  5. a transition function $T : \mathcal{S} \times \mathcal{A} \to \Delta\mathcal{S}$, determining the distribution over states resulting from starting in a given state and taking a certain action

  6. an observation function $o : \mathcal{S} \to \mathcal{O}$, determining what the agent sees in a given state

  7. a set of terminal states $\mathcal{S}_{\mathrm{term}} \subseteq \mathcal{S}$. If the environment reaches a terminal state, the game ends.

  8. a utility function $U : \mathcal{S}_{\mathrm{term}} \to \mathbb{R}$, measuring the value of each terminal state.

On each iteration, the agent receives an observation and takes an action. Unlike in a POMDP, the agent has no memory of previous observations: the agent’s policy must depend only on the current observation. That is, the policy has type $\pi : \mathcal{O} \to \Delta\mathcal{A}$. In this analysis I’ll assume that, for any state and policy, the expected number of iterations in the Cartesian environment starting from that state and using that policy is finite.
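To make this concrete, here is a minimal Python sketch of the data above for finite state, action, and observation sets (the names `MemorylessEnv`, `Dist`, and `Policy` are mine, not anything standard):

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, Set

State = Hashable
Action = Hashable
Obs = Hashable
Dist = Dict  # a finite distribution, represented as {outcome: probability}

@dataclass
class MemorylessEnv:
    states: Set[State]
    actions: Set[Action]
    observations: Set[Obs]
    initial_state: State
    transition: Callable[[State, Action], Dist]  # T(s, a): distribution over next states
    observe: Callable[[State], Obs]              # o(s): what the agent sees in state s
    terminal: Set[State]                         # reaching one of these ends the game
    utility: Callable[[State], float]            # defined on terminal states

# A memoryless policy maps the current observation to a distribution over actions.
Policy = Callable[[Obs], Dist]
```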

Memoryless Cartesian environments can be used to define many familiar decision problems: for example, the absent-minded driver problem, Newcomb’s problem with opaque or transparent boxes (assuming Omega runs a copy of the agent to make its prediction), and counterfactual mugging (again assuming Omega simulates the agent). Translating a decision problem into a memoryless Cartesian environment obviously requires making some Cartesian assumptions/decisions, though; in the case of Newcomb’s problem, we have to treat Omega’s simulation of the agent as a copy of the agent.
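For instance, here is one way the absent-minded driver might be encoded in this framework, using the standard payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both); the state names are mine:

```python
# The absent-minded driver as a memoryless Cartesian environment (a sketch).
# Both intersections produce the same observation, so the policy cannot tell
# them apart -- this is exactly the memoryless restriction.

states = {"X", "Y", "exit_A", "exit_B", "home_C"}  # X, Y: first and second intersection
actions = {"EXIT", "CONTINUE"}
terminal = {"exit_A", "exit_B", "home_C"}
initial_state = "X"

def transition(s, a):
    if s == "X":
        return {"exit_A": 1.0} if a == "EXIT" else {"Y": 1.0}
    if s == "Y":
        return {"exit_B": 1.0} if a == "EXIT" else {"home_C": 1.0}
    raise ValueError("terminal states have no transitions")

def observe(s):
    return "intersection"  # X and Y look identical to the driver

def utility(s):
    return {"exit_A": 0.0, "exit_B": 4.0, "home_C": 1.0}[s]

def driver_policy(p):
    """The memoryless policy 'continue with probability p at any intersection'."""
    return lambda obs: {"CONTINUE": p, "EXIT": 1.0 - p}
```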

Globally and locally optimal policies

Memoryless Cartesian environments are much like memoryless POMDPs, and the following analysis is quite similar to that given in some previous work on memoryless POMDPs: the main difference is that I am targeting (local) optimality given a known world model, while previous work usually targets asymptotic (local) optimality given an unknown world model.

Let us define the expected utility of a particular state, given a policy:

$$V^\pi(s) := \begin{cases} U(s) & \text{if } s \in \mathcal{S}_{\mathrm{term}} \\ \mathbb{E}_{a \sim \pi(o(s)),\, s' \sim T(s, a)}\!\left[V^\pi(s')\right] & \text{otherwise.} \end{cases}$$

Although this definition is recursive, the recursion is well-founded (since the expected number of iterations starting from any particular state is finite). Note that the agent’s initial expected utility is just $V^\pi(s_0)$. Now we can also define a Q function, determining the expected utility of being in a certain state and taking a certain action:

$$Q^\pi(s, a) := \mathbb{E}_{s' \sim T(s, a)}\!\left[V^\pi(s')\right].$$
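Continuing the sketch, these quantities can be computed directly; the naive recursion below assumes the transition graph is acyclic (true for the absent-minded driver, not in general; a cyclic environment would call for solving the corresponding linear system instead):

```python
def V(transition, observe, utility, terminal, policy, s):
    """Expected utility V^pi(s) under the given policy (naive recursion)."""
    if s in terminal:
        return utility(s)
    total = 0.0
    for a, p_a in policy(observe(s)).items():
        for s2, p_s2 in transition(s, a).items():
            total += p_a * p_s2 * V(transition, observe, utility, terminal, policy, s2)
    return total

def Q(transition, observe, utility, terminal, policy, s, a):
    """Q^pi(s, a): expected utility of taking action a in (non-terminal) state s, then following pi."""
    return sum(p_s2 * V(transition, observe, utility, terminal, policy, s2)
               for s2, p_s2 in transition(s, a).items())
```

With the absent-minded driver encoding above and `driver_policy(2/3)`, `V` at the first intersection comes out to 4/3.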

Let $N$ be a random variable indicating the total number of iterations, and let $S_1, \ldots, S_N$ be random variables indicating the state on each iteration. It is now possible to define the frequency of a given state $s$ (i.e. the expected number of times the agent will encounter this state):

$$F^\pi(s) := \mathbb{E}\!\left[\sum_{i=1}^{N} \mathbf{1}[S_i = s]\right].$$

These frequencies are finite, since the expectation of $N$ is finite. Given an observation, the agent may be uncertain which state it is in (since multiple states might result in the same observation). It is possible to use SIA to define subjective state probabilities from these frequencies:

$$p^\pi_{\mathrm{SIA}}(s \mid o) := F^\pi(s)\, \mathbf{1}[o(s) = o].$$

Note that I’ve defined SIA to return an un-normalized probability distribution; this turns out to be useful later, since it naturally handles the case when the observation occurs with probability 0.
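A corresponding sketch for the frequencies and the un-normalized SIA weights, with the same acyclicity caveat as before; I restrict the SIA weights to non-terminal states, since those are the only states in which the agent actually acts:

```python
def expected_visits(transition, observe, terminal, policy, s, target):
    """F^pi(target), restricted to runs starting at s: the expected number of
    times the run visits `target` (naive recursion, assumes acyclicity)."""
    count = 1.0 if s == target else 0.0
    if s in terminal:
        return count
    for a, p_a in policy(observe(s)).items():
        for s2, p_s2 in transition(s, a).items():
            count += p_a * p_s2 * expected_visits(transition, observe, terminal,
                                                  policy, s2, target)
    return count

def sia_weights(transition, observe, terminal, policy, initial_state, states, obs):
    """Un-normalized SIA 'probabilities' for observation obs: the frequency of
    each non-terminal state that produces obs."""
    return {s: expected_visits(transition, observe, terminal, policy, initial_state, s)
            for s in states
            if s not in terminal and observe(s) == obs}
```

For the absent-minded driver with `driver_policy(2/3)`, this gives un-normalized weights of 1 for the first intersection and 2/3 for the second.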

How might an agent decide which action to take? Under one approach (UDT), the agent simply computes the globally optimal policy, i.e. a policy maximizing $V^\pi(s_0)$, and takes the action recommended by this policy (perhaps stochastically). While UDT is philosophically satisfying, it is not a very direct algorithm. It would be nice to have a better intuition for how an agent using UDT acts, such that we could (in some cases) derive a polynomial-time algorithm.
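To illustrate what the global computation involves in the simplest case: in the absent-minded driver problem a memoryless policy is determined by a single number, the probability $p$ of continuing, so UDT’s search reduces to a one-dimensional optimization (a crude brute-force sketch, reusing the payoffs above):

```python
# Brute-force "UDT" for the absent-minded driver: search over the one-parameter
# family of memoryless policies (continue with probability p at any intersection).
def expected_utility(p):
    # V(first intersection) = (1 - p) * 0 + p * [(1 - p) * 4 + p * 1]
    return p * ((1 - p) * 4 + p * 1)

best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))  # ~0.667, ~1.333: continue with probability 2/3
```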

So let’s consider a local optimality condition. Intuitively, the condition states that if the agent has a nonzero probability of taking action $a$ given observation $o$, then $a$ should maximize expected utility (given the agent’s uncertainty about which state it is in). More formally, the local optimality condition states:

$$\forall o \in \mathcal{O},\ \forall a \in \mathcal{A}:\quad \pi(a \mid o) > 0 \implies a \in \operatorname*{arg\,max}_{a' \in \mathcal{A}} \sum_{s} p^\pi_{\mathrm{SIA}}(s \mid o)\, Q^\pi(s, a').$$
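As a numeric sanity check of this condition, here are the SIA weights and Q-values for the absent-minded driver at the globally optimal continue-probability of 2/3 (the numbers come from the sketches above), plugged into the condition:

```python
# SIA weights at p = 2/3: F(X) = 1, F(Y) = 2/3 (X, Y: first and second intersection).
# Q-values at p = 2/3: Q(X, EXIT) = 0, Q(X, CONTINUE) = V(Y) = 2,
#                      Q(Y, EXIT) = 4, Q(Y, CONTINUE) = 1.
sia = {"X": 1.0, "Y": 2.0 / 3.0}
q = {("X", "EXIT"): 0.0, ("X", "CONTINUE"): 2.0,
     ("Y", "EXIT"): 4.0, ("Y", "CONTINUE"): 1.0}

def sia_score(action):
    return sum(w * q[(s, action)] for s, w in sia.items())

print(sia_score("EXIT"), sia_score("CONTINUE"))  # both 8/3
```

Both actions score 8/3, so every action in the policy’s support maximizes the SIA-weighted Q-value and the condition holds; this indifference at the stochastic optimum is just what one would expect.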

Philosophically, a policy is locally optimal iff it is consistent with CDT (using SIA probabilities). This local optimality condition is not sufficient for global optimality (for the same reason that not all Nash equilibria in cooperative games are optimal), but it is necessary. The proof follows.

Global optimality implies local optimality

Let $s$ be a state and $\pi$ be a policy. Consider a perturbation of the policy $\pi$: given observation $o$, the agent will take action $a$ slightly more often and action $b$ slightly less often. Write $\pi_\epsilon$ for the policy which, given observation $o$, takes action $a$ with probability $\pi(a \mid o) + \epsilon$ and action $b$ with probability $\pi(b \mid o) - \epsilon$, and otherwise behaves like $\pi$. This perturbation results in a change in the agent’s expected utility starting from each state:

$$\nabla_{o,a,b}\, V^\pi(s) := \frac{d}{d\epsilon} V^{\pi_\epsilon}(s)\,\Big|_{\epsilon = 0}.$$

This has a natural interpretation: to compute $\nabla_{o,a,b}\, V^\pi(s)$, we simulate a run starting from $s$ using policy $\pi$, sum $Q^\pi(s', a) - Q^\pi(s', b)$ (i.e. how much better $a$ is than $b$ in expectation in state $s'$) over all visited states $s'$ with $o(s') = o$, and take the expected value of this sum.

To determine the optimal policy, we are concerned with $\nabla_{o,a,b}\, V^\pi(s_0)$ for different observations $o$ and actions $a, b$. To compute this, we imagine starting from the state $s_0$ and following policy $\pi$, and sum $Q^\pi(s', a) - Q^\pi(s', b)$ over all visited states $s'$ with $o(s') = o$. This expected sum is actually equivalent to

$$\sum_{s} p^\pi_{\mathrm{SIA}}(s \mid o)\, \bigl(Q^\pi(s, a) - Q^\pi(s, b)\bigr),$$

i.e. the expected value of $Q^\pi(S, a) - Q^\pi(S, b)$, with the state $S$ distributed according to $p^\pi_{\mathrm{SIA}}(\cdot \mid o)$ (up to a multiplicative constant). From here the implication should be clear: if a policy $\pi$ is not locally optimal, then there is some triple $(o, a, b)$ such that a small change in $\pi$ making $b$ more likely and $a$ less likely given observation $o$ will increase expected utility (just set $a$ to a non-optimal action having nonzero probability given $o$, and set $b$ to a better alternative action). So such a policy is not globally optimal either.
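As a quick numerical check of this equivalence, take the absent-minded driver at the (non-optimal) continue-probability $p = 0.5$ and perturb toward CONTINUE and away from EXIT:

```python
# d/dp V(first intersection) should equal sum_s F(s) * (Q(s, CONTINUE) - Q(s, EXIT)).
p = 0.5
V_second = (1 - p) * 4 + p * 1          # value of the second intersection: 2.5
dV_dp = 4 - 6 * p                       # analytic derivative of V(first) = 4p - 3p^2
sia_weighted = 1.0 * (V_second - 0.0) + p * (1.0 - 4.0)  # F(first) = 1, F(second) = p
print(dV_dp, sia_weighted)              # both 1.0
```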

Conclusion

In memoryless Cartesian environments, policies consistent with CDT+SIA are locally optimal in some sense, and all globally optimal (UDT) policies are locally optimal in this sense. Therefore, if we look at (Cartesian) UDT the right way, it’s doing CDT+SIA with some method for making sure the resulting policy is globally optimal rather than just locally optimal. It is not clear how to extend this analysis to non-Cartesian environments where logical updatelessness is important (e.g. agent simulates predictor), but this seems like a useful research avenue.