Counterfactuals on POMDPs

A putative new idea for AI control; index here.

This is a technical note explaining how to define counterfactuals on partially observable Markov decision processes (POMDPs).

The POMDP formalism is explained here. This note will just sketch out how counterfactuals are defined; the full details will be in the final paper.

Taking another action

Suppose that an agent has been active for $n$ turns on a POMDP $\mu$, and has seen history $h_n$. Then suppose it wants to estimate the counterfactual of what would have happened if it had done other actions after timestep $m$. So what is the counterfactual probability of history $h'_n$, given $h_n$, which can be written as:

  • $P_\mu(h'_n \mid h_n, h'_m = h_m)$.

This rather clunky notation (let me know if there’s a better way of writing this) is trying to estimate the probability of $h'_n$, given that $h_n$ is the history that actually happened, $h'_n$ is the counterfactual history, and both $h_n$ and $h'_n$ start with $h_m$ before diverging.

It might seem surprising that no policy is mentioned in that expression: after all, the probability of a history is given by the environment and the agent’s action choices. But histories like $h'_n$ include action choices, so these don’t need to be specified separately.

The first thing to notice is that $\mu$ and $h_n$ give a probability distribution over $s_m$, the (hidden) state at timestep $m$; note that this uses the whole of $h_n$, since later observations also carry information about $s_m$. And the value of $s_m$ can change the subsequent probability of $h'_n$. So the counterfactual probability is defined as:

  • $P_\mu(h'_n \mid h_n, h'_m = h_m) = \sum_{s_m} P_\mu(s_m \mid h_n) \, P_\mu(h'_n \mid s_m, h'_m = h_m)$.
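
To make the definition concrete, here is a minimal sketch in Python. Nothing in it comes from the note itself: the dictionary encoding, the restriction to deterministic per-state observations, and all names are my own simplifying choices. It forms the posterior over $s_m$ from the whole actual history, then runs the counterfactual actions forward from that posterior; the second factor in the sum above is the probability that, starting from $s_m$ and taking the actions of $h'_n$ after timestep $m$, the agent sees the corresponding observations of $h'_n$.

```python
# A finite POMDP, encoded (for illustration only) as:
#   "init":  {state: probability of being the initial state}
#   "trans": {(state, action): {next state: probability}}
#   "obs":   {state: observation emitted on entering that state}
# Observations are deterministic per state just to keep the sketch short.
# A history is a tuple of (action, observation) pairs; the initial state
# itself is never observed.

def forward(pomdp, hist, dist=None):
    """Return (P(hist), unnormalised state distribution after hist).

    The distribution maps each state s to P(s and hist); its total is P(hist).
    """
    if dist is None:
        dist = dict(pomdp["init"])
    for action, obs in hist:
        new_dist = {}
        for s, p in dist.items():
            for s2, q in pomdp["trans"][(s, action)].items():
                if pomdp["obs"][s2] == obs:
                    new_dist[s2] = new_dist.get(s2, 0.0) + p * q
        dist = new_dist
    return sum(dist.values()), dist

def state_posterior(pomdp, hist, m):
    """P(s_m | h_n): distribution over the state at timestep m given the whole
    history (assumes hist itself has positive probability)."""
    _, prefix = forward(pomdp, hist[:m])                       # P(s_m and h_m)
    weights = {s: p * forward(pomdp, hist[m:], {s: 1.0})[0]    # x P(rest of h_n | s_m)
               for s, p in prefix.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

def counterfactual_prob(pomdp, actual, counterfactual, m):
    """P(h'_n | h_n, h'_m = h_m): the counterfactual probability defined above."""
    assert actual[:m] == counterfactual[:m], "histories must agree up to timestep m"
    posterior = state_posterior(pomdp, actual, m)
    return sum(p * forward(pomdp, counterfactual[m:], {s: 1.0})[0]
               for s, p in posterior.items())
```

The important point is that `state_posterior` conditions on the whole of the actual history, not just its first $m$ steps; that is what makes this a counterfactual rather than a simple replay from the belief state at timestep $m$.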

Counterfactual equivalence

Though the counterfactual probability is defined in terms of the states, the expression itself only involves histories.

Thus if $\mu$ and $\mu'$ are two POMDPs with the same sets of actions and observations (but potentially different sets of states), we can say they are counterfactually equivalent if they generate the same counterfactual probabilities:

  • $P_\mu(h'_n \mid h_n, h'_m = h_m) = P_{\mu'}(h'_n \mid h_n, h'_m = h_m)$ for all histories $h_m$, $h_n$, and $h'_n$.
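
For small finite POMDPs, this can be checked by brute force. The sketch below reuses `forward` and `counterfactual_prob` from the earlier block; the enumeration bound and numerical tolerance are my own choices, not part of the definition.

```python
from itertools import product

def all_histories(actions, observations, length):
    """Every (action, observation) sequence of exactly the given length."""
    steps = list(product(actions, observations))
    return list(product(steps, repeat=length))

def counterfactually_equivalent(mu1, mu2, actions, observations, max_len, tol=1e-9):
    """Compare counterfactual probabilities of mu1 and mu2 on all histories
    of up to max_len steps (only histories possible in both are considered)."""
    for n in range(1, max_len + 1):
        for h in all_histories(actions, observations, n):
            if forward(mu1, h)[0] < tol or forward(mu2, h)[0] < tol:
                continue  # this history cannot actually occur
            for m in range(n):
                for h2 in all_histories(actions, observations, n):
                    if h2[:m] != h[:m]:
                        continue  # counterfactual must share the prefix h_m
                    if abs(counterfactual_prob(mu1, h, h2, m)
                           - counterfactual_prob(mu2, h, h2, m)) > tol:
                        return False
    return True
```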

Consider the simple POMDP $M$ (actually, an MDP, since it’s fully observable), defined as:

[diagram of $M$]

The reasons for the notation will be explained in a later post; but here, starting from a single state, the agent can take one of two actions, $a_0$ or $a_1$, and each action has a $1/2$ chance of ending up in each of two states, observed as $o_0$ and $o_1$.
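
In the dictionary encoding of the sketches above, one concrete instantiation of $M$ looks as follows. The state labels, the dummy initial observation, and the absorbing self-loops (added only so that longer histories are well defined) are my own additions, not part of the diagram.

```python
# M: one observable start state; either action is a fair coin flip between
# the two states observed as o0 and o1, and everything is absorbing after that.
M = {
    "init":  {"start": 1.0},
    "trans": {("start", "a0"): {"s0": 0.5, "s1": 0.5},
              ("start", "a1"): {"s0": 0.5, "s1": 0.5},
              ("s0", "a0"): {"s0": 1.0}, ("s0", "a1"): {"s0": 1.0},
              ("s1", "a0"): {"s1": 1.0}, ("s1", "a1"): {"s1": 1.0}},
    "obs":   {"start": "o_init", "s0": "o0", "s1": "o1"},
}
```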

Now consider the POMDP $M'$, defined as:

[diagram of $M'$]

Here there are two possible initial states, each equally likely, and each action then leads with certainty to one of the states observed as $o_0$ or $o_1$, determined by the initial hidden state and the action choice.
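
A matching instantiation of $M'$ in the same encoding (again with my own labels, and with one particular choice of the deterministic map from hidden state and action to outcome, since the original diagram isn’t reproduced here):

```python
# M': two equally likely hidden initial states x0, x1; the hidden state and
# the action together determine the outcome with certainty (here, a_i taken
# in x_j leads to the state observed as o_{i XOR j}).  Absorbing afterwards.
M_prime = {
    "init":  {"x0": 0.5, "x1": 0.5},
    "trans": {("x0", "a0"): {"s0": 1.0}, ("x0", "a1"): {"s1": 1.0},
              ("x1", "a0"): {"s1": 1.0}, ("x1", "a1"): {"s0": 1.0},
              ("s0", "a0"): {"s0": 1.0}, ("s0", "a1"): {"s0": 1.0},
              ("s1", "a0"): {"s1": 1.0}, ("s1", "a1"): {"s1": 1.0}},
    "obs":   {"x0": "o_init", "x1": "o_init", "s0": "o0", "s1": "o1"},
}
```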

Note that there are four possible histories on $M$ and $M'$; two are compatible with each action, and those two are equally probable given that action. So every history will appear with equal probability on $M$ and $M'$: they are observationally equivalent.

However, they are not counterfactually equivalent. For instance, taking the shared prefix to be the empty history $h_0$, we have $P_M(a_1 o_0 \mid a_0 o_0, h'_0 = h_0) = 1/2$, but

  • $P_{M'}(a_1 o_0 \mid a_0 o_0, h'_0 = h_0)$ is either $0$ or $1$, since observing $a_0 o_0$ on $M'$ reveals the initial hidden state, which then determines the outcome of $a_1$ with certainty.
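
With the toy encodings above, both claims can be checked numerically (the exact $0$/$1$ split on $M'$ depends on the deterministic map chosen there):

```python
# Observational equivalence: every one-step history has the same probability.
for a in ("a0", "a1"):
    for o in ("o0", "o1"):
        assert abs(forward(M, ((a, o),))[0] - forward(M_prime, ((a, o),))[0]) < 1e-9

# Counterfactual difference: the agent actually did a0 and saw o0, and asks
# what would have happened had it done a1 instead (divergence at m = 0).
actual = (("a0", "o0"),)
print(counterfactual_prob(M, actual, (("a1", "o0"),), 0))        # 0.5
print(counterfactual_prob(M_prime, actual, (("a1", "o0"),), 0))  # 0.0
print(counterfactual_prob(M_prime, actual, (("a1", "o1"),), 0))  # 1.0
```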

Conversely, if we consider the POMDP $M''$:

[diagram of $M''$]

Then it’s not hard to check that it’s counterfactually equivalent with $M$.
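
Since the diagram for $M''$ isn’t reproduced here, the following is only my guess at the kind of construction involved: a POMDP whose hidden initial state records, for each action separately, which outcome that action would produce. It has a different state set from $M$, yet the brute-force check above reports the two as counterfactually equivalent (and correctly rejects $M'$).

```python
# A candidate M'' (my construction, not taken from the post's diagram):
# the hidden initial state x_jk means "a0 would lead to o_j and a1 to o_k",
# with all four possibilities equally likely.  Transitions are deterministic,
# and absorbing after the first step as before.
M_dd = {
    "init":  {"x00": 0.25, "x01": 0.25, "x10": 0.25, "x11": 0.25},
    "trans": {("x00", "a0"): {"s0": 1.0}, ("x00", "a1"): {"s0": 1.0},
              ("x01", "a0"): {"s0": 1.0}, ("x01", "a1"): {"s1": 1.0},
              ("x10", "a0"): {"s1": 1.0}, ("x10", "a1"): {"s0": 1.0},
              ("x11", "a0"): {"s1": 1.0}, ("x11", "a1"): {"s1": 1.0},
              ("s0", "a0"): {"s0": 1.0}, ("s0", "a1"): {"s0": 1.0},
              ("s1", "a0"): {"s1": 1.0}, ("s1", "a1"): {"s1": 1.0}},
    "obs":   {"x00": "o_init", "x01": "o_init", "x10": "o_init",
              "x11": "o_init", "s0": "o0", "s1": "o1"},
}

acts, obs = ("a0", "a1"), ("o0", "o1")
print(counterfactually_equivalent(M, M_dd, acts, obs, max_len=2))      # True
print(counterfactually_equivalent(M, M_prime, acts, obs, max_len=2))   # False
```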
