postCDT: Decision Theory using post-selected Bayes nets

The purpose of this post is to document a minor idea about a new type of decision theory that works using a Bayes net. This is not a concrete proposal, since I will give no insight into which Bayes net to use. I am not that excited by this proposal, but think it is worth writing up anyway.

The main object used in this proposal has more information than a standard Bayes net. It is the result you get by taking a Bayes net and conditioning on the values of some of the nodes. The data type is a Bayes net, along with a subset of the nodes, and values for each of those nodes. I will call this a post-selected Bayes net.
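To pin down the data type, here is a minimal sketch in Python. The representation (binary nodes, one conditional probability function per node, and the names used) is my own choice for illustration, not anything specified by the proposal.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A minimal sketch of the data type (names and representation are mine, for
# illustration). Each node maps to its list of parents and a conditional
# probability function; nodes are assumed binary, so cpt(*parent_values)
# returns P(node = 1 | parent values).
@dataclass
class PostSelectedBayesNet:
    nodes: Dict[str, Tuple[List[str], Callable[..., float]]]     # the underlying Bayes net
    post_selected: Dict[str, int] = field(default_factory=dict)  # node name -> fixed value
```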

First, an observation:

Given a collection of dependent random variables, one can sometimes infer a Bayes net relating them, for example by taking the network with the fewest parameters. Sometimes a network can be made simpler by adding in extra hidden variables, which are common causes. Sometimes it can be made simpler still by adding extra hidden variables which are common effects, and conditioning on the values of these common effects.
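To see how a post-selected common effect can stand in for a direct dependence, here is a toy illustration (my own example): two independent fair coins become perfectly correlated once we condition on a common-effect node recording whether they agree, even though the unconditioned net has no edge between them.

```python
import itertools

# Two independent fair coins A, B and a common effect E = 1 if A == B else 0.
# Post-selecting E = 1 induces a correlation between A and B that the
# unconditioned network does not have.
p_e1 = p_a1_e1 = p_b1_e1 = p_a1_b1_e1 = 0.0
for a, b in itertools.product([0, 1], repeat=2):
    p = 0.25                                  # A and B independent and fair
    if a == b:                                # E = 1 on this assignment
        p_e1 += p
        p_a1_e1 += p * (a == 1)
        p_b1_e1 += p * (b == 1)
        p_a1_b1_e1 += p * (a == 1 and b == 1)

print(p_a1_e1 / p_e1)        # P(A=1 | E=1)      = 0.5
print(p_a1_b1_e1 / p_b1_e1)  # P(A=1 | B=1, E=1) = 1.0: A and B now agree perfectly
```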

Philosophically, this is kind of weird. It is hard to imagine what it would mean if, while trying to discover the causal structure of the universe, we found some hidden variable that always turns out a certain way, in spite of appearing to be causally downstream of the other variables.

And now a decision theory, postCDT:

Given a post-selected Bayes net, a special node representing my action, and a special node representing my utility: sever all edges directed into my action node. For each possible action, compute the expected value of the utility node in the newly severed network, given that I take that action and that all the post-selected nodes have their specified values. Take the action with the highest expected utility.
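Here is a brute-force sketch of that procedure, under the simplifying assumptions that every node (including the utility node) is binary, so expected utility is just the probability that the utility node comes out 1, and that the net is small enough to enumerate. The `net` and `post_selected` arguments correspond to the fields of the data type sketched above; everything else is my own toy representation, not a concrete proposal.

```python
import itertools
from typing import Callable, Dict, List, Tuple

# net maps each node name to (parent names, cpt), where cpt(*parent_values)
# returns P(node = 1 | parent values). All nodes are binary.
Net = Dict[str, Tuple[List[str], Callable[..., float]]]

def post_cdt(net: Net, action: str, utility: str,
             post_selected: Dict[str, int]) -> int:
    def expected_utility(a: int) -> float:
        # Sever all edges directed into the action node by clamping it to `a`.
        severed = dict(net)
        severed[action] = ([], lambda a=a: float(a))
        nodes = list(severed)
        num = den = 0.0
        for bits in itertools.product([0, 1], repeat=len(nodes)):
            x = dict(zip(nodes, bits))
            # Condition on the post-selected nodes taking their specified values.
            if any(x[n] != v for n, v in post_selected.items()):
                continue
            # Probability of this joint assignment in the severed network.
            p = 1.0
            for n, (parents, cpt) in severed.items():
                p1 = cpt(*[x[q] for q in parents])
                p *= p1 if x[n] == 1 else 1.0 - p1
            den += p
            num += p * x[utility]  # binary utility, so E[U] = P(U = 1)
        return num / den if den > 0 else float("-inf")

    # Take the action with the highest expected utility in the severed, conditioned net.
    return max([0, 1], key=expected_utility)
```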

This can do some things that decision theories based on non-post-selected Bayes nets cannot:

Consider a twin prisoner’s dilemma. There is a node representing your output, a node representing your twin’s output, a node representing your utility, a node representing your twin’s utility, and possibly some other nodes correlating you with your twin. The two utilities are descendants of the two actions.

Observe that if you and your twin share the same non-post-selected diagram, then at least one of you must not be causally downstream of the other. Without loss of generality, assume that your twin is not causally downstream from you.

In this case, you do not affect your twin’s action, and CDT says you should defect. You could try to get past this by pretending that you are deciding for some abstract algorithm that is a parent of your action, but unless the two players are deciding for the exact same node, you will run into the same problem. It is even clearer that pretending you are choosing for an ancestor node does not work in a prisoner’s dilemma between different agents that are mutually predicting each other.

Now consider a post-selected Bayes net, where we add an extra node, which is a child of both your and your twin’s actions. This node is (approximately) 1 if your actions are the same, and 0 otherwise. This node is post-selected to be 1, explaining the correlation between you and your twin. In this case, postCDT will recommend cooperation, since your cooperation is correlated with your twin’s cooperation, even after severing the edges into your action (of which there were none in this simplified example).
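Working that example through by hand (with payoff numbers of my own choosing, rescaled into [0, 1] so the utility node can be read as a probability): once your action is clamped in the severed net, post-selecting the equality node to 1 forces your twin’s action to match yours, so cooperating beats defecting.

```python
# Twin prisoner's dilemma under postCDT. Actions: 1 = cooperate, 0 = defect.
# Payoffs (my own toy numbers) are scaled into [0, 1] so they can be read as
# P(utility node = 1 | my action, twin's action).
PAYOFF = {(1, 1): 2/3, (1, 0): 0.0, (0, 1): 1.0, (0, 0): 1/3}  # (me, twin)

def expected_utility(my_action: int) -> float:
    # Severed net: my action is clamped, my twin's action is an independent
    # coin flip, and the post-selected node Same = (me == twin) must be 1.
    num = den = 0.0
    for twin_action in [0, 1]:
        p = 0.5                        # twin's prior, with no edge from my action
        if my_action != twin_action:
            continue                   # Same = 0, removed by post-selection
        den += p
        num += p * PAYOFF[(my_action, twin_action)]
    return num / den

print(expected_utility(1))  # cooperate: 2/3
print(expected_utility(0))  # defect:    1/3, so postCDT cooperates
```

Without the post-selected equality node, the same severed net gives defection the higher expected utility, which is just the usual CDT verdict.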

A weird bug (or feature):

Note that if you connect up your action to logically equivalent nodes through common post-selected effects, then when you sever the parents, you do not also sever the parents of the other logically equivalent nodes. This seems weird, and makes it feel to me like this mostly just gets through CDT issues by effectively becoming a weird, unmotivated modification of EDT.

On the other hand, maybe the effect is to be like CDT when you are the only copy of yourself, but to become like EDT when other copies are involved, thus allowing it to be like an EDT that smokes in the smoking lesion problem.

Conclusion: I am not particularly excited by this proposal, partially because it seems to not be very philosophically motivated, and partially because I think that causality-based decision theories do not add anything to EDT. EDT has problems, all of which are in my mind orthogonal to the changes proposed in CDT. However, I think that it does provide an (ugly) fix to what is otherwise a hole in CDT-like proposals, and is probably what I would think about if I were to think about that class of proposals.