What is causality to an evidential decision theorist?

(Subsumed by: Timeless Decision Theory, EDT=CDT)

People sometimes object to evidential decision theory by saying: “It seems like the distinction between correlation and causation is really important to making good decisions in practice. So how can a theory like EDT, with no role for causality, possibly be right?”

Long-time readers probably know my answer, but I want to articulate it in a little bit more detail. This is essentially identical to the treatment of causality in Eliezer Yudkowsky’s manuscript Timeless Decision Theory, but much shorter and probably less clear.

Causality and conditional independence

If a system is well-described by a causal diagram, then it satisfies a complex set of statistical relationships. For example (both patterns are checked numerically in the sketch below):

  • In the causal graph A ⟶ B ⟶ C, the variables A and C are typically dependent, but independent given B.

  • In the graph A ⟶ B ⟵ C, the variables A and C are independent, but typically dependent given B.
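These patterns are easy to check by simulation. Here is a minimal sketch (assuming Python with numpy; the flip rates and sample size are arbitrary illustrations) that samples each variable as an independent stochastic function of its parents and estimates the relevant correlations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Chain A -> B -> C: each node is a noisy function of its parent.
A = rng.integers(0, 2, n)
B = A ^ (rng.random(n) < 0.2)        # B flips A 20% of the time
C = B ^ (rng.random(n) < 0.2)

print(corr(A, C))                    # clearly nonzero: A and C are dependent
print(corr(A[B == 1], C[B == 1]))    # ~0: independent once we condition on B

# Collider A -> B <- C: two independent causes of a common effect.
A = rng.integers(0, 2, n)
C = rng.integers(0, 2, n)
B = A ^ C

print(corr(A, C))                    # ~0: A and C are independent
print(corr(A[B == 1], C[B == 1]))    # -1: perfectly anti-correlated given B
```

In the chain, conditioning on B screens A off from C; in the collider, conditioning on B manufactures a perfect anti-correlation between two otherwise independent causes.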

To an evidential decision theorist, these kinds of statistical relationships are the whole story about causality, or at least about its relevance to decisions. We could still ask why such relationships exist, but the answer wouldn’t matter to what we should do.

EDT = CDT

Now suppose that I’m making a decision X, trying to optimize Y.

And suppose further that there is a complicated causal diagram containing X and Y, such that my beliefs satisfy all of the statistical relationships implied by that causal diagram.

Note that this diagram will necessarily contain me and all of the computation that goes into my decision, and so it will be (much) too large for me to reason about explicitly.

Then I claim that an evidential decision theorist will endorse the recommendations of CDT (using that causal diagram):

  • EDT recommends choosing X to maximize the conditional expectation of Y, conditioned on that choice together with all the inputs to X. Write Z for all of these inputs.

    • It might be challenging to condition on all of Z, given limits on our introspective ability, but we’d recommend doing it if possible. (At least for the rationalist’s interpretation of EDT, which evaluates expected utility conditioned on a fact of the form “I decided X given inputs Z.”)

    • So if we can describe a heuristic that gives us the same answer as conditioning on all of Z, then an EDT agent will want to use it.

    • I’ll argue that CDT is such a heuristic.

  • In a causal diagram, there is a simple graphical criterion (d-connectedness) for checking whether (and how) X and Y are related given Z:

    • We need to have a path from X to Y that satisfies certain properties:

    • That path can start out moving upstream (i.e. against the causal arrows); it may switch from moving upstream to downstream at any node outside of Z (including at the start); it may switch from moving downstream to upstream only at a node that is in Z (or that has a descendant in Z); and whenever it passes through a node in Z, it must make exactly that downstream-to-upstream switch there. (The sketch after this list implements this test.)

  • If Z includes exactly the causal parents of X, then it’s easy to check that the only way for X and Y to be d-connected is by a direct downstream path from X to Y. (A path that leaves X moving upstream immediately hits a parent of X, which is in Z but is not a collider on the path, so the path is blocked; and a path that leaves X moving downstream can never turn back upstream, since the collider where it turned would need a descendant in Z, i.e. a parent of X, which would create a cycle.)

  • Under these conditions, it’s easy to see that intervening on X is the same as conditioning on X. (Indeed you could check this more directly from the definition of a causal intervention, which is structurally identical to conditioning in cases where we are already conditioning on all parents.)
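To make the graphical criterion concrete, here is a brute-force sketch of the d-connectedness test described above (assuming Python; the graph encoding, function names, and example graphs are my own, and enumerating all paths is only reasonable for small diagrams):

```python
def descendants(dag, node):
    """All nodes reachable from `node` by following the arrows downstream."""
    seen, stack = set(), [node]
    while stack:
        for child in dag[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def d_connected(dag, x, y, z):
    """dag maps each node to the set of its children; z is the conditioning
    set. Returns True iff some path between x and y is active given z."""
    parents = {u: {p for p in dag if u in dag[p]} for u in dag}
    nbrs = {u: dag[u] | parents[u] for u in dag}   # undirected skeleton

    def active(path):
        for a, b, c in zip(path, path[1:], path[2:]):
            if b in dag[a] and b in dag[c]:        # collider: a -> b <- c
                if not ({b} | descendants(dag, b)) & z:
                    return False                   # collider outside z: blocked
            elif b in z:
                return False                       # chain/fork inside z: blocked
        return True

    def paths(u, visited):
        if u == y:
            yield visited
        else:
            for v in nbrs[u] - set(visited):
                yield from paths(v, visited + [v])

    return any(active(p) for p in paths(x, [x]))
```

A quick check of the claim about parents, in a graph where U is X’s only parent:

```python
dag = {"U": {"X", "W"}, "X": {"Y"}, "W": {"Y"}, "Y": set()}
print(d_connected(dag, "X", "Y", {"U"}))    # True, only via the edge X -> Y

# Delete that downstream edge: the back-door path X <- U -> W -> Y is
# active on its own, but conditioning on the parent U blocks it.
dag2 = {"U": {"X", "W"}, "X": set(), "W": {"Y"}, "Y": set()}
print(d_connected(dag2, "X", "Y", set()))   # True
print(d_connected(dag2, "X", "Y", {"U"}))   # False
```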

Moreover, once the evidential decision theorist’s problem is expressed this way, they can remove all of the causal nodes upstream of X, since those nodes have no further bearing on the decision. This is particularly valuable because the upstream part of the graph contains all of the complexity of their own decision-making process (which they had no hope of modeling anyway).

So if the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure.
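We can sanity-check this claim numerically. Here is a toy computation (again a Python sketch; the probability tables are made up) in the graph Z ⟶ X, Z ⟶ Y, X ⟶ Y, where Z is X’s only parent: conditioning on X alone disagrees with intervening on X because of the confounding through Z, but conditioning on X together with Z agrees with the intervention exactly:

```python
import itertools

# CPTs for the graph Z -> X, Z -> Y, X -> Y (all variables binary).
p_z = {0: 0.5, 1: 0.5}
p_x1_given_z = {0: 0.8, 1: 0.3}                      # P(X=1 | Z=z)
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.5,
                 (1, 0): 0.6, (1, 1): 0.9}           # P(Y=1 | X=x, Z=z)

def joint(do_x=None):
    """Joint distribution over (Z, X, Y); if do_x is set, perform graph
    surgery on X (cut the Z -> X edge and fix X = do_x)."""
    table = {}
    for z, x, y in itertools.product([0, 1], repeat=3):
        if do_x is None:
            px = p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]
        else:
            px = 1.0 if x == do_x else 0.0
        py = p_y1_given_xz[(x, z)] if y == 1 else 1 - p_y1_given_xz[(x, z)]
        table[(z, x, y)] = p_z[z] * px * py
    return table

def prob_y1(table, x, z=None):
    """P(Y=1 | X=x) from a joint table, optionally also conditioning on Z=z."""
    match = lambda zz, xx: xx == x and (z is None or zz == z)
    num = sum(p for (zz, xx, y), p in table.items() if match(zz, xx) and y == 1)
    den = sum(p for (zz, xx, y), p in table.items() if match(zz, xx))
    return num / den

obs = joint()
for x in (0, 1):
    intervened = joint(do_x=x)
    # Marginalizing out the parent Z, conditioning and intervening differ:
    print(x, prob_y1(obs, x), prob_y1(intervened, x))
    # Conditioning on all of X's parents, they coincide exactly:
    for z in (0, 1):
        assert abs(prob_y1(obs, x, z) - prob_y1(intervened, x, z)) < 1e-12
```

The assertion passes because, once all of X’s parents are held fixed, cutting the incoming edges to X changes nothing about the rest of the computation.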

Whence subjective causality?

You might think: causal diagrams encode a very specific kind of conditional independence structure. Why would we see that structure in the world so often? Is this just some weird linguistic game, where we rig up a contrived statistical structure that happens to give the same conclusions as more straightforward causal reasoning?

Indeed, one easy way to get such statistical relationships is to have “metaphysically fundamental” causality: if the world contains many variables, each of which is an independent stochastic function of its parents in some causal diagram, then those variables will satisfy all the conditional independencies implied by that causal diagram.

If this were the only way that we got subjective causality, then there’d be no difference between EDT and CDT, and no one would care about whether we treated causality as subjective or metaphysically fundamental.

But it’s not. There are other sources of similar statistical relationships. Moreover, “metaphysically fundamental” causality isn’t actually consistent with the subjective beliefs of a logically bounded agent.

We can illustrate both points with the calculator example from Yudkowsky’s manuscript:

  • Suppose there are two calculators, one in Mongolia and one on Neptune, each computing the same function (whose value we don’t know) at the same instant.

  • Our beliefs about the two calculators are correlated, since we know they compute the same function. This remains true after conditioning on all the physical facts about the two calculators.

  • But in the “metaphysically fundamental” causal diagram, the results of the two calculators should be d-separated once we know the physical facts about them (since there isn’t even enough time for causal influences to propagate between them).

  • We can recover the correct conditional independencies by adding a common cause of the two calculators, representing “what is the correct output of the calculation?” We might describe this as “logical” causality.
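Here is a toy version of this model (a Python sketch; the error rates are illustrative): treat the correct output as a latent node feeding both calculators, and note that the outputs stay correlated even after conditioning on each machine’s local physical state, while cutting the common cause removes the correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent "logical" node: the correct output of the shared computation.
L = rng.integers(0, 2, n)

# Independent physical error processes local to each calculator.
e_mongolia = rng.random(n) < 0.01
e_neptune = rng.random(n) < 0.01

out_mongolia = L ^ e_mongolia
out_neptune = L ^ e_neptune

# Condition on the local physical facts (here: neither machine malfunctioned).
# The outputs remain correlated, because the logical node L d-connects them:
ok = ~e_mongolia & ~e_neptune
print(np.corrcoef(out_mongolia[ok], out_neptune[ok])[0, 1])  # 1.0: identical

# Cutting the common cause (each calculator gets an independent "answer")
# removes the correlation entirely:
L2 = rng.integers(0, 2, n)
print(np.corrcoef((L ^ e_mongolia)[ok], (L2 ^ e_neptune)[ok])[0, 1])  # ~0
```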

This kind of “logical” causality can lead to major deviations from the CDT recommendation in cases where the EDT agent’s decision is highly correlated with other facts about the environment through non-physically-causal channels. For example: if there are two identical agents, or if someone else is reasoning about the agent’s decision sufficiently accurately, then the EDT agent would be inclined to say that the logical facts about their decision “cause” physical facts about the world (and hence induce correlations), whereas a CDT agent would say that those correlations should be ignored.

Punchline

EDT and CDT agree under two conditions: (i) our causal model of the world and our beliefs agree in the usual statistical sense, i.e. our beliefs satisfy the conditional independencies implied by our causal model; and (ii) we evaluate utility conditioned on “I make decision X after receiving inputs Z,” rather than conditioning on “I make decision X in the current situation” without including the relevant facts about the current situation.

In practice, I think the main way CDT and EDT differ is that CDT ends up in a complicated philosophical discussion about “what really is causality?” (and so splinters into a host of theories), while EDT picks a particular answer: for EDT, causality is completely characterized by condition (i), that our beliefs and our causal model agree. That makes it obvious how to generalize causality to logical facts (or to arbitrary universes with very different laws), while recovering the usual behavior of causality in typical cases.

I believe the notion of causality that is relevant to EDT is the “right” one, because causality seems like a concept developed to make and understand decisions (both over evolutionary time and, more importantly, over cultural evolution), rather than something ontologically fundamental that is needed even to define a correct decision.

If we take this perspective, it doesn’t matter whether we use EDT or CDT. I think this perspective basically accounts for intuitions about the importance of causality to decision-making, as well as the empirical importance of causality, while removing most of the philosophical ambiguity about causality. And it’s a big part of why I don’t feel particularly confused about decision theory.