Critiquing Scasper’s Definition of Subjunctive Dependence

Before we get to the main part, a note: this post discusses subjunctive dependence and assumes the reader has a background in Functional Decision Theory (FDT). For readers who don’t, I recommend this paper by Eliezer Yudkowsky and Nate Soares. Since it is so central to this post, I will quote their definition of subjunctive dependence here:

When two physical systems are computing the same function, we will say that their behaviors “subjunctively depend” upon that function.

So, I just finished reading Dissolving Confusion around Functional Decision Theory by Stephen Casper (scasper). In it, Casper explains FDT quite well and makes some good points, such as noting that FDT doesn’t assume causation can happen backwards in time. However, Casper makes a claim about subjunctive dependence that’s not only wrong, but might add to the confusion around FDT:

Suppose that you design some agent who enters an environment with whatever source code you gave it. Then if the agent’s source code is fixed, a predictor could exploit certain statistical correlations without knowing the source code. For example, suppose the predictor used observations of the agent to make probabilistic inferences about its source code. These could even be observations about how the agent acts in other Newcombian situations. Then the predictor could, without knowing what function the agent computes, make better-than-random guesses about its behavior. This falls outside of Yudkowsky and Soares’ definition of subjunctive dependence, but it has the same effect.

To see where Casper goes wrong, let’s look at a clear example. The classic Newcomb’s problem will do, but now Omega isn’t running a model of your decision procedure; instead, she has observed your one-box/two-box choices in 100 earlier instances of the game and bases her prediction on the percentage of times you one-boxed. That is, Omega predicts you one-box iff that percentage is greater than 50. Now, in the version where Omega is running a model of your decision procedure, your behavior and that of Omega’s model subjunctively depend on the same function. In our current version this isn’t the case, as Omega isn’t running such a model; however, Omega’s prediction is based on observations that are causally influenced by your historic choices, made by historic versions of you. Crucially, “current you” and every “historic you” therefore subjunctively depend on your decision procedure. The FDT graph for this version of Newcomb’s problem looks like this:

FDT Graph for Newcomb’s problem with prediction based on observations
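To make the setup concrete, here is a minimal sketch of this observation-based Omega in Python. The class and function names, and the 80/20 example history, are my own illustration rather than anything from Casper’s post or the FDT paper.

```python
from dataclasses import dataclass

@dataclass
class ObservationBasedOmega:
    """An Omega who predicts from an observed history of choices,
    not from a model of the agent's decision procedure (illustrative sketch)."""
    observed_choices: list  # past choices, each "one-box" or "two-box"

    def predicts_one_box(self) -> bool:
        # Predict one-boxing iff the agent one-boxed in more than 50% of observed games.
        one_box_fraction = self.observed_choices.count("one-box") / len(self.observed_choices)
        return one_box_fraction > 0.5

def payoff(choice: str, box_b_filled: bool) -> int:
    # Box A always contains $1,000; box B contains $1,000,000 iff Omega filled it.
    box_a = 1_000
    box_b = 1_000_000 if box_b_filled else 0
    return box_b if choice == "one-box" else box_a + box_b

# Example: an agent who one-boxed in 80 of 100 earlier games.
omega = ObservationBasedOmega(["one-box"] * 80 + ["two-box"] * 20)
filled = omega.predicts_one_box()   # True
print(payoff("one-box", filled))    # 1000000
print(payoff("two-box", filled))    # 1001000
```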

Let’s ask FDT’s question: “Which output of this decision procedure causes the best outcome?” If the answer is two-boxing, your decision procedure causes each historic instance of you to two-box, which causes Omega to predict you two-box. Your decision procedure also causes current you to two-box (the oval box on the right). The payoff is then calculated as it is in the classic Newcomb’s problem, and equals $1,000.

However, if the answer to the question is one-boxing, then every historic you and current you one-box. Omega predicts you one-box, giving you a payoff of $1,000,000.
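Reusing the ObservationBasedOmega and payoff helpers from the sketch above, FDT’s question for this problem can be spelled out as a small calculation. The function name is again my own, and the payoffs simply follow the standard Newcomb setup.

```python
def fdt_value(output: str) -> int:
    # If the decision procedure settles on `output`, then all 100 historic
    # instances of you produced that same output, so Omega's observed
    # one-box percentage is either 100% or 0%.
    history = [output] * 100
    omega = ObservationBasedOmega(history)
    return payoff(output, omega.predicts_one_box())

print(fdt_value("two-box"))  # 1000     -- Omega leaves box B empty
print(fdt_value("one-box"))  # 1000000  -- Omega fills box B
```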

Even better, if we assume FDT only faces this kind of Omega (and knows how this Omega operates), FDT can easily exploit Omega by one-boxing more than 50% of the time and two-boxing in the remaining cases. That way, Omega will keep predicting you one-box and keep filling box B. So when you one-box, you get $1,000,000, and when you two-box, you get the maximum payoff of $1,001,000. This way, FDT can achieve an average payoff approaching $1,000,500. I learned this from a conversation with one of the original authors of the FDT paper, Nate Soares (So8res).
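A quick back-of-the-envelope check of that number (the function and the specific probabilities are my own illustration, assuming the observed one-box percentage does stay above 50):

```python
def expected_exploit_payoff(p: float) -> float:
    # As long as the observed one-box fraction stays above 50%, Omega keeps
    # filling box B, so each game pays $1,000,000 when you one-box and
    # $1,001,000 when you two-box.
    assert p > 0.5, "one-boxing must stay above 50% for Omega to keep filling box B"
    return p * 1_000_000 + (1 - p) * 1_001_000

print(expected_exploit_payoff(0.51))   # 1000490.0
print(expected_exploit_payoff(0.501))  # 1000499.0 -- approaching $1,000,500
```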

So FDT solves the above version of Newcomb’s problem beautifully, and subjunctive dependence is very much in play here. Casper, however, offers his own definition of subjunctive dependence:

I should consider predictor P to “subjunctively depend” on agent A to the extent that P makes predictions of A’s actions based on correlations that cannot be confounded by my choice of what source code A runs.

I have yet to see a problem of decision-theoretic significance that falls within this definition of subjunctive dependence but outside of Yudkowsky and Soares’ definition. Furthermore, subjunctive dependence isn’t always about predicting future actions, so I object to the use of “predictor” in Casper’s definition. Most importantly, though, note that for a decision procedure to have any effect on the world, something or somebody must be computing (part of) it. Coming up with a Newcomb-like problem where your decision procedure has an effect at two different times/places without Yudkowsky and Soares’ subjunctive dependence being in play therefore seems impossible.