No, EDT Did Not Get It Right All Along: Why the Coin Flip Creation Problem Is Irrelevant

Back in 2017, Johannes_Treutlein published a post critiquing logical decision theories: Did EDT get it right all along? Introducing yet another medical Newcomb problem. In it, Treutlein presents the Coin Flip Creation problem (and a second version of it) and argues that logical decision theories (like Updateless Decision Theory (UDT) and Functional Decision Theory (FDT)) handle it incorrectly. After reading the post, it seems to me that Treutlein's argument is flawed, and while I am probably not the first to notice this (or even write about it), I still think it's important to discuss, as I am afraid more people will make the same mistake.

Note that I will mostly be talking about how FDT handles the problems Treutlein presents, as this is the theory I have some expertise in.

The Coin Flip Creation Problem

From the original post:

One day, while pondering the merits and demerits of different acausal decision theories, you’re visited by Omega, a being assumed to possess flawless powers of prediction and absolute trustworthiness. You’re presented with Newcomb’s paradox, but with one additional caveat: Omega informs you that you weren’t born like a normal human being, but were instead created by Omega. On the day you were born, Omega flipped a coin: If it came up heads, Omega created you in such a way that you would one-box when presented with the Coin Flip Creation problem, and it put $1 million in box A. If the coin came up tails, you were created such that you’d two-box, and Omega didn’t put any money in box A. We don’t know how Omega made sure what your decision would be. For all we know, it may have inserted either CDT or EDT into your source code, or even just added one hard-coded decision rule on top of your messy human brain. Do you choose both boxes, or only box A?

Treutlein claims EDT one-boxes and thereby "gets it right". But I think it's wrong to even ask what a decision theory would do in this problem: my claim is that this is not a proper decision-theoretic problem. It's an interesting thought experiment, but it is of little value to decision theory. Why? Because the question

Do you choose both boxes, or only box A?

has two branches:

  1. If Omega flipped heads, do you choose both boxes, or only box A?

  2. If Omega flipped tails, do you choose both boxes, or only box A?

In both cases, the answer is already given in the problem statement. In case 1, Omega created you as a one-boxer, and in case 2, you were created as a two-boxer.

Treutlein claims logical decision theories (like UDT and FDT) get this problem wrong, but there literally is no right or wrong here. Without Omega's modification at creation, FDT would two-box (and rightly so). With the modification, there is, in case 1, no FDT anymore (Omega has modified the agent into a one-boxer), so the question becomes incoherent. The question is only coherent for case 2, where FDT makes the right decision (two-boxing, earning $1,000 > $0). And it's not FDT's fault that there's no $1,000,000 to earn in case 2: that is purely the result of a coin flip before the agent even existed, not the result of any decision made by the agent. In fact, the whole outcome of this game is determined purely by the outcome of the coin flip! Hence my claim that this is not a proper decision-theoretic problem.
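To make this concrete, here is a minimal sketch (in Python, assuming the standard Newcomb payoff of $1,000 in box B) of the payoff structure as I read the problem statement. Note that the function takes only the coin flip as input: there is no decision theory to plug in, because the coin fixes both the contents of box A and the agent's decision.

```python
# A minimal sketch of the Coin Flip Creation problem, assuming the standard
# Newcomb payoffs: box B always holds $1,000, and box A holds $1,000,000 only
# if the coin came up heads. The agent's decision is forced by the same coin,
# so the payoff is a function of the coin flip alone.

def coin_flip_creation(coin: str) -> tuple[str, int]:
    """Return the forced decision and resulting payoff for a given coin flip."""
    if coin == "heads":
        box_a = 1_000_000        # Omega fills box A ...
        decision = "one-box"     # ... and hard-codes a one-boxer
    else:
        box_a = 0                # Omega leaves box A empty ...
        decision = "two-box"     # ... and hard-codes a two-boxer

    box_b = 1_000
    payoff = box_a if decision == "one-box" else box_a + box_b
    return decision, payoff

for coin in ("heads", "tails"):
    decision, payoff = coin_flip_creation(coin)
    print(f"{coin}: forced to {decision}, payoff ${payoff:,}")
# heads: forced to one-box, payoff $1,000,000
# tails: forced to two-box, payoff $1,000
```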

Treutlein does (sort of) address my counterargument:

There seems to be an especially strong intuition of “absence of free will” inherent to the Coin Flip Creation problem. When presented with the problem, many respond that if someone had created their source code, they didn’t have any choice to begin with. But that’s the exact situation in which we all find ourselves at all times! Our decision architecture and choices are determined by physics, just like a hypothetical AI’s source code, and all of our choices will thus be determined by our “creator.” When we’re confronted with the two boxes, we know that our decisions are predetermined, just like every word of this blogpost has been predetermined. But that knowledge alone won’t help us make any decision.

Indeed. An AI always does what its source code says, so in a way its decisions are determined by its creator. This is why my intuition in Newcomb's problem is not so much "What action should the agent take?" but "What source code (or decision procedure) should the agent run?" This phrasing makes it clearer that the decision does influence whether there's $1,000,000 to earn: actions can't cause the past, but your source code (or decision procedure) could have been simulated by Omega. But predetermined actions are not my objection to the Coin Flip Creation problem. In Newcomb's problem, your action is predetermined, yet your decision still influences the outcome of the game. I want to run a one-boxing procedure, as that would give me $1,000,000 in Newcomb's problem. What procedure do I want to run in the Coin Flip Creation problem? That question doesn't make sense: in the Coin Flip Creation problem, my decision procedure is determined by the coin flip!
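For contrast, here is a rough sketch of standard Newcomb's problem, under the common assumption that Omega predicts by simulating the agent's decision procedure. Here the payoff is a function of which procedure the agent runs, which is exactly why "What procedure do I want to run?" is a meaningful question there, and not in the Coin Flip Creation problem, where the coin picks the procedure for you.

```python
# A sketch of standard Newcomb's problem, assuming Omega predicts the agent
# by simulating its decision procedure. The payoff depends on the procedure
# the agent runs, so choosing a procedure is a real decision problem.

from typing import Callable

Procedure = Callable[[], str]   # returns "one-box" or "two-box"

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def newcomb(procedure: Procedure) -> int:
    prediction = procedure()                    # Omega simulates the procedure
    box_a = 1_000_000 if prediction == "one-box" else 0
    box_b = 1_000
    decision = procedure()                      # the agent runs the same procedure
    return box_a if decision == "one-box" else box_a + box_b

print(f"{newcomb(one_boxer):,}")  # 1,000,000 -- the procedure I want to run
print(f"{newcomb(two_boxer):,}")  # 1,000
```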

Coin Flip Creation, Version 2

From the original post:

The situation is identical to the Coin Flip Creation, with one key difference: After Omega flips the coin and creates you with the altered decision algorithm, it actually simulates your decision, just as in Newcomb’s original paradox. Only after Omega has determined your decision via simulation does it decide whether to put money in box A, conditional on your decision. Do you choose both boxes, or only box A?

Treutlein claims UDT one-boxes on this version while it two-boxes on the original version, and he finds this discrepancy curious. My objection remains: this, too, is not a problem for decision theories, since the decision procedure is determined by the coin flip in the problem statement.
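For completeness, here is a sketch of version 2 under the same assumptions as the earlier sketches: the coin still fixes the decision procedure, and Omega's extra simulation step merely routes the coin's influence through that procedure, so the payoff remains a function of the coin flip alone.

```python
# A sketch of Coin Flip Creation, version 2: the coin forces the decision
# procedure, and Omega fills box A by simulating that forced procedure rather
# than by looking at the coin directly. The extra simulation step changes
# nothing: the payoff is still fully determined by the coin flip.

def coin_flip_creation_v2(coin: str) -> tuple[str, int]:
    # The coin fixes the procedure the agent will run.
    forced_procedure = (lambda: "one-box") if coin == "heads" else (lambda: "two-box")

    # Omega fills box A conditional on a simulation of that procedure.
    prediction = forced_procedure()
    box_a = 1_000_000 if prediction == "one-box" else 0
    box_b = 1_000

    # The agent then runs the same (forced) procedure.
    decision = forced_procedure()
    payoff = box_a if decision == "one-box" else box_a + box_b
    return decision, payoff

for coin in ("heads", "tails"):
    decision, payoff = coin_flip_creation_v2(coin)
    print(f"{coin}: forced to {decision}, payoff ${payoff:,}")
# heads: forced to one-box, payoff $1,000,000
# tails: forced to two-box, payoff $1,000
```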