# 1 Is there a divergence between what’s rational to do and what type of agent you should want to be?

> This may just be the crux of our disagreement. I claim there is no difference here: the questions *What type of agent do I want to be?* and *What decision should I make in this scenario?* are equivalent. If it is wise to do X in a given problem, then you want to be an X-ing agent, and if you should be an X-ing agent, then it is wise to do X. The only way to do X is to have a decision procedure that does X, which makes you an X-ing agent. And if you are an X-ing agent, you have a decision procedure that does X, so you do X.

I think this quote from Heighn expresses the core of the disagreement. Heighn claims that it’s rational to do X if and only if it’s best to be the type of agent who does X; he asserts, seemingly without argument, that if it’s wise to do X in a given problem, then you want to be an X-ing agent. So the question is: is this a plausible stipulation? Specifically, let’s think about a case I gave earlier, of which Heighn provides a very precise summary:

> Unfortunately, omnizoid once again doesn’t clearly state the problem—but I assume he means that
>
> - there’s an agent who can (almost) perfectly predict whether people will cut off their legs once they exist
> - this agent only creates people who he predicts will cut off their legs once they exist
> - existing with legs > existing without legs > not existing

Heighn’s diagnosis of such a case is that it’s rational to cut off your legs once you exist, because the types of agents who do that are more likely to exist and thus do better. I think this is really implausible: once you exist, you don’t care about the probability of agents like you existing. You already exist! Why should the probability of your existence conditional on certain actions matter? Your actions can’t affect it anymore, because you already exist. In response to this, Heighn says:

> FDT’ers would indeed cut off their legs: otherwise they wouldn’t exist. omnizoid seems to believe that once you already exist, cutting off your legs is ridiculous. This is understandable, but ultimately false. The point is that your decision procedure doesn’t make the decision just once. Your decision procedure also makes it in the predictor’s head, when she is contemplating whether or not to create you. There, deciding not to cut off your legs will prevent the predictor from creating you.

But you only make decisions after you exist. Of course, your decisions influence whether or not you exist, but they don’t happen until after you exist. And once you exist, no matter how you act, there is a zero percent chance that you won’t exist.
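The disagreement can be made concrete with a toy calculation. Below is a minimal sketch in Python; the numeric utilities (10 for existing with legs, 5 for existing without, 0 for not existing) and the 99% predictor accuracy are illustrative assumptions of mine, chosen only to respect the preference ordering stipulated in the case.

```python
# A toy model of the leg-cutting case. The utilities below are my own
# illustrative assumptions, chosen only to respect the stated ordering:
# existing with legs > existing without legs > not existing.
U_LEGS, U_NO_LEGS, U_NOT_EXISTING = 10, 5, 0
ACCURACY = 0.99  # assumed near-perfect predictor

def ex_ante_value(policy_cuts: bool) -> float:
    """Expected utility evaluated before creation: the predictor only
    creates people she predicts will cut off their legs."""
    p_exist = ACCURACY if policy_cuts else 1 - ACCURACY
    u_if_exist = U_NO_LEGS if policy_cuts else U_LEGS
    return p_exist * u_if_exist + (1 - p_exist) * U_NOT_EXISTING

def ex_post_value(act_cuts: bool) -> float:
    """Utility evaluated after creation: existence is already settled,
    so only the act itself matters."""
    return U_NO_LEGS if act_cuts else U_LEGS

print(ex_ante_value(True), ex_ante_value(False))  # committing to cut wins ex ante
print(ex_post_value(True), ex_post_value(False))  # keeping your legs wins ex post
```

Evaluated ex ante, committing to cut wins (4.95 vs. 0.1 in expectation); evaluated ex post, keeping your legs wins (10 vs. 5). FDT takes the first comparison to settle the matter; my claim is that once you exist, only the second comparison is relevant.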

So I think reflecting on this case, and similar cases, shows quite definitively that it can sometimes be worth being the type of agent who acts irrationally. Specifically, if you are artificially rewarded for irrationality prior to the irrational act, such that by the time you can take the irrational act you have already received the reward and thus gain nothing from the act itself, then it pays to be the type of agent who acts irrationally. If rationality were just about being the type of agent who gets more money on average, there would be no dispute: no one denies that one-boxers in Newcomb’s problem get more money on average. What two-boxers claim is that the situation is artificially rigged against them, such that they lose by being rational. Once the predictor has run their course, two-boxing gets you free money.
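Both claims about Newcomb’s problem can be checked with a few lines of arithmetic. A sketch, assuming the standard payoffs ($1,000,000 in the opaque box iff one-boxing is predicted, $1,000 always in the clear box) and a stipulated 99% predictor accuracy of my own choosing:

```python
# A toy Newcomb's problem with the standard payoffs: the opaque box holds
# $1,000,000 iff the predictor predicted one-boxing; the clear box always
# holds $1,000. The 99% accuracy figure is an illustrative assumption.
MILLION, THOUSAND = 1_000_000, 1_000
ACCURACY = 0.99

def average_payout(one_boxes: bool) -> float:
    """Expected payout evaluated before the prediction is made."""
    p_million = ACCURACY if one_boxes else 1 - ACCURACY
    expected = p_million * MILLION
    return expected if one_boxes else expected + THOUSAND

def payout_given_contents(one_boxes: bool, million_present: bool) -> int:
    """Payout once the boxes are already filled and cannot change."""
    opaque = MILLION if million_present else 0
    clear = 0 if one_boxes else THOUSAND
    return opaque + clear

print(average_payout(True), average_payout(False))  # one-boxers average more
# With the contents fixed, two-boxing gains exactly $1,000 either way:
for filled in (True, False):
    print(payout_given_contents(False, filled) - payout_given_contents(True, filled))
```

One-boxers average far more ($990,000 vs. $11,000 here), yet once the boxes are filled, two-boxing yields exactly $1,000 more whatever the opaque box contains. That is the precise sense in which the game is rigged against the two-boxer.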

But I think we have a knock-down argument for the conclusion that it doesn’t always pay to be rational: there are cases where even FDTists would agree that being rational makes you worse off. Let me give four examples (the passage in quotes is from my last article):

“After all, if there’s a demon who pays a billion dollars to everyone who follows CDT or EDT then FDTists will lose out. The fact you can imagine a scenario where people following one decision theory are worse off is totally irrelevant—the question is whether a decision theory provides a correct account of rationality.”

Suppose that in the agent-creation scenario, the agent gets very lucky and is created anyway. On this account, CDTists win more than FDTists: they keep their legs.

There are roving bands of people who beat up those who follow FDT. In this world, FDTists will get less utility. Still, that doesn’t mean FDT is wrong; it just means that acting rationally doesn’t always pay.

A demon tortures only people who are very rational. Here, it clearly pays to be irrational.

I think this is the crux of the disagreement. But it just seems patently obvious that if the world is rigged such that those who are rational are stipulated to get worse payouts, then it won’t always pay to be rational.

# 2 Is Schwarz’s procreation case unfair?

Schwarz’s procreation case is the following:

> Procreation. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed FDT. If FDT were to recommend not procreating, there’s a significant probability that I wouldn’t exist. I highly value existing (even miserably existing). So it would be better if FDT were to recommend procreating. So FDT says I should procreate. (Note that this (incrementally) confirms the hypothesis that my father used FDT in the same choice situation, for I know that he reached the decision to procreate.)

> This problem doesn’t fairly compare FDT to CDT though. By specifying that the father follows FDT, FDT’ers can’t possibly do better than procreating. Procreation directly punishes FDT’ers—not because of the decisions FDT makes, but for following FDT in the first place.

They could do better. They could follow CDT and never pass up on the free value of remaining child-free.

> No! Following CDT wasn’t an option. The question was whether or not to procreate, and I maintain that Procreation is unfair towards FDT.

Yes, the decision is whether or not to procreate. I maintain that in such scenarios, it’s rational to follow CDT—Heighn says that it’s rational to follow FDT. Agents who follow my advice objectively do better—Heighn’s agents are miserable, mine aren’t!

# 3 Reasons I refer to FDT as crazy, not just implausible or false

In my article, I note:

> I do not know of a single academic decision theorist who accepts FDT. When I bring it up with people who know about decision theory, they treat it with derision and laughter.

Heighn replies:

> They should write up a critique!

Some of them have: MacAskill, for example. But the reason I point this out is twofold.

I think higher-order evidence of this kind is useful. If 99.9% of people who seriously study a topic agree about X, and you disagree about X, you should think your reasoning has gone wrong.

The view I defend in my article isn’t just that FDT is wrong, but that it’s crazy: crazy in the sense that pretty much anyone who seriously considered the issue for long enough, without being biased or misled on decision theory, would give it up. Such an accusation is pretty dramatic, the kind of thing usually reserved for views like flat-Earthism or creationism! But it becomes less so when the idea is one rejected by perhaps every practicing decision theorist, and popular mainly among people building AI, who seem to be answering a fundamentally different question.

# 4 Implausible discontinuities

MacAskill has a critique of FDT that is, to my mind, pretty damning:

> A related problem is as follows: FDT treats ‘mere statistical regularities’ very differently from predictions. But there’s no sharp line between the two. So it will result in implausible discontinuities. There are two ways we can see this.
>
> First, take some physical processes S (like the lesion from the Smoking Lesion) that causes a ‘mere statistical regularity’ (it’s not a Predictor). And suppose that the existence of S tends to cause both (i) one-boxing tendencies and (ii) whether there’s money in the opaque box or not when decision-makers face Newcomb problems. If it’s S alone that results in the Newcomb set-up, then FDT will recommend two-boxing.
>
> But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S and, if the agent sees that S will cause decision-maker X to be a one-boxer, then the agent puts money in X’s opaque box. Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing. But this seems arbitrary — why should the fact that S’s causal influence on whether there’s money in the opaque box or not goes via another agent make such a big difference? And we can think of all sorts of spectrum cases in between the ‘mere statistical regularity’ and the full-blooded Predictor: What if the ‘predictor’ is a very unsophisticated agent that doesn’t even understand the implications of what they’re doing? What if they only partially understand the implications of what they’re doing? For FDT, there will be some point of sophistication at which the agent moves from simply being a conduit for a causal process to instantiating the right sort of algorithm, and suddenly FDT will switch from recommending two-boxing to recommending one-boxing.
>
> Second, consider that same physical process S, and consider a sequence of Newcomb cases, each of which gradually make S more and more complicated and agent-y, making it progressively more similar to a Predictor making predictions. At some point, on FDT, there will be a point at which there’s a sharp jump; prior to that point in the sequence, FDT would recommend that the decision-maker two-boxes; after that point, FDT would recommend that the decision-maker one-boxes. But it’s very implausible that there’s some S such that a tiny change in its physical makeup should affect whether one ought to one-box or two-box.

I’m quoting the argument in full because I think it makes things clearer. Heighn responds:

> This is just wrong: the critical factor is not whether “there’s an agent making predictions”. The critical factor is subjunctive dependence, and there is no subjunctive dependence between S and the decision maker here.

I responded to that, saying:

> But in this case there is subjunctive dependence. The agent’s report depends on whether the person will actually one box on account of the lesion. Thus, there is an implausible discontinuity, because whether one ought to one-box depends on the precise causal mechanisms behind the box.

Heighn responded to this saying:

> No, there is no subjunctive dependence. Yes,
>
> > The agent’s report depends on whether the person will actually one box on account of the lesion.
>
> but that’s just a correlation. This problem is just Smoking Lesion, where FDT smokes. The agent makes her prediction by looking at S, and S is explicitly stated to cause a ‘mere statistical regularity’. It’s even said that S is “not a Predictor”. So there is no subjunctive dependence between X and S, and by extension, not between X and the agent.

But there is! The agent looks at the brain and then acts only if the brain is likely to output one-boxing. The mechanism is structurally similar to Newcomb’s problem: the agent predicts what the person will do, so the prediction depends on the output of the person’s cognitive algorithm. Whether the agent runs the same cognitive algorithm or just relies on something that predicts its result seems flatly irrelevant: both cases give identical payouts and identical odds of affecting the payouts.

# 5 Conclusion

That’s probably a wrap! The basic problem with FDT is the one MacAskill described: it’s fine as a theory of what type of agent to be (sometimes it pays to be irrational), but it’s flatly crazy as a theory of rationality. The correct theory of rationality would not instruct you to cut off your legs when doing so is guaranteed not to benefit you, or anyone else.