Ordinary and unordinary decision theory

There are a couple of paradoxes in decision theory, typically involving beings making perfect predictions, copies of yourself, and similar shenanigans. If you’re unfamiliar with all of this, take a look at Newcomb’s problem. Attempted resolutions of these paradoxes usually involve either the construction of complex decision theories (such as FDT) or copious hand-waving and informal arguments (the philosophy literature on the subject).

This post has two messages.

  1. Most decision problems, including paradoxes, are ordinary decision problems.

  2. Unordinary decision problems can be identified by testing against Jensen’s inequality. These typically fall into two categories:

    1. Counterfactual game theory, and

    2. Precommitment problems, where the size of the action space affects the utility.

I have three guiding principles.

  1. Formulate your problems mathematically. Make sure all your quantities are defined. I’m not talking about complex math here. I’m just talking about doing the work. Sometimes it’s not possible to formalize the problem in a way that keeps every aspect of the original, informal problem description. And that’s OK!

  2. Ignore useless information. Find the minimal description of your problem and solve it.

  3. Take action sets seriously. If $a$ is an allowed action, you can do $a$, even if it is irrational.

Ordinary decision theory

We’re dealing with a rational actor, presumably you, an action space $A$, and a set of random utilities $\{U_a\}_{a \in A}$. I take the action set seriously – it is possible to choose any of the actions in the set. We know the distribution of $U_a$ for every $a \in A$. Let’s say that an act is rational if it maximizes the expected utility $E[U_a]$. We’ll assume that $\max_{a \in A} E[U_a]$ exists. (Defining a rational act is hard when a maximum does not exist, for instance when $A = (0, 1)$ and the act $a$ gives deterministic utility $a$, and even harder when the utility function is discontinuous at infinity. See “Satan’s apple”.)

You may want to pause for a second or two to figure out if you agree with this notion of a rational act. Is it the usual definition? Is it strange in any way?

Let’s take a look at a simple example. Suppose you’re given the option to flip either (A) a biased coin with probability $p_A$ of heads, or (B) a biased coin with probability $p_B$ of heads, where $p_A > p_B \ge 1/2$. In either case, you can bet $1 on either tails or heads, winning $2 if you’re right and losing your $1 if you’re wrong. Your action space is $\{(A, \text{heads}), (A, \text{tails}), (B, \text{heads}), (B, \text{tails})\}$.

The expected utilities are

$$E[U_{(A,\text{heads})}] = 3p_A - 1, \quad E[U_{(A,\text{tails})}] = 2 - 3p_A, \quad E[U_{(B,\text{heads})}] = 3p_B - 1, \quad E[U_{(B,\text{tails})}] = 2 - 3p_B.$$

Unsurprisingly, choosing coin A and heads is the rational act. So far, so good.
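To make this concrete, here is a minimal sketch of “rational act = argmax of expected utility” applied to the coin example. The function names are mine, and the numbers 0.6 and 0.55 are illustrative values for $p_A$ and $p_B$, not part of the problem statement.

```python
# Ordinary decision theory in one line: pick the act maximizing E[U_a].
# Coin example: bet $1 on heads or tails; win $2 if right, lose the $1 if wrong.

p_A, p_B = 0.6, 0.55  # illustrative biases satisfying p_A > p_B >= 1/2

def expected_utility(p_heads: float, bet: str) -> float:
    p_win = p_heads if bet == "heads" else 1 - p_heads
    return 2 * p_win - 1 * (1 - p_win)

actions = {(coin, bet): expected_utility(p, bet)
           for coin, p in [("A", p_A), ("B", p_B)]
           for bet in ("heads", "tails")}

rational_act = max(actions, key=actions.get)
print(actions)       # expected utilities of all four acts
print(rational_act)  # ('A', 'heads')
```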

Ordinary decision theory wants you to choose between different random variables $U_a$, not just between different acts $a$ paired with a fixed source of randomness $X$. Usually, you would be restricted to acting within the context of a single random variable, as in the von Neumann–Morgenstern utility theorem, where we maximize $E[u(a, X)]$ over $a$. This distinction usually doesn’t matter mathematically, as it’s trivial to transform an $E[u(a, X)]$ problem into an $E[U_a]$ problem. It does matter for interpretation, though!
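Concretely, the transformation is a single definition: given a utility function $u$ and a single source of randomness $X$, set

$$U_a := u(a, X) \quad \text{for every } a \in A, \qquad \text{so that} \qquad E[U_a] = E[u(a, X)],$$

and maximizing the left-hand side over $a$ is the same problem as maximizing the right-hand side.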

Now take note of the following:

  1. The action $a$ isn’t a random variable, hence evidential decision theory cannot be used. At least not without adding more information to the problem. Even if you did decide to add more evidential information to the problem, it wouldn’t change the rational act, as it’s already defined without it. Adding extraneous information about the population distribution turns the decision problem into an âge du capitaine problem – one stuffed with data that is irrelevant to the question asked.

  2. There’s no causal structure inherent in a utility maximization problem. There’s no way to use statistical counterfactuals, do-calculus, possible world semantics, or whatever your favorite description of causal decision theory uses. Hence causal decision theory cannot be used either. Again, you might add causal information to the problem, but the rational act is already defined in terms of the information we have. Adding causal information won’t help you be rational.

  3. Ordinary decision theory is timeless. As is the case with causality, there is no notion of time embedded in the decision problem $\max_{a \in A} E[U_a]$. (Unless you’re dealing with a dynamic decision problem.)

Not every problem can be formulated as an ordinary decision problem, either because it is inherently game-theoretic (counterfactual mugging, Psy-kosh’s non-anthropic problem) or because it encodes more than a single ordinary decision problem (opaque Newcomb).

Newcomb’s problem and incompatibility between descriptions

Consider the following decision problem. You have two acts, $a_1$ and $a_2$. The utilities are deterministic and equal to $1,000,000 and $1,000, respectively. Which act do you choose? Using the definition of a rational act above, you choose $a_1$. This is utterly uncontroversial, isn’t it? It turns out it isn’t, as adding mathematically irrelevant information to the problem makes the majority of philosophers choose $a_2$!

Now, let’s add Newcomb’s description to the problem, so that $a_1$ corresponds to one-boxing and $a_2$ to two-boxing. The mathematical formulation of Newcomb’s problem coincides with the setup above: “You have two acts, $a_1$ and $a_2$. The utilities are deterministic, and are equal to $1,000,000 and $1,000.” This information was sufficient for $a_1$ to be rational – adding more information can’t change that.

Newcomb’s problem feels like a significant problem because it involves a clash between intuitions: on the one hand, you wish to reason causally, which leads to the incorrect application of dominance analysis. On the other hand, you want to do the rational act.

I can see two resolutions to this clash:

  1. You admit that the mathematical description of Newcomb’s problem does not match its informal causal structure, hence dominance analysis (or any other kind of causal reasoning) cannot be applied to solve it.

  2. You accept that an agent can be predicted perfectly.

My impression is that resolution (2) is the favored one on LessWrong. While that’s an OK resolution, I dislike it. It (informally!) implies that you cannot choose just any action from your action set. As I said, I take action sets seriously (and so should you). The resolution also doesn’t generalize to situations with proper randomness involved: Omega cannot predict your choice if you have access to a source of randomness that he doesn’t, yet it’s mathematically 100% OK to formulate Newcomb’s problem with mixed strategies instead of pure ones. Resolution (1) is better. No one is claiming that Newcomb’s problem is possible. It’s not possible if there’s a true source of randomness that you can observe but Omega can’t. The trick is to see through your intuitions, write the problem down, and solve it.

(Newcomb’s problem is sometimes used as a short morality tale, as in “LOL! Don’t you know the human brain is a machine and can be predicted?!?” Some people have not gotten that message, but I don’t think the confusion around the problem is primarily about the possibility of powerful predictors.)

Is causal decision theory useful? Yes, as a device for constructing decision-theoretic problems, but not as a solution method. Writing down small causal graphs helps you think more clearly, but can’t ever change the fundamental idea of choosing the act that maximizes expected utility. Causal decision theory is ordinary decision theory – but only if the causal description matches the mathematical description of the problem. In these cases, causal decision theory adds irrelevant, but not misleading, information.

The same cannot be said for evidential decision theory, whose added information is actively misleading (think about, e.g., the smoking lesion).

I insist on using the notation $E[U_a]$ instead of $E[u(a, X)]$, as it clarifies what we are dealing with. You could write down Newcomb’s problem using the notation $E[u(a, X)]$, with a complicated enough utility function $u$ and outcome variable $X$. For instance, we could let $X$ be a constant (it plays no real role) and define $u(a, X)$ to equal $1,000,000 if $a = a_1$ and $1,000 otherwise. That’s pretty easy, but notice that $X$ is not equal to the amount of money left in the boxes, which is the outcome we would expect. Now, take a look at a slightly modified Newcomb’s problem, where Omega predicts your action with 90% accuracy. Then

$$E[U_{a_1}] = 0.9 \cdot 1{,}000{,}000 + 0.1 \cdot 0 = 900{,}000, \qquad E[U_{a_2}] = 0.9 \cdot 1{,}000 + 0.1 \cdot 1{,}001{,}000 = 101{,}000.$$

To write down $E[u(a, X)]$, we can let $X$ be the random variable that equals 1 (Omega predicted correctly) with probability 0.9 and 0 with probability 0.1. Then we define

$$u(a_1, 1) = 1{,}000{,}000, \quad u(a_1, 0) = 0, \quad u(a_2, 1) = 1{,}000, \quad u(a_2, 0) = 1{,}001{,}000.$$

This formulation is unsatisfying, as $X$ isn’t the natural outcome (did Omega put money in the opaque box?). That’s why I advocate for $E[U_a]$.
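Here is a minimal sketch of the 90%-accuracy problem written both ways, using the usual Newcomb payoffs; the variable and function names are mine. The two formulations give the same expected utilities, but the $E[u(a, X)]$ version needs the extra bookkeeping variable $X$.

```python
# 90%-accurate Newcomb, written in both notations.
# E[U_a]: expected utility indexed directly by the act.
# E[u(a, X)]: X = 1 if Omega predicted correctly (prob. 0.9), X = 0 otherwise.

P_CORRECT = 0.9

# Indexed-utility formulation E[U_a].
EU = {
    "one-box": P_CORRECT * 1_000_000 + (1 - P_CORRECT) * 0,
    "two-box": P_CORRECT * 1_000 + (1 - P_CORRECT) * 1_001_000,
}

# u(a, X) formulation.
def u(a: str, x: int) -> int:
    if a == "one-box":
        return 1_000_000 if x == 1 else 0
    return 1_000 if x == 1 else 1_001_000

EuX = {a: P_CORRECT * u(a, 1) + (1 - P_CORRECT) * u(a, 0)
       for a in ("one-box", "two-box")}

print(EU)   # {'one-box': 900000.0, 'two-box': 101000.0}
print(EuX)  # same numbers, clumsier bookkeeping
```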

Counterfactual game theory

It’s a consequence of Jensen’s inequality that, for any random variable $X$,

$$E\Big[\max_{a \in A} E[U_a \mid X]\Big] \ge \max_{a \in A} E[U_a].$$

In other words, you cannot expect to lose expected utility by gaining information. A couple of decision-theoretic paradoxes ostensibly violate Jensen’s inequality. That is a big deal, since it implies that either (i) we are dealing with an inconsistent decision problem, or (ii) the problem cannot be formulated as an ordinary decision problem. After all, Jensen’s inequality is a theorem.
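Here is the one-line derivation, using only the tower property of conditional expectation and the convexity of the pointwise maximum:

$$\max_{a \in A} E[U_a] \;=\; \max_{a \in A} E\big[\,E[U_a \mid X]\,\big] \;\le\; E\Big[\max_{a \in A} E[U_a \mid X]\Big].$$

The left-hand side is the best you can do while ignoring $X$; the right-hand side is the expected value of optimizing after observing $X$.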

Recall counterfactual mugging:

Omega, a perfect predictor, flips a coin. If it comes up tails Omega asks you for $100. If it comes up heads, Omega pays you $10,000 if it predicts that you would have paid if it had come up tails.

Here’s an intuitive analysis of this problem. If you could force yourself to the strategy of paying when the coin comes up tails, then

$$E[U_{\text{pay}}] = \tfrac{1}{2} \cdot 10{,}000 + \tfrac{1}{2} \cdot (-100) = 4{,}950.$$

On the other hand, refusing to pay is the optimal action conditioned on the coin coming up tails, which yields a utility of 0. But using this strategy makes

$$E[U_{\text{refuse}}] = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot 0 = 0,$$

which appears to imply that you lose money, in expectation, by gaining information. Since Jensen’s inequality cannot actually be violated, counterfactual mugging isn’t an ordinary decision-theoretic problem. What is it then?
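A minimal sketch of the two computations above, assuming a fair coin and the stated payoffs of $10,000 and −$100:

```python
# Counterfactual mugging: ex-ante expected value of the two strategies.
# Omega pays 10,000 on heads only if it predicts you would pay 100 on tails.

def expected_value(pays_on_tails: bool) -> float:
    heads_payoff = 10_000 if pays_on_tails else 0  # Omega's perfect prediction
    tails_payoff = -100 if pays_on_tails else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(expected_value(True))   # 4950.0: committed to paying
print(expected_value(False))  # 0.0: refuse once you see tails
```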

The most natural (one-shot) solution to this problem uses game theory, letting you and your counterfactual self be the two players. The Nash equilibrium is “not pay”, so it does not optimize expected utility across both players (i.e., it’s not Pareto optimal), but that’s not exactly uncommon in game theory. Just as defection is rational in the Prisoner’s dilemma, I see no problem with not paying in counterfactual mugging; your counterfactual self isn’t you. (The counterfactual player does not take an action in counterfactual mugging, but he does in the counterfactual Prisoner’s dilemma.)

Psy-kosh’s non-anthropic problem can also be analyzed through a game-theoretic lens to explain why coordinating has the highest payoff and is thus rational, provided it is an option. There is simply no reason to expect game-theoretic equilibria to be Pareto optimal, so the game-theoretic setting should be avoided, by coordinating, whenever possible.

I haven’t seen any detailed analyses of Psy-kosh’s problem and similar “counterfactual” game-theoretic problems yet, and I don’t believe they are fully solved. But I do believe counterfactual game theory forms an interesting subset of unordinary decision problems. Developing a theory of counterfactual games could be worthwhile. Please tell me if you want to work on it (and like the approach of this post).

When the size of the action space affects the utility

Finally, we have paradoxical decision problems about precommitment. In transparent Omega, both boxes are transparent: the $1,000,000 box and the $1,000 box. Omega perfectly predicts your choice and puts the $1,000,000 in the big box only if he predicts one-boxing.

If you could remove the two-boxing strategy $a_2$ from the action set, then Omega would correctly predict one-boxing, and put $1,000,000 in the big box, so that $U_{a_1} = 1{,}000{,}000$. Since the action set $\{a_1\}$ contains only one act, the decision problem is trivial.

Now consider the case when the action space isn’t reduced, i.e., $A = \{a_1, a_2\}$. Since Omega predicts perfectly, he can’t possibly place $1,000,000 in the big box, as it would make it possible for you to two-box. (Mere possibility is enough. If you’re given a choice, you’re given a choice.) Thus we’re left with two separate decision problems:

  1. The small problem, with action set $\{a_1\}$: Omega fills the big box, and $U_{a_1} = 1{,}000{,}000$.

  2. The big problem, with action set $\{a_1, a_2\}$: the big box is empty, so $U_{a_1} = 0$ and $U_{a_2} = 1{,}000$, and the rational act is $a_2$.

You can combine both problems into one:

Viewcomb: The set-up is the same as Newcomb, only now you can look inside the opaque box before making your decision. However, if you wish to make your decision straight away, without first looking in the box, you must pay a small fee. As before, the predictor predicted only whether you will one-box or two-box.

Here you’re asked to choose between the small decision problem, yielding a guaranteed $1,000,000 minus the fee, and the big decision problem, yielding a guaranteed $1,000. In conclusion, you should make your decision straight away if the fee is smaller than $999,000.
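A minimal sketch of the Viewcomb comparison, using the $1,000,000 and $1,000 box amounts and the analysis above (if you look first, Omega cannot have filled the big box); the function names are mine.

```python
# Viewcomb: pay a fee and decide straight away (ordinary Newcomb, one-box),
# or look inside the opaque box first (transparent-box analysis: box is empty).

BIG, SMALL = 1_000_000, 1_000

def decide_straight_away(fee: float) -> float:
    return BIG - fee   # rational act is one-boxing; Omega has filled the box

def look_first() -> float:
    return SMALL       # the big box is empty; two-boxing nets the small box

def best_choice(fee: float) -> str:
    return "pay the fee" if decide_straight_away(fee) > look_first() else "look first"

print(best_choice(100))      # pay the fee
print(best_choice(999_500))  # look first (the fee exceeds 999,000)
```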

A standard precommitment strategy won’t work in Viewcomb. If you write a contract pledging to pay a penalty when two-boxing, it’s always rational to one-box. But the (irrational) possibility of two-boxing is still there, and Omega, being a perfect predictor, cannot put the $1,000,000 in the box while that possibility remains. Thus you’re stuck in the big decision problem.