Normative uncertainty in Newcomb’s problem

Here is Wikipedia’s description of Newcomb’s problem:

The player of the game is presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). The player is permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.

Nozick also stipulates that if the Predictor predicts that the player will choose randomly, then box B will contain nothing.

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined. That is, box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes. Before the game begins, the player is aware of all the rules of the game, including the two possible contents of box B, the fact that its contents are based on the Predictor’s prediction, and knowledge of the Predictor’s infallibility. The only information withheld from the player is what prediction the Predictor made, and thus what the contents of box B are.

Most of this setup is a fairly general thought experiment for comparing decision theories, but one element stands out as particularly arbitrary: the ratio between the amount the Predictor may place in box B and the amount visible in box A. In the formulation Nozick conveys, this ratio is 1000:1, but nothing hinges on that particular number: most decision theories that recommend one-boxing do so as long as the ratio is greater than 1:1.

The 1000:1 ratio strengthens the intuition for one-boxing, which is helpful for illustrating why one might find it plausible. However, given uncertainty about the correct normative decision theory, the decision to one-box can diverge from one's best guess at the best decision theory: e.g., if I think there is only a 1 in 10 chance that one-boxing decision theories are correct, I may still one-box on Newcomb's problem at a potential payoff ratio of 1000:1, but not if the ratio is only 2:1.
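
To make the stakes-weighted arithmetic concrete, here is a minimal sketch. It rests on a toy model of my own, not anything established above: with credence p some one-boxing theory is correct and the Predictor's prediction tracks your actual choice, while with credence 1 − p some two-boxing theory is correct and two-boxing simply gains the amount in box A. The function name favors_one_box is hypothetical.

```python
def favors_one_box(p: float, ratio: float) -> bool:
    """True if credence-weighted stakes favor one-boxing.

    Toy model (an assumption, not from the post): with credence p a
    one-boxing theory is right and the Predictor tracks your choice,
    so one-boxing gains (ratio - 1) * a relative to two-boxing; with
    credence 1 - p a two-boxing theory is right and two-boxing gains
    a regardless. Dividing out a, one-boxing wins iff
    p * (ratio - 1) > 1 - p, i.e. iff p * ratio > 1.
    """
    return p * ratio > 1

# A 1-in-10 credence in one-boxing theories, as in the example above:
print(favors_one_box(0.1, 1000))  # True  -> one-box at 1000:1
print(favors_one_box(0.1, 2))     # False -> two-box at 2:1
```

On this toy rule, the credence needed to justify one-boxing shrinks as the payoff ratio grows, which is exactly how the choice can diverge from one's best-guess decision theory.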

So the question, “would you one-box on Newcomb’s problem, given your current state of uncertainty?” is not quite the same as “would the best decision theory recommend one-boxing?” This occurred to me in the context of this distribution of answers among target philosophy faculty from the PhilPapers Survey:

Newcomb’s problem: one box or two boxes?

Accept: two boxes 13/31 (41.9%)
Accept: one box 7/31 (22.6%)
Lean toward: two boxes 6/31 (19.4%)
Agnostic/undecided 2/31 (6.5%)
Other 2/31 (6.5%)
Lean toward: one box 1/31 (3.2%)

If all of these answers are about the correct decision theory (rather than about what to do in the actual scenario), then two-boxing is the clear leader, with roughly a 2.4:1 (19:8) ratio of support (accept or lean) in its favor. But that skew falls far short of what would be needed to justify 1000:1 confidence in two-boxing on Newcomb's problem.
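
Reusing the favors_one_box sketch from above, and treating the accept-or-lean split as a crude credence (an illustrative move on my part; the survey licenses no such reading), the point can be checked directly:

```python
# 8 of the 27 respondents who took a side (accept or lean) favor
# one-boxing. Treating that fraction as a credence is an assumption
# made purely for illustration.
p_survey = 8 / 27                      # ~0.30
print(favors_one_box(p_survey, 1000))  # True  -> 1000:1 swamps the skew toward two-boxing
print(favors_one_box(p_survey, 2))     # False -> at 2:1 the skew prevails
```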

Here are the Less Wrong survey answers for 2012:

NEWCOMB’S PROBLEM
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don’t understand: 86, 7.3%
No answer: 240, 20.3%

Here one-boxing is overwhelmingly dominant. I’d like to sort out how much of this is disagreement about theory, and how much reflects the extreme payoffs in the standard Newcomb formulation. So, I’ll be putting a poll in the comments below.