# Newcomb’s Problem

Newcomb’s Problem is a thought experiment in decision theory exploring the problems that arise when other agents in the environment can predict your actions.

## The Problem

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque and contains either a million dollars or nothing.

You can take both boxes, or take only box B.

The twist is that Omega has put a million dollars in box B if and only if it has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Omega drops the two boxes on the ground in front of you and flies off to its next game. By the time you make your choice, box B is already empty or already full.

Do you take both boxes, or only box B?

One line of reasoning about the problem says that because Omega has already left, the boxes are set and your choice cannot change their contents. Look at the payoff matrix: whatever prediction Omega has already made, you get \$1000 more for taking both boxes. This makes taking both boxes (“two-boxing”) a dominant strategy and therefore, the argument goes, the correct choice. Agents who reason this way do not make very much money playing this game. The dominance argument ignores the connection between the agent and Omega’s prediction: two-boxing yields \$1000 more than one-boxing only if Omega’s prediction is held fixed across both cases, while the problem states that Omega is extremely accurate in its predictions. For an agent Omega predicts correctly, switching from one-boxing to two-boxing does not gain \$1000; it loses \$999,000.
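
Concretely, the payoff matrix referenced above is (a standard presentation; the amounts come from the problem statement):

| Your choice | Omega predicted one-boxing | Omega predicted two-boxing |
| --- | --- | --- |
| Take only box B | \$1,000,000 | \$0 |
| Take both boxes | \$1,001,000 | \$1,000 |

Within each column, two-boxing is \$1000 better; but an accurate Omega confines the outcome to the diagonal, where one-boxing yields \$1,000,000 and two-boxing only \$1,000.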

Because the agent’s decision in this problem can’t causally affect Omega’s prediction (which happened in the past), Causal Decision Theory two-boxes. One-boxing is correlated with getting a million dollars, whereas two-boxing is correlated with getting only \$1000; therefore, Evidential Decision Theory one-boxes. Functional Decision Theory (FDT) also one-boxes, but for a completely different reason: FDT reasons that Omega must have had a model of the agent’s decision procedure in order to make the prediction. Therefore, your decision procedure is run not only by you, but also (in the past) by Omega; whatever you decide, Omega’s model must have decided the same. Either both you and Omega’s model two-box, or both you and Omega’s model one-box; of these two options, the latter is preferable, so FDT one-boxes.
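
The FDT reasoning can be made concrete with a toy simulation (a minimal Python sketch; the function names and the device of modeling Omega as literally re-running the agent’s procedure are illustrative assumptions, not a standard implementation):

```python
def one_boxer():
    return ("B",)

def two_boxer():
    return ("A", "B")

def play(decision_procedure):
    """One round of Newcomb's game against a perfect predictor."""
    # Omega predicts by running a model of the agent's decision procedure.
    prediction = decision_procedure()
    box_a = 1_000
    box_b = 1_000_000 if prediction == ("B",) else 0
    # The agent now chooses. Since the agent runs the same procedure that
    # Omega modeled, the choice necessarily matches the prediction.
    choice = decision_procedure()
    return sum({"A": box_a, "B": box_b}[box] for box in choice)

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

Because `decision_procedure` is consulted in both places, there is no way for the agent’s choice and Omega’s prediction to diverge: choosing a procedure amounts to choosing a diagonal cell of the payoff matrix above.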

The general class of decision problems that involve other agents predicting your actions is known as Newcomblike Problems.

## Irrelevance of Omega’s Physical Impossibility

Sometimes people dismiss Newcomb’s problem because a being like Omega is physically impossible. In fact, the problem does not depend on Omega being possible: similar issues arise if we imagine a skilled human psychologist who can predict other people’s actions with 65% accuracy.
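
For instance (a quick evidential expected-value sketch, assuming the standard payoffs above and that the 65% accuracy applies symmetrically to one-boxers and two-boxers):

```python
p = 0.65  # the psychologist's predictive accuracy

# One-boxing: $1,000,000 when correctly predicted, $0 otherwise.
ev_one_box = p * 1_000_000

# Two-boxing: $1,000 when correctly predicted, $1,001,000 when the
# predictor mistakenly filled box B anyway.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(ev_one_box)  # ≈ 650,000
print(ev_two_box)  # ≈ 351,000
```

On this calculation, one-boxing comes out ahead whenever the predictor’s accuracy exceeds 1,001,000 / 2,000,000 ≈ 50.05%, so even a modestly reliable human predictor is enough to generate the dilemma.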

# Newcomb’s Problem and Regret of Rationality

31 Jan 2008 19:36 UTC
144 points

# Newcomblike problems are the norm

24 Sep 2014 18:41 UTC
83 points

# The Ultimate Newcomb’s Problem

10 Sep 2013 2:03 UTC
46 points

# Newcomb’s problem happened to me

26 Mar 2010 18:31 UTC
56 points

# Newcomb’s Problem vs. One-Shot Prisoner’s Dilemma

7 Apr 2009 5:32 UTC
14 points

# Two-boxing, smoking and chewing gum in Medical Newcomb problems

29 Jun 2015 10:35 UTC
29 points

# Newcomb’s Problem as an Iterated Prisoner’s Dilemma

5 Jan 2022 22:48 UTC
13 points

# Newcomb Variant

29 Aug 2023 7:02 UTC
25 points

# You May Already Be A Sinner

9 Mar 2009 23:18 UTC
50 points

# Counterfactual Mugging

19 Mar 2009 6:08 UTC
80 points

# AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy

10 Mar 2021 4:30 UTC
35 points

# Why 1-boxing doesn’t imply backwards causation

25 Mar 2021 2:32 UTC
7 points

# Meta Decision Theory and Newcomb’s Problem

5 Mar 2013 1:29 UTC
10 points

# FDT defects in a realistic Twin Prisoners’ Dilemma

15 Sep 2022 8:55 UTC
37 points

# “Rational Agents Win”

23 Sep 2021 7:59 UTC
8 points

# Null-boxing Newcomb’s Problem

13 Jul 2020 16:32 UTC
33 points

# Rejected Early Drafts of Newcomb’s Problem

6 Sep 2022 19:04 UTC
112 points

# A full explanation to Newcomb’s paradox.

12 Oct 2020 16:48 UTC
−6 points

# Thoughts from a Two Boxer

23 Aug 2019 0:24 UTC
18 points

# Newcomb’s Problem standard positions

6 Apr 2009 17:05 UTC
7 points

# [Question] Is Agent Simulates Predictor a “fair” problem?

24 Jan 2019 13:18 UTC
22 points

# Rationalists lose when others choose

16 Jun 2009 17:50 UTC
−8 points

# The dumbest kid in the world (joke)

6 Jun 2021 2:57 UTC
23 points

# Conditional offers and low priors: the problem with 1-boxing Newcomb’s dilemma

18 Jun 2021 21:50 UTC
2 points

# Should VS Would and Newcomb’s Paradox

3 Jul 2021 23:45 UTC
5 points

# Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb?

19 Jun 2013 1:55 UTC
27 points

# Omega can be replaced by amnesia

26 Jan 2011 12:31 UTC
23 points

# Real-world Newcomb-like Problems

25 Mar 2011 20:44 UTC
25 points

# Nate Soares on the Ultimate Newcomb’s Problem

31 Oct 2021 19:42 UTC
57 points

# Anti-Parfit’s Hitchhiker

4 Feb 2022 23:37 UTC
2 points

# Critiquing Scasper’s Definition of Subjunctive Dependence

10 Jan 2022 16:22 UTC
6 points

# Newcomb’s Lottery Problem

27 Jan 2022 16:28 UTC
1 point

# [Question] Newcomb’s Grandfather

28 Jan 2022 8:56 UTC
5 points

# The Calculus of Newcomb’s Problem

1 Apr 2022 14:41 UTC
3 points

# [Question] What does Functional Decision Theory say to do in imperfect Newcomb situations?

7 May 2022 22:26 UTC
4 points

# [Question] Are ya winning, son?

9 Aug 2022 0:06 UTC
14 points

# Breaking Newcomb’s Problem with Non-Halting states

4 Sep 2022 4:01 UTC
18 points

# Two New Newcomb Variants

14 Nov 2022 14:01 UTC
26 points

# Why one-box?

30 Jun 2013 2:38 UTC
11 points

# Some Variants of Sleeping Beauty

1 Mar 2023 16:51 UTC
34 points

# Newcomb’s paradox complete solution.

15 Mar 2023 17:56 UTC
−12 points

# Extracting Money from Causal Decision Theorists

28 Jan 2021 17:58 UTC
26 points
(doi.org)

# The law of effect, randomization and Newcomb’s problem

15 Feb 2018 15:31 UTC
7 points
(casparoesterheld.com)

# A survey of polls on Newcomb’s problem

20 Sep 2017 16:50 UTC
3 points
(casparoesterheld.com)

# The Binding of Isaac & Transparent Newcomb’s Problem

22 Feb 2024 18:56 UTC
−11 points

# Repeated Play of Imperfect Newcomb’s Paradox in Infra-Bayesian Physicalism

3 Apr 2023 10:06 UTC
2 points

# A few misconceptions surrounding Roko’s basilisk

5 Oct 2015 21:23 UTC
90 points

# Self-confirming predictions can be arbitrarily bad

3 May 2019 11:34 UTC
49 points

# You’re in Newcomb’s Box

5 Feb 2011 20:46 UTC
59 points

# A model of UDT with a halting oracle

18 Dec 2011 14:18 UTC
68 points

# Parfit’s Escape (Filk)

29 Mar 2019 2:31 UTC
39 points

# Operationalizing Newcomb’s Problem

11 Nov 2019 22:52 UTC
34 points

# Exploiting Newcomb’s Game Show

25 May 2023 4:01 UTC
8 points

# The Prediction Problem: A Variant on Newcomb’s

4 Jul 2018 7:40 UTC
25 points

# Counterfactuals: Smoking Lesion vs. Newcomb’s

8 Dec 2019 21:02 UTC
9 points

# Extremely Counterfactual Mugging or: the gist of Transparent Newcomb

9 Feb 2011 15:20 UTC
11 points

# Example decision theory problem: “Agent simulates predictor”

19 May 2011 15:16 UTC
45 points

# Open-minded updatelessness

10 Jul 2023 11:08 UTC
65 points