Newcomb’s Problem

Newcomb’s Problem is a thought experiment in decision theory that explores the problems posed by other agents in the environment who can predict your actions.

The Problem

From Newcomb’s Problem and Regret of Rationality:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

One line of reasoning about the problem says that because Omega has already left, the boxes are set and you cannot change their contents. And if you look at the payoff matrix, you will see that whatever prediction Omega has already made, you get $1,000 more for taking both boxes. This makes two-boxing a dominant strategy and therefore, the argument goes, the correct choice. Agents who reason this way do not make very much money playing this game.
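
To make the contrast concrete, here is a minimal illustrative sketch (not part of the original post): it computes the expected winnings of a one-boxer and a two-boxer against a predictor of a given accuracy, using the dollar amounts from the problem statement; the accuracy values tried below are assumptions for illustration.

```python
# Illustrative sketch (not from the original page): expected payout in
# Newcomb's Problem for a one-boxer vs. a two-boxer, as a function of the
# predictor's accuracy. Dollar amounts are from the problem statement;
# the accuracy values tried below are assumptions for illustration.

BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, filled iff Omega predicts one-boxing

def expected_payout(one_box: bool, accuracy: float) -> float:
    """Expected winnings, averaging over whether the prediction was correct."""
    if one_box:
        # Box B is full exactly when Omega correctly predicted one-boxing.
        return accuracy * BOX_B
    # A two-boxer always gets box A, plus box B only if Omega guessed wrong.
    return BOX_A + (1 - accuracy) * BOX_B

if __name__ == "__main__":
    for acc in (1.0, 0.99, 0.65):
        print(f"accuracy {acc:.2f}: one-box ${expected_payout(True, acc):,.0f}, "
              f"two-box ${expected_payout(False, acc):,.0f}")
```

Even at 99% accuracy the one-boxer expects $990,000 while the two-boxer expects only $11,000, which is the sense in which the dominance reasoners “do not make very much money.”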

The general class of decision problems that involve other agents predicting your actions is called Newcomblike Problems.

Irrelevance of Omega’s Physical Impossibility

Sometimes people dismiss Newcomb’s Problem because a being like Omega is physically impossible. However, the problem does not actually depend on Omega being possible in order to be relevant. Similar issues arise if we imagine a skilled human psychologist who can predict other people’s actions with 65% accuracy.
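
To see why even a 65%-accurate predictor is enough to recreate the puzzle, consider an illustrative expected-value calculation (using the dollar amounts from the problem statement): a one-boxer wins the million exactly when the prediction is correct, for an expected 0.65 × $1,000,000 = $650,000, while a two-boxer keeps the $1,000 but wins the million only when the prediction is wrong, for an expected $1,000 + 0.35 × $1,000,000 = $351,000.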

Notable Posts

Newcomb’s Problem and Regret of Rationality (Eliezer Yudkowsky, 31 Jan 2008; 118 points, 609 comments, 10 min read)
Newcomblike problems are the norm (So8res, 24 Sep 2014; 74 points, 111 comments, 7 min read)
Confusion about Newcomb is confusion about counterfactuals (AnnaSalamon, 25 Aug 2009; 53 points, 42 comments, 2 min read)
Null-boxing Newcomb’s Problem (Yitz, 13 Jul 2020; 28 points, 10 comments, 4 min read)
You May Already Be A Sinner (Scott Alexander, 9 Mar 2009; 50 points, 37 comments, 3 min read)
Counterfactual Mugging (Vladimir_Nesov, 19 Mar 2009; 69 points, 296 comments, 2 min read)
AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy (DanielFilan, 10 Mar 2021; 26 points, 12 comments, 35 min read)
Why 1-boxing doesn’t imply backwards causation (Chris_Leong, 25 Mar 2021; 7 points, 13 comments, 4 min read)
A few misconceptions surrounding Roko’s basilisk (Rob Bensinger, 5 Oct 2015; 82 points, 134 comments, 5 min read)
Self-confirming predictions can be arbitrarily bad (Stuart_Armstrong, 3 May 2019; 43 points, 11 comments, 5 min read)
You’re in Newcomb’s Box (HonoreDB, 5 Feb 2011; 59 points, 176 comments, 4 min read)
Newcomb’s problem happened to me (Academian, 26 Mar 2010; 51 points, 99 comments, 3 min read)
A model of UDT with a halting oracle (cousin_it, 18 Dec 2011; 66 points, 102 comments, 2 min read)
Parfit’s Escape (Filk) (G Gordon Worley III, 29 Mar 2019; 37 points, 0 comments, 1 min read)
Newcomb’s Problem vs. One-Shot Prisoner’s Dilemma (Wei_Dai, 7 Apr 2009; 14 points, 16 comments, 1 min read)
Operationalizing Newcomb’s Problem (ErickBall, 11 Nov 2019; 34 points, 23 comments, 1 min read)
The Prediction Problem: A Variant on Newcomb’s (Chris_Leong, 4 Jul 2018; 25 points, 11 comments, 9 min read)
Counterfactuals: Smoking Lesion vs. Newcomb’s (Chris_Leong, 8 Dec 2019; 8 points, 24 comments, 3 min read)
Extremely Counterfactual Mugging or: the gist of Transparent Newcomb (Bongo, 9 Feb 2011; 11 points, 79 comments, 1 min read)
Example decision theory problem: “Agent simulates predictor” (cousin_it, 19 May 2011; 37 points, 76 comments, 2 min read)
The Ultimate Newcomb’s Problem (Eliezer Yudkowsky, 10 Sep 2013; 30 points, 116 comments, 1 min read)
A full explanation to Newcomb’s paradox. (solomon alon, 12 Oct 2020; −6 points, 12 comments, 3 min read)
Thoughts from a Two Boxer (jaek, 23 Aug 2019; 17 points, 11 comments, 5 min read)
Two-boxing, smoking and chewing gum in Medical Newcomb problems (Caspar42, 29 Jun 2015; 23 points, 93 comments, 1 min read)
Newcomb’s Problem standard positions (Eliezer Yudkowsky, 6 Apr 2009; 7 points, 22 comments, 1 min read)
[Question] Is Agent Simulates Predictor a “fair” problem? (Chris_Leong, 24 Jan 2019; 22 points, 19 comments, 1 min read)
Rationalists lose when others choose (PhilGoetz, 16 Jun 2009; −9 points, 58 comments, 5 min read)
The dumbest kid in the world (joke) (CronoDAS, 6 Jun 2021; 20 points, 10 comments, 1 min read)
Conditional offers and low priors: the problem with 1-boxing Newcomb’s dilemma (Andrew Vlahos, 18 Jun 2021; 2 points, 4 comments, 1 min read)
Should VS Would and Newcomb’s Paradox (dadadarren, 3 Jul 2021; 2 points, 36 comments, 2 min read)