
Pascal’s Mugging


Pascal’s mugging refers to a thought experiment in decision theory, a finite analogue of Pascal’s wager.

Suppose someone comes to me and says, “Give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.”

— Eliezer Yudkowsky, Pascal’s Mugging: Tiny Probabilities of Vast Utilities

See also: Decision theory, Counterfactual Mugging, Shut up and multiply, Expected Utility, Utilitarianism, Scope Insensitivity

Unpacking the theory behind Pascal’s Mugging:

A rational agent chooses the action whose outcomes, weighted by their probabilities, have the greatest utility; in other words, the action with the greatest expected utility. If an agent’s utilities over outcomes can grow much faster than the probabilities of those outcomes diminish, then its choices will be dominated by tiny probabilities of hugely important outcomes: speculations about low-probability, high-stakes scenarios will come to dominate its moral decision making.
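To see how the arithmetic plays out, here is a minimal sketch with made-up numbers (the probability, the stand-in for 3^^^^3, and the utility scale are all illustrative assumptions, not values from any source):

```python
# Illustrative numbers only: 3^^^^3 itself is far too large to represent,
# so a large stand-in is used.
p_threat_real = 1e-50          # hypothetical tiny probability the mugger is honest
lives_at_stake = 10.0 ** 100   # hypothetical stand-in for 3^^^^3
cost_of_paying = 5.0           # five dollars, in utility units for simplicity

# Expected utility of paying versus refusing.
eu_pay = p_threat_real * lives_at_stake - cost_of_paying   # ~1e50
eu_refuse = 0.0

print(eu_pay > eu_refuse)  # True: the vast stake swamps the tiny probability
```

However small the probability is made, a sufficiently large stake forces the comparison the same way, which is exactly the dominance the paragraph above describes.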

A common method an agent could use to assign prior probabilities to outcomes is Solomonoff induction, which assigns a prior that decreases exponentially with the length of the outcome’s shortest description (roughly 2^-K for a description of K bits). Some outcomes have a very short description yet correspond to events of enormous utility (e.g., saving 3^^^^3 lives), so they receive a non-negligible prior probability alongside a huge utility. Such an agent would be compelled to take actions aimed at far-fetched outcomes whose probabilities are low but non-negligible and whose returns are extremely high.
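As a rough illustration (a sketch only, using description length in characters as a crude stand-in for the program lengths Solomonoff induction actually uses):

```python
# Crude stand-in for a Solomonoff-style prior: probability falls off
# exponentially with description length (here, 8 bits per character).
def crude_prior(description: str) -> float:
    return 2.0 ** (-8 * len(description))

# "3^^^^3" takes only six characters to write down, so its prior is small
# (about 2^-48, i.e. ~3.6e-15) but nowhere near small enough to offset a
# utility as large as the number it denotes.
print(crude_prior("3^^^^3"))
```

The prior shrinks exponentially in the length of the description, but the utility named by a short description can grow far faster than that, which is the mismatch the mugger exploits.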

This is seen as an unreasonable result. Intuitively, one is not inclined to acquiesce to the mugger’s demands—or even pay all that much attention one way or another—but what kind of prior does this imply?

Robin Hanson has suggested penalizing the prior probability of hypotheses which argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us. Since only one in 3^^^^3 people can be in a unique position to ordain the existence of at least 3^^^^3 other people who can’t have a symmetrical effect on this one person, the prior probability would be penalized by a factor on the same order as the utility.
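A toy version of this leverage penalty (my own sketch of the idea, not Hanson’s formalization) divides the prior by the number of people the hypothesis claims you can affect, so the penalty cancels the growth in the payoff:

```python
# Toy leverage penalty: if a hypothesis says you can determine the fate of
# n_affected people, only ~1/n_affected people can occupy such a position,
# so the prior is divided by n_affected.
def leverage_adjusted_eu(base_prior: float, n_affected: float) -> float:
    penalized_prior = base_prior / n_affected
    return penalized_prior * n_affected  # the factors of n_affected cancel

# The expected utility stays bounded by the base prior no matter how large
# the claimed stakes get.
print(leverage_adjusted_eu(0.01, 1e100))  # 0.01
print(leverage_adjusted_eu(0.01, 1e300))  # still 0.01
```

Because the penalty scales at the same rate as the claimed utility, arbitrarily inflating the stakes no longer buys the mugger any expected-utility leverage.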

Peter de Blanc has proven [1] that if an agent assigns a finite probability to all computable hypotheses and assigns unboundedly large finite utilities over certain environment inputs, then the expected utility of any outcome is undefined. Peter de Blanc’s paper, and the Pascal’s Mugging argument, are sometimes misinterpreted as showing that any agent with an unbounded finite utility function over outcomes is not consistent, but this has yet to be demonstrated. The unreasonable result can also be seen as an argument against the use of Solomonoff induction for weighting prior probabilities.
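The flavor of the divergence can be seen in a toy series (an illustration of the phenomenon only, not de Blanc’s actual construction): if priors shrink like 2^-k while utilities grow like 3^k, each term of the expected-utility sum is (3/2)^k, so the sum diverges:

```python
# Toy divergence: hypothesis k has prior 2^-k and utility 3^k, so each term
# contributes (3/2)^k and the partial sums grow without bound.
def partial_expected_utility(n_terms: int) -> float:
    return sum((2.0 ** -k) * (3.0 ** k) for k in range(1, n_terms + 1))

for n in (10, 20, 40):
    print(n, partial_expected_utility(n))  # grows without bound as n increases
```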

If an outcome with infinite utility is presented, then it does not matter how small its probability is: any action that leads to that outcome dominates the agent’s behavior. This infinite case was posed by the 17th-century philosopher Blaise Pascal and is known as Pascal’s wager. Many other anomalies arise when dealing with infinities in ethics.

References

  1. Peter de Blanc (2007). “Convergence of Expected Utilities with Algorithmic Probability Distributions.”

  2. Nick Bostrom (2009). “Pascal’s Mugging”. Analysis 69 (3): 443–445.

Notable Posts

Pascal’s Mugging: Tiny Probabilities of Vast Utilities (Eliezer Yudkowsky, 19 Oct 2007)
[Question] “Fanatical” Longtermists: Why is Pascal’s Wager wrong? (Yitz, 27 Jul 2022)
Pascal’s Muggle: Infinitesimal Priors and Strong Evidence (Eliezer Yudkowsky, 8 May 2013)
[Question] Has there been any work on attempting to use Pascal’s Mugging to make an AGI behave? (Chris_Leong, 15 Jun 2022)
Dissolve: The Petty Crimes of Blaise Pascal (JohnBuridan, 12 Aug 2022)
Desiderata for an Adversarial Prior (shminux, 9 Nov 2022)
Against the Linear Utility Hypothesis and the Leverage Penalty (AlexMennen, 14 Dec 2017)
The Lifespan Dilemma (Eliezer Yudkowsky, 10 Sep 2009)
More on the Linear Utility Hypothesis and the Leverage Prior (AlexMennen, 26 Feb 2018)
Pascal’s Muggle (short version) (Eliezer Yudkowsky, 5 May 2013)
Observed Pascal’s Mugging ([deleted], 28 Jun 2011)
Probabilities Small Enough To Ignore: An attack on Pascal’s Mugging (Kaj_Sotala, 16 Sep 2015)
Pascal’s mugging in reward learning (Stuart_Armstrong, 5 Nov 2017)
Expected utility, unlosing agents, and Pascal’s mugging (Stuart_Armstrong, 28 Jul 2014)
Tactics against Pascal’s Mugging (ArisKatsaris, 25 Apr 2013)
Consider Reconsidering Pascal’s Mugging (Rafael Harth, 3 Jan 2018)
A Thought on Pascal’s Mugging (komponisto, 10 Dec 2010)
Pascal’s Mugging for bounded utility functions (Benya, 6 Dec 2012)
Comments on Pascal’s Mugging ([deleted], 3 May 2012)
No, I won’t go there, it feels like you’re trying to Pascal-mug me (Rupert, 11 Jul 2018)
Pascal’s Mugging—Penalizing the prior probability? (XiXiDu, 17 May 2011)
Pascal’s Gift (Bongo, 25 Dec 2010)
What do you mean by Pascal’s mugging? (XiXiDu, 20 Nov 2014)
An investment analogy for Pascal’s Mugging ([deleted], 9 Dec 2014)
Strongmanning Pascal’s Mugging (Pentashagon, 20 Feb 2013)
Pascal’s Muggle Pays (Zvi, 16 Dec 2017, thezvi.wordpress.com)
Model Uncertainty, Pascalian Reasoning and Utilitarianism (multifoliaterose, 14 Jun 2011)
[Question] How do bounded utility functions work if you are uncertain how close to the bound your utility is? (Ghatanathoah, 6 Oct 2021)
Utility In Faith? (Matt Goldwater, 9 Oct 2022)
[Question] Has Pascal’s Mugging problem been completely solved yet? (EniScien, 6 Nov 2022)
[Question] Is acausal extortion possible? (sisyphus, 11 Nov 2022)
SBF, Pascal’s Mugging, and a Proposed Solution (Cole Killian, 18 Nov 2022, colekillian.com)
[Question] What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it? (RationalSieve, 3 Feb 2023)
Could Roko’s basilisk acausally bargain with a paperclip maximizer? (Christopher King, 13 Mar 2023)
AGI is uncontrollable, alignment is impossible (Donatas Lučiūnas, 19 Mar 2023)