Pascal’s Mugging and One-shot Problems

I’ve had some thoughts on Pascal’s Mugging which might be worth sharing. I’m assuming some familiarity with Pascal’s Mugging in this post.


Before we really get started, let’s transform Pascal’s Mugging into a problem that is easier to reason about but still gets at the core idea.

First, forget “utility” and forget money: we’re maximising paperclips. I know it’s kind of silly, but using money or utility really tends to muddy up thinking. I was surprised how much easier it was to reason about the problem when I switched to paperclips[0].

Now, here’s my version of the problem. Suppose there’s a casino, in which there is a googolplex-sided die (10^(10^100) sides). You can pay 10 paperclips (which are destroyed) to roll the die. If you roll a 1, the casino manufactures 3^^^^3 paperclips. Otherwise, nothing happens and you’ve simply lost 10 paperclips.

3^^^^3 is much, much, much larger than a googolplex, so expected utility maximisation overwhelmingly says you should play at this casino if you’re trying to maximise the number of paperclips.
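To spell out the arithmetic behind that claim (just a sketch, writing 3^^^^3 in up-arrow notation as $3\uparrow\uparrow\uparrow\uparrow 3$ and counting the change in the number of paperclips):

$$E[\text{paperclips gained}\mid\text{play}] = -10 + \frac{3\uparrow\uparrow\uparrow\uparrow 3}{10^{10^{100}}} \gg 0 = E[\text{paperclips gained}\mid\text{don't play}],$$

since $3\uparrow\uparrow\uparrow\uparrow 3$ utterly dwarfs $10 \cdot 10^{10^{100}}$.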

(From this point on, I will refer to “expected utility” as “EU”)


In some cases, EU is correct. Here are 3 cases.

Case 1: If you expect to live a googolplex years, then paying to roll the die a few times per year is a good idea, because over that time frame the probability of winning at least once is very high.

Case 2: You have a normal lifespan, but there are a googolplex other paperclip maximisers on Earth. In this case, everyone plays, and the probability that at least one person wins is very high.

Case 3: You are the only paperclip maximiser in this universe, but there are a googolplex alternate universes that contain alternate “you”s who are in the same situation as you, and for whom the dice rolls are all independent. In this case the probability that at least one “you” wins is high, so you should play.
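All three cases lean on the same bit of probability (a quick sketch): with $N$ independent rolls, each winning with probability $10^{-10^{100}}$,

$$P(\text{at least one win}) = 1 - \left(1 - 10^{-10^{100}}\right)^{N} \approx 1 - e^{-N/10^{10^{100}}},$$

which gets close to 1 once $N$ is a large multiple of a googolplex.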


Here is a different case.

Case 0: You are alone in existence. There is no one else on Earth, there are no alternate realities, and you are literally alone in the entirety of all existence. This is the only decision you will ever get the chance to make. Should you pay 10 paperclips to roll the die?

I think it’s clear in Case 0 that you should not pay. If you pay, you lose 10 paperclips[1] and are, for all practical purposes, certain not to win. If you don’t pay, at least you get to keep your 10 paperclips. Since we’re trying to maximise the number of paperclips, the latter wins.


The key difference between Case 0 and the other 3 cases is that in Case 0, you only get one chance to maximise paperclips. I’m calling this sort of scenario a one-shot problem.

A one-shot problem can basically be described as trying to maximise paperclips by choosing one option from a finite set of mutually exclusive choices, where each choice has a finite set of outcomes whose probabilities sum to 1, and each outcome yields a finite number of paperclips.
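To make that definition concrete, here’s a minimal sketch in Python (nothing here is from the original problem statement: the type names are made up, and the die and prize are scaled way down, since a googolplex-sided die and 3^^^^3 paperclips can’t be represented directly):

```python
from typing import Dict, List, Tuple

# A choice is a finite list of (probability, paperclips) outcomes whose
# probabilities sum to 1; a one-shot problem is a finite set of mutually
# exclusive choices, of which we get to pick exactly one.
Choice = List[Tuple[float, int]]
OneShotProblem = Dict[str, Choice]

# Toy stand-ins: the real numbers (a googolplex-sided die, 3^^^^3 paperclips)
# are far too large to write down, so use a 1000-sided die and a big prize.
DIE_SIDES = 1_000
PRIZE = 10 ** 9

casino: OneShotProblem = {
    "don't play": [(1.0, 0)],                      # no change: keep your 10 paperclips
    "play": [(1 / DIE_SIDES, PRIZE - 10),          # win: the prize, minus the 10 you paid
             (1 - 1 / DIE_SIDES, -10)],            # lose: just the 10 you paid
}

def expected_paperclips(choice: Choice) -> float:
    """Expected change in paperclip count for a single choice."""
    return sum(p * clips for p, clips in choice)

# Expected-value maximisation always says "play"; the one-shot question is
# whether that is actually the right call when you only ever roll once.
best = max(casino, key=lambda name: expected_paperclips(casino[name]))
print(best, {name: expected_paperclips(c) for name, c in casino.items()})
```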

I think a key insight, which I don’t know enough to prove but which seems correct, is that any finite sequence of decisions can be transformed into a one-shot problem, simply by treating each possible way of making the whole sequence of decisions as a single choice.

As a simple example, if choosing between flipping a coin or rolling a die counts as one decision, then flipping a coin and then rolling a die is 2 decisions. But the two can be combined into a single choice where you flip the coin and then roll the die. This combination works even when the sequence of decisions is more complicated, for example flipping a coin, then rolling a die if it comes up heads and flipping the coin again otherwise.
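Here’s a rough sketch of that collapsing step, using the same kind of toy representation as above (again, the names and numbers are made up): every path through the sequence has its probabilities multiplied and its paperclips added, and the whole sequence becomes one choice.

```python
from typing import List, Tuple

Choice = List[Tuple[float, int]]  # (probability, paperclips); probabilities sum to 1

def sequence(first: Choice, then: Choice) -> Choice:
    """Collapse "make choice `first`, then make choice `then`" into a single
    one-shot choice: multiply probabilities along each path and add up the
    paperclips collected along the way."""
    return [(p1 * p2, clips1 + clips2)
            for p1, clips1 in first
            for p2, clips2 in then]

# Toy decisions: a coin flip that wins 1 paperclip on heads, and a
# 6-sided die roll that wins 6 paperclips on a 1.
coin_flip = [(0.5, 1), (0.5, 0)]
die_roll = [(1 / 6, 6), (5 / 6, 0)]

# "Flip a coin and then roll a die" as one single choice.
combined = sequence(coin_flip, die_roll)
print(combined)
print(sum(p for p, _ in combined))  # probabilities still sum to 1
```

A conditional strategy (like the coin-then-maybe-die example) collapses the same way; you just enumerate every path the strategy could take and weight each by its probability.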

I expect you can do something similar to group the decisions of alternate “you”s into one, or even the decisions of other people on Earth, so long as their decision-making procedure is similar enough to yours.

If I’m right that this transformation is possible, that means one-shot problems are isomorphic to finite multi-shot problems, and hence insights into one are applicable to the other. This means that a solution to one-shot problems should give a solution to Pascal’s Mugging in general.


Solving one-shot problems means finding a decision-making procedure that maximises paperclips when you only have one decision. One might expect that EU, which is all about maximising paperclips, would at least provide some insight into this.

Surprisingly, EU doesn’t seem to help. The key property of an EU maximiser is that as it makes more and more decisions, the probability that it comes out ahead in paperclips approaches 1.

For example, in my casino version of Pascal’s Mugging, with a single repetition there is a very low probability of an EU maximiser winning. But after a googolplex repetitions, there is a ~63% chance of it winning at least once. After 2 googolplex repetitions, that probability becomes ~86%. In the long run, the probability that EU will come out on top approaches 1.
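Those figures come straight out of the approximation mentioned earlier; here’s a minimal sketch that checks them (a googolplex won’t fit in a float, so the function works directly with the ratio of repetitions to a googolplex):

```python
import math

def p_win_at_least_once(googolplexes_of_repetitions: float) -> float:
    """P(at least one win) after k * googolplex rolls, where each roll wins
    with probability 1/googolplex.  Uses the limit (1 - 1/g)**(k*g) -> exp(-k)
    as g grows, which is essentially exact for g = googolplex."""
    return 1.0 - math.exp(-googolplexes_of_repetitions)

print(p_win_at_least_once(1))  # ~0.632, i.e. ~63% after 1 googolplex repetitions
print(p_win_at_least_once(2))  # ~0.865, i.e. ~86% after 2 googolplex repetitions
```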

This means that EU completely sidesteps the problem of how to make decisions under uncertainty, by choosing the sequence of decisions that has a virtually 100% probability of winning out in the long run!


In summary, Pascal’s Mugging occurs because expected utility maximisation relies on there being a long time frame, and working out what to do when there isn’t a long enough time frame is equivalent to solving the simpler case where you only get to make a single decision.

Thank you for reading this, and I hope to learn a lot from your replies!


[0] Switching to paperclip maximising also helps show why I think bounded utility functions are an incomplete solution to Pascal’s Mugging. Which choice is optimal for maximising the number of paperclips in the world? This is a seemingly factual question, and our best answer is expected utility maximisation, which is vulnerable to Pascal’s Mugging. This question is independent of our utility function, and can’t be resolved by saying that we should use a bounded utility function.

Using paperclip maximisation also helps remove anthropic problems in Pascal’s Mugging. You can argue that producing 3^^^^3 utility requires creating 3^^^^3 people, which means the probability that you are one of those 3^^^^3 people counterbalances the reward from being Pascal Mugged. But this reasoning does not work if your “utility” is paperclips.

[1] Note that the price being 10 paperclips is only a courtesy from the casino. They could charge a billion paperclips per roll and EU would still say that you should pay up.


Update: On further reflection, my criticism of bounded utility functions in the zeroth footnote is wrong. I’ve updated in this direction due to Dagon’s second point in the comments (thank you!). Maximising the number of paperclips can be done using a bounded utility function as well: for example, the function 1 - 1/2^p, where p is a nonnegative integer giving the number of paperclips, is bounded between 0 and 1.
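As a quick sanity check of that function (just a sketch): it never reaches 1 no matter how many paperclips there are, yet every additional paperclip still strictly increases it, so ranking certain outcomes by this utility is the same as ranking them by paperclip count.

```python
def bounded_utility(paperclips: int) -> float:
    """1 - 1/2^p: bounded between 0 and 1, and strictly increasing in p."""
    return 1 - 1 / 2 ** paperclips

values = [bounded_utility(p) for p in range(20)]
assert all(0 <= v < 1 for v in values)                 # bounded
assert all(a < b for a, b in zip(values, values[1:]))  # more paperclips is always better
print(values[:5])  # 0.0, 0.5, 0.75, 0.875, 0.9375 -> approaches but never reaches 1
```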

That this can be done is surprising to me right now, and suggests that I need to think some more about all this.