Pascal’s Mugging: Finite or Unbounded Resources?

This article addresses Pascal’s Mugging; for background on the scenario, see here.

I am going to attack the problem by breaking it up into two separate cases, one in which the mugger claims to be from a meta-world with large but finite resources, and one in which the mugger claims to be from a meta-world with unbounded resources. I will demonstrate that in both cases the mugging fails, for different reasons, and argue that much of the appeal of the mugging comes from the conflation of these two cases.

Large but finite resources

In this case, the mugger claims to be from a world with a bounded amount of resources, but still enough to torture n people, e.g. n = 3^^^^3. I will argue that the prior for such a world should be of the order of 1/n or lower, and in particular not 1/complexity(n). With a prior of 1/n or less, the mugging fails: no matter how large a number the mugger claims, the likelihood of their claim being true decreases at least proportionally, so there need be no value for which the claim is more worrisome than implausible.

We’re faced with uncertainty because the world the mugger claims to be from is outside our universe. We have no information on which to base our estimate of its size, other than that it is substantially bigger than our universe (this much is necessary at least in the case of a matrix-like simulating world). However, that ignorance is also our strength, because there is a standard prior for complete ignorance of a scale: the scale-invariant prior, proportional to 1/n.

The first reason not to use a complexity prior is simply that there is no reason to use one. Why would a world with a particular finite amount of resources be more likely to have a compactly describable size? If you were to guess the size of our universe, you might well round the number to the nearest power of 10, but not because you think a round number is more likely to be correct. A world of hard-to-describe size is just as likely to exist as a world of similar but easily describable size.

A critical point here is that the complexity of a world of size n is proportional to n, not complexity(n). For a computer program to model the behaviour of a world of size n, it does not suffice to generate the number n itself; it needs to model the behaviour of every single one of the n elements that make up the world. Such a program would need memory of size n just to keep track of one time step. To say that such a world should be given a prior of 1/complexity(n) is to conflate complexity(n) with complexity(world(n)). If AIXI were to consider such a world, it would need to treat that world as having a complexity of n. Anything else would be like measuring the complexity of the size of the program that could generate AIXI’s inputs, rather than the complexity of the program itself.

You may have noticed that the 1/n prior is itself unnormalisable, since its integral diverges (in this case both at zero and towards infinity). Ignorance priors all share this property of being “improper” priors, which cannot be normalised. They work because once you add a single piece of evidence, the resulting distribution can be normalised. This raises the question: what is that additional evidence in this case?

Well, in the particular case of a matrix-like simulating world, we do have one other piece of knowledge: the world is large enough to simulate our universe. Aside from setting a lower bound (which helps with the divergence near zero but not towards infinity), you might then ask: given a world of a particular size, what are the chances that it would simulate a universe of specifically the size of ours? The number of alternatively sized universes it could simulate is proportional to n for sufficiently large n, so the chance of ours being the size it is becomes 1/n. Combined with the ignorance prior you reach 1/n^2, and now you can actually integrate and normalise.

Thus I would conclude that overall the plausibility of the large but finite world of size n which the mugger claims to be from is proportional to 1/n^2, making the case for paying weaker, not stronger, as n grows. Note that either of the two arguments here is sufficient on its own for the mugging to fail.
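To make the scaling concrete, here is a minimal numeric sketch of the argument above. It assumes the unnormalised 1/n^2 prior just derived; the function names are mine, chosen purely for illustration.

```python
# Sketch of the finite-resources argument: with a prior proportional to
# 1/n^2 over the claimed world size n, the expected harm of the threat
# (n people tortured, weighted by plausibility) falls as 1/n.

def prior(n):
    """Hypothetical unnormalised prior over a meta-world of size n."""
    return 1.0 / n**2

def expected_harm(n):
    """Threat of size n, discounted by the plausibility of the claim."""
    return n * prior(n)

claims = [10**3, 10**6, 10**9]
harms = [expected_harm(n) for n in claims]
# Larger claimed numbers make the threat *less* worrying, not more.
assert harms[0] > harms[1] > harms[2]
```

The mugger gains nothing by inflating n: each extra factor in the threat costs more than that factor in plausibility.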

An aside on sufficient evidence

One final aside on this case: in Pascal’s Muggle: Infinitesimal Priors and Strong Evidence, Eliezer ridicules the idea of assigning priors this low to an event, on the grounds that it would imply that compelling evidence to the contrary would be unable to convince you otherwise. However, this is a flat-out misapplication of probability theory.
p(unlikely scenario | extreme evidence) = p(unlikely scenario) * p(extreme evidence | unlikely scenario) / p(extreme evidence)

In order for p(unlikely scenario | extreme evidence) ~= 1 in the face of the prior p(unlikely scenario) ~= 1/3^^^^3, all that’s required is p(extreme evidence) ~= 1/3^^^^3. That is to say, the likelihood of seeing such evidence must itself be low. Forget “no amount of evidence”: just one such piece of evidence would be sufficient. All that’s required is that the evidence itself is unlikely, and evidence which can only be generated by an unlikely scenario will of course be unlikely itself. As a simple example, imagine I found a method of picking a random integer between 0 and 3^^^^3 (assume for the sake of argument that such a thing were possible). I would correctly assign a probability of about 1/3^^^^3 to seeing the number ‘7’. But if I performed the method and saw the output 7, I wouldn’t “fail to consider this sufficient evidence to convince me” that the result was 7. The arguments relating to the bandwidth of our sensory system fail to account for (inefficient) encodings of that information, which may have some configurations with arbitrarily low likelihood.
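Spelling out the arithmetic with Bayes’ theorem, writing S for the unlikely scenario, E for the extreme evidence, and Knuth up-arrows for 3^^^^3, and taking p(E | S) ≈ 1 since the scenario essentially guarantees the evidence:

```latex
p(S \mid E)
  \;=\; \frac{p(S)\, p(E \mid S)}{p(E)}
  \;\approx\; \frac{\left(1/3\uparrow\uparrow\uparrow\uparrow 3\right) \cdot 1}
                   {1/3\uparrow\uparrow\uparrow\uparrow 3}
  \;\approx\; 1
```

The tiny prior is cancelled exactly by the matching improbability of the evidence; nothing stronger is needed.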

Of course in practice, in these unlikely situations, competing theories that start with “I’m dreaming” or “I’m delusional” may dominate. All scenarios markedly less likely than those have the burden of disproving those possibilities first. But this is not an impossible burden, and is in any case exactly as it should be.

Unbounded resources

I’m going to use access to a machine with unlimited computing resources as my working example here, but I hope the points translate well enough to other settings. I’m also going to briefly draw a distinction between “infinite” and “unbounded”: there are infinitely many of something if the cardinality of the set of such things in existence is infinite; there are unboundedly many of something if, for any number n, it would be possible to generate n of those things. Unbounded is the weaker requirement, but is sufficient for this discussion. I make this distinction mostly to explain why I’m using the term at all (since you might otherwise expect “infinite”).

In contrast to the finite resources scenario, in the infinite or unbounded resources scenario I think it’s quite correct to say that the difficulty of generating a program that would torture n people is proportional to the complexity, rather than the scale, of n. Given unlimited resources, the only barrier is writing the program itself, which is barely more work than the definition of complexity already requires.
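A rough numeric illustration of the contrast with the finite case: under a complexity prior, the discounted threat grows with n rather than shrinking. The description-length function below is a crude stand-in for Kolmogorov complexity, chosen purely for this sketch.

```python
# Sketch of the unbounded-resources case: plausibility falls only with the
# description length of n (a crude proxy for complexity), so the discounted
# threat n * 2^(-bits) still grows without bound for compactly written numbers.

def description_bits(k):
    """Crude complexity proxy: bits to write the literal '10**k'."""
    return 8 * len(f"10**{k}")

def discounted_threat(k):
    n = 10**k                          # the threat scales with n itself
    return n * 2.0**-description_bits(k)

threats = [discounted_threat(k) for k in (1, 3, 6)]
# Unlike under the 1/n^2 prior, bigger claims now dominate.
assert threats[0] < threats[1] < threats[2]
```

This is exactly why the mugger favours towers of arrows: the description grows glacially while the number it denotes explodes.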

However, in this scenario there’s no need for a mugger at all! We’ve mugged ourselves already with our own moral mathematics. The 3^^^^3 people the mugger wishes to torture are utterly insignificant in the face of the 3^^^^^3 people we could simulate in paradise if we outsmarted or overpowered the mugger and took control of those resources ourselves. Does it sound unlikely that we’d be able to overcome them? Of course, but how unlikely? The improbability certainly doesn’t scale with the value at stake, so as with the original dilemma, just pick a bigger number if you need to (which you don’t, I can assure you; it’s big enough).

And yet even that is insufficiently ambitious. I would posit that with unbounded resources available, any course of action we could describe is dominated by an only slightly more complicated but substantially more important alternative. We’re frozen into inaction by the utter futility of anything we’re even capable of thinking of. And we don’t even need a mugger to trigger this catastrophe: so long as we assign a non-zero probability to such a mugging occurring in the future, we should be worrying about it right now.

The point is that in this situation, just paying the mugger and carrying on cannot be the best course of action: it’s not the right choice if they’re lying, and if they’re not, it’s dominated by other, much larger considerations. Thus the mugging still fails, not necessarily because of the implausibility of the threat but because of its utter irrelevance in the face of unboundedly more important alternatives.

Bounding the unbounded

Although this is tangential to my main point, I will consider how the concept of unbounded resources could be handled. Even though I’ve demonstrated that the mugging fails, the larger issue of how to reason about the possibility of unbounded resources still seems a little unresolved. Here are a few options, each of which I take seriously but none of which I’m completely convinced of yet. In some cases I also talk about how the resolution impacts the mugging. I’ll add that they are not mutually exclusive; they could all be valid.

* Ignore the possibility, at least until we actually have to deal with it, which will most likely be never and in any case gives us time to work out the maths in the meantime. A practical if thoroughly unsatisfying solution. A sub-case of this would be to plan to completely reinvent or even abandon quantitative morality in the face of the collapse of quantitative limits. What we replace it with is hard to say without better understanding the nature of the unlimited resources available.

* Ignore the possibility by symmetry. We know nothing about worlds with unbounded resources, so any action we take is just as likely to hurt as to help our chances of utilising them for unbounded good. The question then is whether a mugger as described would be sufficient to break that symmetry. Personally I don’t think so, in the same way that I don’t think the religions on earth break the symmetry of what a god might want were one to exist: I see no reason to privilege their hypotheses over the negation of them. Similarly, the threats of a mugger who is clearly psychopathic, and in any case has absolutely no need of my money, may not break the symmetry over what I might expect to happen if I pay or don’t. Essentially, I’m saying I trust the mugger no more than I distrust them. Still, even if you accept this claim, it feels a little like dodging the question; it shouldn’t be that hard to reformulate the scenario in a way that does break the symmetry.

* Assign probability zero to infinite (and unbounded?) hypotheticals. Note that mathematically, something can be “possible” and still have probability 0; one example is the chance of a real number chosen uniformly at random within (0, 1) being rational. This would be the natural extension of the 1/n prior for resources of scale n. While mathematically plausible and philosophically satisfying, I’m willing to be convinced this is correct, but am not quite yet. The trouble I have is that infinite things seem in some ways far less complex than large finite things: generating an infinite loop is one of the easiest things to program a computer to do. In saying so, though, am I making the same mistake I describe above, conflating complexity(X) with complexity(size(X))? AIXI may consider an unbounded space of programs and unbounded computing resources, but it certainly does not sum over any programs of infinite length themselves (and indeed would get nowhere if it tried). Do unbounded resources correspond to a program of infinite length, or just a finite program running on unbounded hardware? I’m not yet sure either way.

* Fail to lose sleep over it regardless. Personally I act to optimise my own utility. That utility does honestly take into account the utility of others, but it is nonetheless my own. It is also bounded within a time-frame, because there’s only so happy or sad I can be, and bounded over time by geometric discounting. Being just my own utility, it’s not subject to being multiplied by an arbitrary number of people (and no, I don’t care if they’re copies of me either). Being bounded, the harsh reality is that there’s only so much I can care about the scale of a tragedy before it all just becomes numbers. So call me evil if you like, but either way I’m not motivated to pay, nor, more generally, to worry about the possibility of unbounded resources existing. Of course this doesn’t really resolve the mugging itself. You could modify the scenario to replace my having to pay with a small, plausible but entirely moral threat (e.g. “I’ll punch that guy in the face”). I would then be motivated to make the correct moral decision regardless of bounds on my utility (though I suppose my motivation to be correct is itself bounded). It makes me wonder, actually: nobody wants to pay themselves, but how many people actually would pay in this alternative case of an entirely moral trade-off?
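The “possible but probability zero” point from the option about infinite hypotheticals above can be stated precisely: for X drawn uniformly from (0, 1), the rationals in that interval form a countable set, so by countable additivity their total probability is a countable sum of zeros.

```latex
% Each individual point has probability zero under the uniform measure,
% and \mathbb{Q} \cap (0,1) is countable, so:
P\bigl(X \in \mathbb{Q}\bigr)
  \;=\; \sum_{q \,\in\, \mathbb{Q} \cap (0,1)} P(X = q)
  \;=\; \sum_{q} 0
  \;=\; 0
```

Yet every rational remains a possible outcome of the draw, which is exactly the sense in which probability zero need not mean impossible.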

Conclusion

Each case lends the mugging part of its apparent force. In the finite resources case, the decision to be made is real, and not dominated by unavoidable larger considerations: the scenario itself is reasonable and entirely finite.

In the infinite resources case, the plausibility of the mugger’s threat falls only as fast as 1/complexity(n), and so they are able to make a threat which scales faster than its implausibility.

By not making it entirely clear which of these cases is being considered, the original presentation of Pascal’s Mugging generated a scenario which appeared to have the merits of both cases and the weaknesses of neither. Separating the two cases makes it clear that the mugging fails in both: either because of the implausibility of finite but large resources, or because of the overwhelming, moral-system-destroying power of unbounded resources. Although the unbounded resources problem is still unresolved (to my satisfaction at least), any resolution of it would very likely also resolve this case of the mugging (or if not, at least change our thinking about it substantially). Thus, in no case is it correct to pay, at least without the mugger providing unimaginably stronger evidence than is presented.

The collapse of our moral systems in the face of unlimited resources may have been the key point Eliezer was making with Pascal’s Mugging, and I certainly haven’t contradicted that here. But I have, I hope, made it clear that unbounded resources, not just large numbers, are required to do this, and that the hypothetical muggers are the least of our problems in these scenarios.