Alternate approaches to Pascal’s Mugging

A lot has been said about Pascal’s Mugging: counter-muggings, adding extra arrows to the hyper-power stack, and so forth. But if anyone has written up anything like my own reaction, I’ve yet to read it; so, in case it might spur some useful discussion, I’ll try explaining it here.

Over the years, I’ve worked out a rough rule of thumb for finding useful answers to most everyday ethical quandaries, one which seems to do at least as well as any other I’ve seen: I call it the “I’m a selfish bastard” rule, though my present formulation continues with the clauses, “but I’m a smart selfish bastard, interested in my long-term self-interest.” This seems to give enough guidance to cover anything from “should I steal this or not?” to “is it worth respecting other people’s rights in order to maximize the odds that my own rights will be respected?” to “exactly whose rights should I respect, anyway?”. That last question led me to a ‘Trader’s Definition’ of personhood: if some other entity can choose whether or not to make an exchange with me, whether a banana for a backrub, playtime for programming, or anything of the sort, then, at least generally, it’s in my own self-interest to treat it as a person, whether or not it matches any other criterion for personhood.

Which brings us to Pascal’s Mugging itself: “Give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.”
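
For anyone who hasn’t seen why the mugging has teeth: the usual worry is that a naive expected-utility calculation can’t discount the threat fast enough. As a rough sketch, with the prior chosen purely for illustration: assign the Mugger’s claim a probability as microscopic as $10^{-10^{100}}$, and the expected number of lives at stake is still

$$10^{-10^{100}} \times \left(3\uparrow\uparrow\uparrow\uparrow 3\right) \gg 1,$$

since $3\uparrow\uparrow\uparrow\uparrow 3$ (the up-arrow form of 3^^^^3) exceeds $10^{10^{100}}$ by a margin that beggars description. On that arithmetic, paying the five dollars looks cheap no matter how absurd the claim sounds.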

To put it bluntly… why should I care? Even if the Mugger’s claim is accurate, the entities he threatens to simulate, and calls ‘people’, don’t seem as if they will ever have any opportunity to interact with my portion of the Matrix; they will never be able to offer me any benefit or do me any harm. How would it benefit me to treat such entities as if they not only had a right to life, but as if I had an obligation to defend that right?

For an alternate approach: most systems of ethics were built with an unstated assumption about the number of beings a person could possibly interact with in a lifetime; at the outside, someone who lived 120 years and met a new person every second would meet fewer than 4 billion individuals. It’s only in the past few centuries that hard, physical experience has given us enough insight into the basic foundations of economics for humanity to develop competing theories of ethics at all, let alone start winnowing them out; and we’re still a long way from a broad consensus on an ethical theory that can cope with the existence of a mere 10 billion individuals. What are the odds that we possess enough information to have any inkling of the assumptions required to deal with a mere 3^^^3 individuals existing? And what knowledge would we need so that, when the answer was finally worked out, we could actually tell that it was the answer?
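
(To check that back-of-envelope figure, using a 365.25-day year: a second-by-second count over 120 years comes to

$$120 \times 365.25 \times 86{,}400 \approx 3.79 \times 10^{9} < 4 \times 10^{9},$$

so the “fewer than 4 billion” ceiling holds even for that absurdly social lifetime.)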

For another alternate approach: assuming that the Mugger is telling the truth… is what he threatens to do actually a bad thing? He says nothing about the nature of the lives of the people he would simulate; one approach he might take would be to simulate a large number of copies of our universe, each of which eventually peters out in heat-death. Would the potential inhabitants of such a simulated universe really object to having been created in the first place?

For yet another alternate approach: “You have outside-the-Matrix powers capable of simulating 3^^^^3 people? Whoa, that implies so much about the nature of reality that this particular bet is nearly meaningless by comparison. What answer could I give that would induce you to tell me more about the true substrate of existence?”

Do any of these seem like worthwhile ways of looking at Pascal’s Mugging? Do you have any out-of-left-field approaches of your own?