I have a question about Pascal’s mugging. This does break the standard question-answer format, but you said not to be squeamish about that, so here goes the problem I am currently considering.
According to the wiki, the Standard Pascal’s mugging is formulated like this:
Now suppose someone comes to me and says:
“Give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.”
Now, further suppose that someone says
“Never give into a Pascal’s Mugging except this one. If you do, I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills n^^^^n people, where n is the amount of people threatened by the other Pascal’s mugger.”
Let’s call this a Meta Pascal’s Mugging, since it is a Pascal’s Mugging which is contingent on your reaction to a Standard Pascal’s Mugging. This is a fairly complicated mugging!
Now further suppose a third person says:
“Regardless of the fact that you are under a Meta Pascal’s Mugging to not give into a Standard Pascal’s Mugging, I am still going to commit Pascal’s Mugging on you for five dollars. If you don’t give me the money, I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills m^^^^m people, where m is the amount of people threatened by the Meta Pascal’s mugger who threatened you if you gave into another Pascal’s mugging.”
So we could call this a Recursive Pascal’s Mugging. Both muggers are now threatening MORE people than the other one, since the Meta Pascal’s Mugging applies to all other muggings regardless of their level of recursion, although it did not itself start a recursive loop.
Now let’s say I am mugged by all THREE Pascal’s muggers simultaneously. What do I do?
Clearly, “No Pascal’s mugging is worth worrying about, so I don’t need to give in to any of them” is one answer. But it’s also a really easy answer to reach in answer space, so I’m curious whether there are other answers I might not be thinking of.
My own response is that all Pascal’s muggings are not worth worrying about.
I’m curious why you only take into consideration scenarios that someone informs you of. That is, suppose a fourth person sits in their control center and decides that every time MichealOS refuses to give money to a Pascal’s Mugger, they will simulate m^^^m people and give them fantastically happy eternal lives—but they don’t inform you of that decision.
The probability of this is vanishingly small, of course, but it’s only marginally lower than the probability of your other proposed muggings. So presumably you have to take it into account along with everything else, right?
That’s a good point. Let me see if I understand the conclusion correctly:
I should consider that there is an opposing Pascal’s Anti-Mugging for any Pascal’s Mugging, and it seems reasonable that I have no reason to consider an Unknown Anti-Mugging more likely than an Unknown Mugging before someone tells me which is occurring.
Once the mugger asserts that there is a mugging, I can ask “What evidence can you show me that gives you reason to believe that the mugging scenario is more likely than the anti-mugging scenario?” If this is a fake mugging (which seems likely), he won’t have any evidence he can show me, which means there is no reason to adjust the priors between the mugging and the anti-mugging, and I can continue not worrying about the mugging.
If I understood you correctly, that sounds like a pretty good way of thinking about it that I hadn’t thought of.
If it sounds like I haven’t gotten it, please explain in more detail. Either way, thank you for the explanation!
So, this is correct enough, but I would recommend generalizing the principle.
The (nominally) interesting thing about Pascal’s Mugging scenarios (and about the original Pascal’s Wager, which inspired them) is that we can posit hypothetical scenarios involving utility shifts so vast that even if the scenarios are vanishingly unlikely, multiplying the probability of the scenario by the magnitude of the utility shift should it come to pass still yields a substantial result. This allows a decision system that operates on expected value (that is, the value of an outcome times its likelihood) to be manipulated by presenting it with carefully tailored scenarios of this sort (e.g., Pascal’s Mugging).
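The manipulation can be illustrated with a toy expected-value calculation. This is a hypothetical sketch; the probability and utility numbers are made up for illustration, and even the huge stand-in number is incomparably smaller than 3^^^^3:

```python
# Toy illustration of how a naive expected-value calculation can be
# dominated by a tiny-probability, enormous-utility threat.

p_threat = 1e-50   # probability the mugger's claim is true (tiny, made up)
u_threat = 1e60    # lives lost if the claim is true (made-up stand-in)
u_comply = 5       # certain cost of handing over five dollars

ev_refuse = p_threat * u_threat   # expected loss from refusing to pay
ev_comply = u_comply              # certain loss from complying

# The naive calculation says to pay up: 1e-50 * 1e60 is about 1e10,
# which dwarfs the five-dollar cost of complying.
assert ev_refuse > ev_comply
```

By inflating `u_threat` faster than the listener can deflate `p_threat`, the mugger keeps the product on their side no matter how skeptical the listener is.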
It’s conceivable that a well-calibrated decision system would not be subject to such manipulation, because it would assign each scenario a probability that reflected such things… e.g., it would estimate the likelihood of there actually existing an Omega capable of creating 2N units of disutility as no more than .5 the likelihood of an Omega capable of creating only N units.
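One way to picture that calibration (a minimal sketch, not a proposal for a real decision system): if the probability of an Omega capable of 2N units of disutility is at most half the probability of one capable of N units, then the prior falls at least as fast as 1/N, and probability times magnitude stays bounded no matter how large the threat grows.

```python
import math

# Sketch: a prior that halves every time the threatened disutility doubles.
def prior(disutility, base_p=0.01, base_u=1.0):
    """P(an Omega can create `disutility`) halves per doubling of disutility."""
    doublings = math.log2(disutility / base_u)
    return base_p * 0.5 ** doublings

# Probability * magnitude is constant (0.01 here), so larger threats
# carry no extra expected-value weight under this prior.
for n in [1.0, 2.0, 1e6, 1e30]:
    print(n, prior(n) * n)
```

Since 0.5 raised to log2(N) equals 1/N, the product `prior(n) * n` is the same for every threat size, which is exactly what makes such a prior immune to the mugger's escalation.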
But I’ve never met any decision system that well calibrated. So, as bounded systems running on inadequate corrupted hardware, we have to come up with other tactics that keep us from driving off cliffs.
In general, one such tactic is to maintain a broader perspective than just the specific problem I’ve been invited to think about.
So when the Mugger asserts that there is a mugging, I can ask “Why should I care? What other things do I have roughly the same reason to care about, and why is my attention being directed to this particular choice within that set?”
The same thing goes when Pascal himself argues that I ought to worship the Christian God, for example, because no matter how unlikely I consider His existence, the sheer magnitude of the stakes (Heaven and Hell) dwarfs that unlikelihood. If I find that compelling, I should find a vast number of competing Gods’ claims equally compelling.
The same thing goes (on a smaller scale) when someone tries to sell me insurance against some specific bad thing happening.