Pascal’s mugging and Bayes

Suppose that your prior probability that giving $1000 to a stranger will save precisely N beings is P($1000 saves N beings) = f(N), where f is some probability distribution over N.

When the stranger claims that he will torture N beings unless you give him the $1000, this probability must be updated to

P($1000 saves N beings | asking for $1000 to save N beings) = f(N) * P(asking for $1000 to save N beings | $1000 saves N beings) / P(asking for $1000 to save N beings)

The probability is thus increased by a factor of P(asking for $1000 to save N beings | $1000 saves N beings) / P(asking for $1000 to save N beings), which is at most 1/P(asking for $1000 to save N beings), since the numerator cannot exceed 1.

If you are attending philosophical events and being Pascal-mugged by a philosopher there, 1/P(asking for $1000 to save N beings) can be less than 100. Being asked then raises the probability by at most a factor of 100 over your f(N). Even if there were only one person in the world who had come up with Pascal's mugging, the factor would be at most a few billion.
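As a concrete numerical sketch of that bound (all of the numbers below are made up purely for illustration):

```python
# Sketch of how much the mugger's claim can multiply the prior f(N).
# All probabilities here are invented for illustration.

def update_factor(p_ask_given_true, p_ask):
    # Bayes factor applied to f(N) once the mugger asks:
    # P(asking | $1000 saves N beings) / P(asking).
    # The numerator is at most 1, so the factor is at most 1 / P(asking).
    return p_ask_given_true / p_ask

# At a philosophy meetup, suppose the chance of hearing this pitch is 1 in 100:
print(update_factor(1.0, 1 / 100))            # 100.0 -- the bound itself

# If only one person in ~7 billion would ever make the pitch:
print(update_factor(1.0, 1 / 7_000_000_000))  # ~7e9 -- at most a few billion
```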

edit: Note (it may not be very clear from the post) that if your f(N) is not small enough, then not only should you be Pascal-mugged, you should also give money to a random stranger who has not even Pascal-mugged you, unless the expected utility of handing over the $1000 is already very close to $1000 (so that the modest update from being mugged is what tips the decision).
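Here is a minimal sketch of that point, assuming two made-up priors: a geometric one and a roughly 1/N^1.5 one (the constant 0.38 is just chosen so the latter sums to about 1). With the fat-tailed prior, the truncated expected number of beings saved by handing over the $1000 keeps growing with the cutoff:

```python
# Sketch of the "you should pay even without being mugged" point.
# Both priors and all constants are made up for illustration.

def expected_beings_saved(f, n_max):
    # Truncated expectation: sum_{N=1..n_max} N * f(N).
    return sum(n * f(n) for n in range(1, n_max + 1))

def thin_tail(n, q=0.5):
    # Geometric prior: falls off exponentially in N.
    return (1 - q) * q ** (n - 1)

def fat_tail(n, c=0.38):
    # ~1/N^1.5 prior (c chosen so it sums to roughly 1).
    # Here N * f(N) ~ 1/sqrt(N), so the truncated expectation
    # grows without bound as the cutoff increases.
    return c / n ** 1.5

for cutoff in (10**3, 10**6):
    print(cutoff,
          round(expected_beings_saved(thin_tail, cutoff), 2),  # ~2 either way
          round(expected_beings_saved(fat_tail, cutoff), 2))   # ~24, then ~760
```

With the thin-tailed prior the expected benefit stays around two beings no matter the cutoff, so the decision hinges on how much one being is worth to you; with the fat-tailed prior the expected benefit dwarfs $1000 long before anyone threatens you, and the mugging itself is irrelevant to the decision.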

I think it is fairly clear that it is reasonable for f(N) to decrease monotonically with N, and since it has to sum to 1, it must fall off faster than 1/N. So f(3^^^3) is much, much smaller than 1/(3^^^3). If one does not accept this, one is not merely prone to being Pascal-mugged; one should run around screaming 'take my money and please don't torture 3^^^3 beings' at random people.
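A one-line way to see the 1/N bound, assuming only that f is nonincreasing and sums to 1:

$$ N\,f(N) \;\le\; f(1) + f(2) + \dots + f(N) \;\le\; \sum_{k=1}^{\infty} f(k) \;=\; 1 \quad\Longrightarrow\quad f(N) \;\le\; \frac{1}{N}. $$

This by itself only gives f(3^^^3) ≤ 1/(3^^^3); the 'much, much smaller' part comes from the standard fact that a convergent series with nonincreasing nonnegative terms must have N·f(N) → 0, i.e. f really does fall off faster than 1/N.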

[Of course, there is still a problem if one assigns the prior over N via Kolmogorov complexity, but it seems to me that it doesn't make much sense to do so, as such an f won't be monotonically decreasing.]

Another issue is the claim of 'more than 3^^^3 beings', but any reasonable f(N) seems to eat up that sum as well.
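For instance, with a purely illustrative geometric prior f(N) = 2^-N, the whole contribution of the region N ≥ 3^^^3 to the expected number of beings saved is

$$ \sum_{N \ge M} N \, 2^{-N} \;=\; (M+1)\,2^{-(M-1)}, \qquad M = 3\uparrow\uparrow\uparrow 3, $$

which is unimaginably far below one; any prior with a comparably fast tail swallows the 'or more' clause in the same way.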

This highlights a practically important problem with the use of probabilistic reasoning in decision making. A proposition may be pulled out of an immensely huge space of similar propositions, which should give it an appropriately small prior; but we typically don't know the competing propositions, especially when the proposition was transmitted from person to person, and we substitute 'do we trust that person?' in place of the original statement. One needs to be very careful when trying to be rational and abandoning intuitions: it is very difficult to transform word problems into mathematical problems, this transformation itself relies on intuitions, and so one can easily make a gross mistake that one's intuitions do correctly veto, while providing only a very vague hint along the lines of "anyone can make this claim".

While typing this up I found a post that goes into greater detail on the issue.

(This sort of outgrew the reply I wanted to post in the other thread)