(Ir)rationality of Pascal’s wager

During the last few weeks I’ve spent a lot of time thinking about “Pascalian” themes, such as the paradoxes generated by introducing infinities into ethics or decision theory. In this post I want to focus on Pascal’s wager (Hájek, 2018) and on why it is (ir)rational to accept it.

Firstly, it seems to me that a large share of the responses to Pascal’s wager are just unsuccessful rationalizations, created to avoid the conclusion. It is common to see people who (a) claim that the conclusion is plainly absurd and dismiss it without argument, or (b) offer an argument which at first glance seems to work, but on second glance backfires and leads to even worse absurdities than the wager itself.

In fact, this is not very surprising if we take into account the psychological studies showing how common motivated reasoning and the unconscious processes behind cognitive biases are (Haidt, 2001; Greene, 2007). Arguably, accepting Pascal’s wager runs against at least a few of those biases (such as scope insensitivity and risk aversion when dealing with small probabilities, not to mention that the conclusion is rather uncomfortable).

Nevertheless, although I think that the arguments typically advanced against Pascal’s wager are not successful, it may still be rational not to accept the wager.

Here is why I think so.

I regard expected utility theory as the right approach to making choices under uncertainty, even when dealing with tiny probabilities of large outcomes. This is simply because it can be shown that, over a long series of such choices, following this strategy pays off. However, this argument holds only over a long series of choices, when there is enough time for the improbable scenarios to actually occur.
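The long-run claim can be illustrated with a quick simulation sketch. The gambles below are made up purely for illustration (they are not the numbers from Sue’s example below); the point is only that the option with the higher expected value usually loses in a single round but wins once it is repeated often enough:

```python
import random

# Illustrative, made-up gambles: a long shot with the higher expected
# value versus a safe bet with a lower one.
LONG_SHOT = (0.001, 10_000)  # (success probability, payoff) -> EV = 10
SAFE_BET = (0.99, 5)         # EV = 4.95

def play(gamble):
    """Resolve one gamble: payoff on success, 0 otherwise."""
    p, payoff = gamble
    return payoff if random.random() < p else 0.0

def total(n, gamble):
    """Total realized value over n independent repetitions."""
    return sum(play(gamble) for _ in range(n))

random.seed(0)
for n in (1, 1_000, 1_000_000):
    print(n, total(n, LONG_SHOT), total(n, SAFE_BET))
# With n = 1 the long shot almost always yields 0 while the safe bet
# usually pays; once n is large enough, the long shot's higher expected
# value shows up in the realized totals.
```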
For example, imagine a person, let’s call her Sue, who knows with absolute certainty that she has only one decision to make in her life, after which she will magically disappear. She has a choice between two options:

Option A: a 0.0000000000000000001 % probability of creating infinite value.

Option B: a 99 % probability of creating a huge, but finite, amount of value.

I think that in this case, option B is the right choice.
However, if instead of this one choice Sue had to face infinitely many such choices over an infinitely long time, then I think option A is the right choice, because it gives the best results in expectation.
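To get a feel for why the one-shot and the repeated cases come apart, here is a minimal back-of-the-envelope sketch, reading Option A’s probability as 10^-21 (0.0000000000000000001 %):

```python
import math

p = 1e-21  # Option A's success probability (0.0000000000000000001 %)

def at_least_one_success(n, p):
    """Probability of at least one success in n independent tries:
    1 - (1 - p)**n, computed via log1p/expm1 to avoid underflow."""
    return -math.expm1(n * math.log1p(-p))

print(at_least_one_success(1, p))     # ~1e-21: a single try is a near-certain miss
print(at_least_one_success(1e21, p))  # ~0.63: after about 1/p tries a success becomes likely
# In Sue's one-shot case the infinite payoff almost certainly never materializes;
# over an unbounded series of such choices a success eventually occurs with
# probability approaching 1, which is what drives the expected-value case for A.
```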
Of course, our lives are, in some sense, a long series of choices, so we ought to follow expected utility theory. But what if someone decides to make just one decision which is worse in expectation but very unlikely to have any negative consequences? Of course, if this person started to make such decisions repeatedly, she would predictably end up worse off; but if she is able to reliably restrict herself to making just this single exception, chosen solely on the basis of its small probability of backfiring, and to follow expected utility otherwise, then this seems rational to me.

I’ll illustrate what I mean with an example. Imagine three people: Bob, John, and Sam. They all think that accepting Pascal’s wager is unlikely to result in salvation or the avoidance of hell. However, they also think that they should maximize expected utility, and that the expected utility of accepting Pascal’s wager is infinite.

Confronted with this difficult dilemma, Bob abandons expected utility theory and decides to rely more on his “intuitive” assessment of the choiceworthiness of actions. In other words, he just goes with his gut feelings.

John takes a different strategy and decides to follow expected utility theory, so he devotes the rest of his life to researching which religion is most likely to be true (since he is not sure which religion is true, and he thinks the value of information is extremely high in this case).

Sam adopts a mixed strategy. He decides to follow expected utility theory, but in this one case he makes an exception and does not accept the wager, because he thinks it is unlikely to pay off. He doesn’t want to abandon the expected utility approach in general, though.

It seems to me that Sam’s strategy achieves the best result in the end. Bob’s strategy is a non-starter for me, since it will predictably lead to bad outcomes. John’s strategy, on the other hand, commits him to devoting his whole life to something that in the end produces nothing.
Meanwhile, it is unlikely that this one decision not to accept the wager will harm Sam, and following expected utility theory in his other decisions will predictably lead to the most desirable results.

To me this seems to work. Nevertheless, I have to admit that my solution may itself look like an ad hoc rationalization designed to avoid the uncomfortable conclusion.
The argument also has some important limitations, which I won’t address in detail here in order not to make this post too long. However, I want to highlight them briefly.

1. How can you be sure that you will stick with your decision not to make any more such exceptions to expected utility theory?

2. Why make the exception for this particular decision and not any other?

3. The problem posed by tiny probabilities of infinite value, the so-called fanaticism problem, is not resolved by this trick, since the existence of God is not the only possible source of infinite value (Beckstead, 2013; Bostrom, 2011).

4. What if taking this kind of approach popularised it, causing more people to adopt it, but in decisions other than the one concerning Pascal’s wager (or provided evidence that infinitely many copies of you have adopted it, if the universe is infinite (Bostrom, 2011))?

I don’t think any of these objections is fatal, but they are worth considering. Of course, it is possible, and indeed quite probable, that I have missed something important, fallen prey to a bias, or made some other kind of error. This is why I decided to write this post. I want to ask anyone who has thoughts on this topic to comment on my argument: whether the conclusion is right, wrong, or right for the wrong reasons. The whole issue may seem abstract, but I think it is really important, so I would appreciate you giving it serious thought.
Thanks in advance for all your comments! :)

Sources:

Beckstead, N. (2013). On the Overwhelming Importance of Shaping the Far Future. PhD dissertation, Rutgers University.

Bostrom, N. (2011). Infinite Ethics. Analysis and Metaphysics, 10: 9-59.

Greene, J. D. (2007). The Secret Joke of Kant’s Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development (pp. 35-80). Cambridge, MA: MIT Press.

Haidt, J. (2001). The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review, 108: 814-834.

Hájek, A. (2018). Pascal’s Wager. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2018 Edition). URL: https://plato.stanford.edu/archives/sum2018/entries/pascal-wager/

All of the above sources can be found online.