Thanks for your comment.
In some sense I would agree that forgoing a finite chance of an infinite payoff for a finite chance of a finite payoff requires infinite risk aversion. Nevertheless, I think that even such extreme risk aversion could be justified in some specific cases. When I consider this issue, I usually do it in terms of a thought experiment similar to the one with Sue, which I presented in my post.
Imagine you are the only being in the entire universe. You know with certainty that you have just one decision to make, and after that you will magically disappear. You are faced with a choice between option A, which gives you a 0.001 probability of creating a really bad outcome and a 0.999 probability of creating a moderately good outcome, and option B, on which nothing happens and the universe just remains empty. I think that A is the right choice in this case. In my view, when faced with such a single decision it makes sense to go with whichever option gives you a probability above 0.5 of the best possible outcome.
However, this has very counterintuitive implications of its own. On this account, when faced with such a one-off scenario as described above, it would be rational to choose an option A which gives you a 0.49 probability of infinite negative utility and a 0.51 probability of some tiny positive outcome, over an option B on which nothing at all happens.
This is an extremely counterintuitive result, but "pure" expected utility theory (EU) can also generate extremely counterintuitive results, such as choosing option B (nothing happens) over an option A with a 0.0000000000000000000000000001 probability of creating infinite negative value and a 0.999999999999999999999999999 probability of creating an enormously good, but finite, outcome. In a reply to a different comment by Wolajacy above I described how I think about this issue, so I don't want to repeat it here. Also, in my view we should not trust our intuitions in such cases, since they evolved to help us spread our genes in a familiar environment, not to tackle infinity paradoxes. Therefore I'm ready to accept even counterintuitive results based on explicit reasoning.
You mentioned that it might be worth revising the assumption that "negligible chances can always be adequately expressed by a (finite precision) real number". That would be a way out of the paradox. However, I don't think this is a very promising approach. Surely, in some (maybe most) cases it is hard to speak of the precise probabilities we attach to different beliefs. Nevertheless, I doubt that infinitesimals would be an adequate representation of such probabilities, especially in the case of belief in God, where I think we can do better.
I agree that whether infinite payoffs make sense may be problematic. On the most basic, standard formulation of EU we seem to face a problem: if there are multiple options which can lead to infinite payoffs, then we have no standard by which to choose between them. However, I think this could be fixed by a relatively uncontroversial addition stating that when we are faced with multiple options of infinite value, we just go for the one with the highest probability. There may be other issues connected with the comparability of different outcomes, as you mentioned in your example with being a fictional character, but it seems that discussing those issues would lead us even further away from the original topic.
If you haven't yet, you can also check my reply to the other comment under this post, where I've tried to express myself more clearly. Of course, if you have any objections to the reasoning outlined here, feel free to criticize it; I really appreciate well-thought-out feedback.
Thanks for your comment. I’ll try to express myself more clearly.
You've asked "in what sense it's the right choice/rational/achieving best result?"
This is what I had in mind.
I regard a decision as rational if, from the set of all possible acts, it first selects those acts which have a probability equal to or higher than 0.5 of achieving a net positive result, and then, from those acts, the act which has the highest upside. Let's call this the "first order" approach (I'm still uncertain about this exact formulation and I may revise it in the future, but let's stick with it for the moment).
For example: I have to choose between options A, B and C
Option A: probability 0.9 of gaining 100 utility points and probability 0.1 of losing 1000 utility points
Option B: probability 0.01 of gaining 100000 utility points and probability 0.99 of losing 10 utility points
Option C: probability 0.75 of gaining 1000 utility points and probability 0.25 of losing 10000 utility points
From this set of options, first A and C would be selected, and finally option C would be chosen. By this choice I will most probably gain 1000 utility points and lose nothing.
However, this is where expected utility theory (EU) comes into play. Let's imagine that I know that during my life (say, 80 years) I will be confronted with that set of options many times. If each time I followed the procedure outlined above, then I would predictably end up worse off, since option C has negative EU (indeed, the lowest EU of all the options).
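To make the comparison concrete, here is a quick sketch in Python of how the two rules pick different options, using the hypothetical utility numbers from the example above:

```python
# Hypothetical options from the example above.
# Each option: (probability of the good outcome, gain if good, loss if bad).
options = {
    "A": (0.90, 100, -1000),
    "B": (0.01, 100000, -10),
    "C": (0.75, 1000, -10000),
}

def expected_utility(p, gain, loss):
    """Standard expected utility of a two-outcome gamble."""
    return p * gain + (1 - p) * loss

# "First order" approach: keep only the options with a >= 0.5 chance of a
# net positive result, then pick the one with the highest upside.
candidates = [name for name, (p, gain, loss) in options.items() if p >= 0.5]
first_order_choice = max(candidates, key=lambda name: options[name][1])

# EU approach: pick the option with the highest expected utility.
eu = {name: expected_utility(*params) for name, params in options.items()}
eu_choice = max(eu, key=eu.get)

print(first_order_choice)  # C (A and C pass the filter; C has the bigger upside)
print(eu_choice)           # B
print({k: round(v, 1) for k, v in eu.items()})  # A: -10.0, B: 990.1, C: -1750.0
```

So the first order approach picks C, while EU picks B, even though C will deliver its good outcome in any single play far more often.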
I think that the approach I've defined above is not in contradiction with EU if we look at our life holistically. I tend to think of it as choosing the best decision framework, the one which in the end will lead to the best possible outcome. So my rationale for adopting EU is ultimately based on the first order approach that I defined earlier. I'm not sure what the exact probabilities and utility points should look like here, but the situation looks roughly like this:
Option A: Adopt EU (with probability above 0.5 this will lead to the best possible result overall)
Option B: Use the first order approach in every single decision (with probability above 0.5 this will not lead to the best possible result overall)
That shows that the first order approach leads to the acceptance of EU if we look at the situation holistically. Of course, the question may now arise: "So what was all that fancy theorizing about the first order approach for? Isn't it better to just adopt EU from the start?"
Well, at least for me, EU is not self-evident and needs some further rationale to be justified. The first order approach tries to capture a fundamental intuition which, I think, stands behind EU.
So what about Pascal's wager? In this case accepting the wager is the best option according to EU. However, as I've tried to show above, EU works only because it pays off to follow it over a long series of choices under uncertainty. If some agent is able to reliably restrict herself to making just one exception to following EU, in a case where it is improbable that doing so would have any negative consequences, then it seems to me that such an exception could be justified.
Let me illustrate this with the same example I gave above. Suppose that during the 80 years of my life I was indeed confronted many times with a choice between options A, B and C. I followed EU, so overall I gained a lot of utility points. Now I'm on my deathbed and this is the last hour of my life. Someone approaches me and offers me the choice between options A, B and C one more time. I know that option B is the best in expectation. However, I have no expectation of living longer than an hour, so there is no more time for the EU reasoning to work. So I decide to choose option C this time. Most probably, I will gain more utility points than if I chose option B one more time.
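For what it's worth, this single-play claim can be checked numerically with the hypothetical payoffs from the example above: C beats B exactly when C comes up good and B comes up bad, which happens with probability 0.75 × 0.99 ≈ 0.74, even though B has the far higher expectation. A quick simulation sketch:

```python
import random

random.seed(0)  # for reproducibility

def play(p, gain, loss):
    """One draw from a two-outcome gamble."""
    return gain if random.random() < p else loss

B = (0.01, 100000, -10)   # best in expectation
C = (0.75, 1000, -10000)  # the "first order" pick

trials = 100_000
c_better = sum(play(*C) > play(*B) for _ in range(trials))
print(c_better / trials)  # about 0.74 (= 0.75 * 0.99)
```

In roughly three plays out of four, the single final choice of C leaves me better off than B would have.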
Sorry for making this response so long, but I tried to be clear in explaining my reasoning. However, I'm not an expert on probability theory or decision theory. If you think that I've messed something up in the argument outlined above, feel free to press me on that point. It is really important to me to get things right here, so I appreciate constructive criticism.