Let’s take an example: I bump into Omega, who offers me a choice: I can take a certain 1 unit of utility, or have a 1 in 10 million chance of getting 1 billion utils. The naïve expectation maximiser will take that chance: after all, their expectation will be 100 units of utility, which is much better than a measly one! In all likelihood, our maximiser will walk away with nothing.
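For concreteness, the arithmetic of the scenario fits in a few lines of Python (the numbers are just the ones from the story):

```python
import random

# The gamble as described: all numbers come straight from the scenario.
certain_payoff = 1            # utils, taken for sure
prize = 1_000_000_000         # utils, the long-shot prize
p_win = 1 / 10_000_000        # chance of winning the prize

# The naive maximiser's comparison: 100 expected utils beat a certain 1.
print(p_win * prize, ">", certain_payoff)         # 100.0 > 1

# ...and yet a single draw almost always comes up empty.
print(prize if random.random() < p_win else 0)    # 0, with probability 0.9999999
```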
And the naïve expectation maximiser would be making the correct decision. A billion utils is a gain so large that it is worth risking one certain util even against such astronomical odds. In most sensible approaches this is how utilities are defined: A has n times greater utility than B iff you consider a certain B exactly as valuable as a gamble with a 1/n chance of getting A.
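In symbols (my notation, with ∼ standing for indifference between prospects), the definition reads:

$$U(A) = n \cdot U(B) \iff B \sim \left[\tfrac{1}{n} : A,\;\; 1 - \tfrac{1}{n} : \text{nothing}\right].$$

By this definition, the scenario’s gamble is by construction exactly as valuable as $10^{-7} \cdot 10^{9} = 100$ certain utils, so turning it down for a single certain util is a straightforward mistake.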
It probably seems wrong to you because you are unable to imagine how great a billion utils are, or because you round the tiny probability down to zero. It is easy to commit such a fallacy: it is hard to imagine two things that differ in value a billion times, and on the other hand quite easy to subconsciously conflate utilities with money, even if you know that their relation is non-linear (you are explicitly conflating utils and dollars in the second example). Having a billion dollars is hardly much better than having a hundred thousand dollars, so it would be silly to stake a hundred thousand against a billion with 1:10,000 odds of winning. But this is not true for utils.
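To see why the dollar version of the bet is silly while the util version is not, here is a toy calculation assuming, purely for illustration, a logarithmic utility of wealth and a starting wealth of $200,000 (both are my assumptions, not anything stated above):

```python
import math

# Toy illustration: logarithmic utility of wealth captures "a billion
# dollars is hardly much better than a hundred thousand". The starting
# wealth and the log form are assumptions invented for this example.
wealth = 200_000            # starting wealth in dollars (assumed)
stake = 100_000             # the hundred thousand being wagered
prize = 1_000_000_000       # the billion-dollar prize
p_win = 1 / 10_000          # the 1:10,000 odds from the text

u = math.log                # assumed utility-of-wealth function

eu_refuse = u(wealth)
eu_bet = p_win * u(wealth - stake + prize) + (1 - p_win) * u(wealth - stake)

print(f"{eu_refuse:.3f}")   # ~12.206: keep the money
print(f"{eu_bet:.3f}")      # ~11.514: take the bet
# Break-even in dollars, clearly bad in (log-)utils. A gamble stated
# directly in utils has no such curvature to hide behind.
```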
Even without conflating utilities with money, it is difficult to imagine such a huge difference. The reasons are: first, our imagination of utilities is bounded (and some say that so is the utility function itself); second, our intuitive utility detection has finite resolution; and third, our intuitive grasp of probabilities has finite resolution too.

When I read the described scenario, my intuition translates “a billion utils” to “the best thing I can imagine” (which is, for most people, something like having a great family and a lot of money and friends and a nice job), “one util” to “the least valuable non-zero gain” (say, eating a small piece of chocolate), and perhaps even “a 1 in 10,000,000 chance” to “effectively zero”. The question then becomes “would you refrain from eating the chocolate for an effectively zero increase in the chance of getting a really great family and a lot of money”, to which the reasonable answer is of course “no”. And even without rounding the probability to zero, it is unlikely that the best imaginable thing has ten million (or even a billion) times greater utility than the smallest detectable amount of utility; that would require us to measure our utilities to 8 (or even 10) significant digits, which is clearly not the case.
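The substitution can be caricatured in a few lines; all the “felt” values below are invented for illustration:

```python
# The scenario as stated, versus how intuition re-reads it.
# The "felt" numbers are invented; the point is only the sign flip.
true_prize, true_p = 1_000_000_000, 1 / 10_000_000
felt_prize = 1_000          # "best thing I can imagine" caps out far below 1e9
felt_p = 0.0                # "1 in 10,000,000" rounds down to "effectively zero"

print(true_p * true_prize)  # 100.0 -> the gamble beats the certain 1 util
print(felt_p * felt_prize)  # 0.0   -> the felt version loses to the chocolate
```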
It may be helpful to realise that some people, namely lottery players, make a similar rounding error with the opposite consequences. A lottery player’s feelings translate “a 1:100,000,000 chance of winning” into the lowest imaginable non-zero probability, something like “perhaps once in a lifetime” or “1:1,000”, and the player goes to buy the ticket.
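The same few lines run in the other direction, with illustrative lottery numbers (the $1 ticket and $50,000,000 jackpot are my inventions; only the probability substitution matters):

```python
# The lottery player's mirror-image error: the tiny probability is
# rounded *up* to the smallest imaginable non-zero chance.
ticket, jackpot = 1, 50_000_000
true_p = 1 / 100_000_000
felt_p = 1 / 1_000          # the "perhaps once in a lifetime" substitution

print(true_p * jackpot - ticket)  # -0.5    -> a losing bet
print(felt_p * jackpot - ticket)  # 49999.0 -> feels like a bargain
```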