Against Expected Utility

Maximizing expected utility is optimal as the number of bets you take approaches infinity. You will lose bets on some days and win bets on others. But as you take more and more bets, the day-to-day randomness cancels out.

Say you want to save as many lives as possible. You can plug “number of lives saved” into an expected utility maximizer. And as the number of bets it takes increases, it will start to save more lives than any other method.
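A toy simulation of that claim, with entirely made-up numbers (the 80%/2-lives gamble below is just an example I picked):

```python
import random

# A toy simulation with made-up numbers: a repeated gamble that saves 2 lives
# with 80% probability and costs 1 life otherwise (expected value +1.4 lives
# per bet). A rule that maximizes expected lives saved takes it every time;
# a rule that refuses saves nothing extra. As the number of bets grows, the
# randomness averages out and the expected-utility rule pulls ahead reliably.

def gamble():
    return 2 if random.random() < 0.8 else -1  # net lives saved on one bet

for n in (10, 1_000, 100_000):
    taker = sum(gamble() for _ in range(n))
    refuser = 0  # never takes the gamble
    print(f"{n:>7} bets: taker saved {taker}, refuser saved {refuser}")
```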

But the real world obviously doesn’t have an infinite number of bets. And following this algorithm in practice will get you worse results. It is not optimal.

In fact, as Pascal’s Mugging shows, this could get arbitrarily terrible. An agent following expected utility would just continuously make bets with muggers and worship various religions, until it runs out of resources. Or worse, the expected utility calculations don’t even converge, and the agent doesn’t make any decisions.

So how do we fix it? Well, we could just go back to the original line of reasoning that led us to expected utility, and fix it for finite cases. Instead of caring which method does best over infinitely many bets, we might say we want the one that does best most often in finite cases. That would get you median utility.

For most things, median utility will approximate expected utility. But it will ignore very, very small risks. It only cares about doing the best in most possible worlds. It won’t ever trade away utility from the majority of your possible worlds to very, very unlikely ones.
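A small sketch of that contrast, with made-up numbers for the two bets:

```python
# Each option is a list of (probability, utility) pairs; the numbers are
# illustrative only, not anything from the original thought experiments.

def expected(dist):
    return sum(p * u for p, u in dist)

def median(dist):
    # The utility at cumulative probability 0.5.
    total = 0.0
    for p, u in sorted(dist, key=lambda x: x[1]):
        total += p
        if total >= 0.5:
            return u

ordinary = [(0.8, +2), (0.2, -1)]            # a routine, mostly-good bet
pascal   = [(1 - 1e-9, -5), (1e-9, 1e12)]    # a mugger-style offer

print(expected(ordinary), median(ordinary))  # both say the ordinary bet is good
print(expected(pascal),   median(pascal))    # expected says pay; median says refuse
```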

A naive implementation of median utility isn’t actually viable, because at different points in time the agent might make inconsistent decisions. To fix this, it needs to decide on policies instead of individual decisions. It will pick the decision policy which it believes will lead to the highest median outcome.
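A rough sketch of what choosing at the policy level could look like, with gambles I made up for illustration:

```python
import random, statistics

# Rather than applying the median rule to each bet as it arrives, enumerate
# candidate policies up front, sample the distribution of lifetime totals each
# policy produces, and commit to the one whose median lifetime total is highest.
# The gambles and the two candidate policies here are hypothetical.

random.seed(0)
GAMBLES = [(0.80, 2, -1)] * 50 + [(1e-9, 1e12, -5)]  # (p_win, win, lose)

def median_lifetime(policy, n_worlds=2001):
    totals = []
    for _ in range(n_worlds):            # sample possible worlds
        total = 0
        for p_win, win, lose in GAMBLES:
            if policy(p_win):             # does this policy take the bet?
                total += win if random.random() < p_win else lose
        totals.append(total)
    return statistics.median(totals)

policies = {
    "take every positive-EV bet": lambda p_win: True,
    "skip one-in-a-billion bets": lambda p_win: p_win > 1e-6,
}
print(max(policies, key=lambda name: median_lifetime(policies[name])))
```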

This does complicate making a real implementation of this procedure. But that’s what you get when you generalize results and try to make things work in the messy, finite real world instead of idealized infinite ones. The same issue occurs in the multi-armed bandit problem, where the asymptotically optimal solution is simple, but finite-horizon solutions are incredibly complicated (or simple, but requiring brute force).
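For a picture of the “simple but brute force” finite case, here is a sketch under assumptions of my own choosing (a two-armed Bernoulli bandit with uniform priors and a short fixed horizon, none of which come from the post): just enumerate every possible future by dynamic programming over the posterior counts.

```python
from functools import lru_cache

HORIZON = 10  # hypothetical number of pulls

@lru_cache(maxsize=None)
def value(t, s1, f1, s2, f2):
    # Expected future successes with t pulls left; (s_i, f_i) are the successes
    # and failures seen on arm i, giving a Beta(s_i + 1, f_i + 1) posterior.
    if t == 0:
        return 0.0
    p1 = (s1 + 1) / (s1 + f1 + 2)  # posterior mean of arm 1
    p2 = (s2 + 1) / (s2 + f2 + 2)  # posterior mean of arm 2
    # Value of pulling each arm: immediate reward plus the value of the
    # updated belief state, averaged over the possible observations.
    pull1 = p1 * (1 + value(t - 1, s1 + 1, f1, s2, f2)) + (1 - p1) * value(t - 1, s1, f1 + 1, s2, f2)
    pull2 = p2 * (1 + value(t - 1, s1, f1, s2 + 1, f2)) + (1 - p2) * value(t - 1, s1, f1, s2, f2 + 1)
    return max(pull1, pull2)

print(value(HORIZON, 0, 0, 0, 0))  # optimal expected successes over 10 pulls
```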

But if you do this, you don’t need the independence axiom. You can be consistent and avoid money pumping without it, by not making decisions in isolation, but considering the entire probability space of decisions you will ever make, and choosing the best policies to navigate it.

It’s interesting to note that this actually solves some other problems. Such an agent would pick a policy that one-boxes on Newcomb’s problem, simply because that is the optimal policy, whereas a straightforward implementation of expected utility doesn’t care.


But what if you really like the other mathematical properties of expected utility? What if we could just keep it and change something else, like the probability function or the utility function?

Well, the probability function is sacred IMO. Events should have the same probability of happening (given your prior knowledge), regardless of what utility function you have or what you are trying to optimize. Tampering with it is probably inconsistent too: an agent could exploit you by offering bets in the areas where your beliefs are forced to differ from reality.
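A toy illustration of that exploit, with arbitrary numbers of my own (the 1-in-1,000 event, the payout, and the price are all made up):

```python
# If a scheme forces you to treat a 1-in-1,000 event as if it were 1-in-10,
# an adversary can sell you a ticket paying 100 on that event for a price of 5.
# It looks like a great deal under your distorted belief, and is a steady loss
# under the true frequency.

TRUE_P, DISTORTED_P = 0.001, 0.1
PAYOUT, PRICE = 100, 5

value_to_you = DISTORTED_P * PAYOUT - PRICE   # +5.0: you happily accept
actual_value = TRUE_P * PAYOUT - PRICE        # -4.9: you lose on average
print(value_to_you, actual_value)
```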

The utility function is not necessarily sacred though. It is inherently subjective, with the goal of just producing the behavior we want. Maybe there is some modification to it that could fix these problems.

It seems really inelegant to do this. We had a nice, beautiful system where you could just count the number of lives saved and maximize that. But assume we give up on that. How can we change the utility function to make it work?

Well you could bound utility to get out of mugging situations. After a certain level, your utility function just stops. It can’t get any higher.

But then you are stuck with a bound. If you ever reach it, then you suddenly stop caring about saving any more lives. Now it’s possible that your true utility function really is bounded, but that’s not a fully general solution for all utility functions. And I don’t believe that human utility is actually bounded, but that will have to be a different post.

You could transform the utility function so it’s asymptotic. But this is just a continuous bound, and it doesn’t solve much. It still makes you care less and less about obtaining more utility, the closer you get to the bound.

Say you set your asymptote around 1,000. It could be much larger, but I need an example that is manageable. Now, what happens if you find yourself in a world where all utilities are multiplied by a large number? Say 1,000. E.g. you save 1,000 lives in situations where before you would have saved only 1.

[Figure: an example asymptoting function capped at 1,000. Notice how 2,000 is only slightly higher than 1,000, and everything after that is basically flat.]

Now the utility of each additional life diminishes very quickly. Saving 2,000 lives might have only 0.001% more utility than saving 1,000.

This means that you would not take a 1% risk of losing 1,000 people, for a 99% chance at saving 2,000.
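Here is a small sketch of that calculation, using one possible asymptoting function; the exact shape, u(x) = 1000·(1 − e^(−x/100)), is my own illustrative choice, not a formula from the figure:

```python
import math

# One possible asymptoting utility function capped at 1,000 (an illustrative
# choice of shape, not the post's formula).

def u(lives):
    return 1000 * (1 - math.exp(-lives / 100))

print(u(1000), u(2000))                   # 2,000 lives is barely worth more than 1,000

sure_thing = u(1000)                      # save 1,000 lives for certain
gamble = 0.99 * u(2000) + 0.01 * u(0)     # 99% save 2,000, 1% save nobody
print(sure_thing, gamble)                 # the near-certain 2,000 loses
```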

This is the exact opposite situation of Pascal’s mugging! The probability of the reward is very high. Why are we refusing such an obviously good trade?

What we wanted to do was make it ignore really low probability bets. What we actually did was just make it stop caring about big rewards, regardless of the probability.

No modification to the utility function can fix that, because the utility function is totally indifferent to probability. Handling probability is what the decision procedure is for, and that’s where the real problem is.


In researching this topic I’ve seen all kinds of crazy resolutions to Pascal’s Mugging. Some attack the exact thought experiment of an actual mugger, and miss the general problem of low-probability events with large rewards. Others come up with clever arguments why you shouldn’t pay the mugger, but not any general solution to the problem, and not one that works under the stated premises, where you care about saving human lives equally, and where you assign the mugger more than 1/3↑↑↑3 probability.

In fact, Pascal’s Mugging was originally written just to be a formalization of Pascal’s original wager. Pascal’s wager was dismissed for reasons like involving infinite utilities, the possibility of an “anti-god” that exactly cancels the benefits out, or that God wouldn’t reward fake worshippers. People mostly missed the whole point about whether or not you should take low-probability, high-reward bets.

Pascal’s Mugging showed that, no, the problem arises just fine in finite cases, and the probabilities do not have to exactly cancel each other out.

Some people tried to fix the problem by adding hacks on top of the probability or utility functions. I argued against these solutions above. The problem is fundamentally with the decision procedure of expected utility.

I’ve spoken to someone who decided to just bite the bullet. He accepted that our intuition about big numbers is probably wrong, and we should just do what the math tells us.

But even that doesn’t work. One of the points made in the original Pascal’s Mugging post is that EU doesn’t even converge. There is a hypothesis with even less probability than the mugger’s that promises 3↑↑↑↑3 utility, a hypothesis even less probable than that which promises 3↑↑↑↑↑3 utility, and so on. Expected utility is utterly dominated by ever more improbable hypotheses. The expected utility of every action approaches positive or negative infinity.
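To see the shape of the divergence with toy numbers (these are stand-ins; the real argument uses description-length penalties, which shrink far slower than the promised utilities grow):

```python
# Each successive hypothesis is, say, half as probable as the last but promises
# a squared reward, so the terms p_k * U_k grow without bound and the running
# expected-utility sum never settles. (Stopping at k = 8 only to stay inside
# floating-point range; the real series just keeps going.)

p, U, total = 1.0, 3.0, 0.0
for k in range(1, 9):
    p /= 2        # a bit less probable each time
    U = U ** 2    # vastly larger promised utility each time
    total += p * U
    print(k, p * U, total)
```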

Expected utility is at the heart of the problem. We don’t really want the average of our utility function over all possible worlds, no matter how big the numbers are or how improbable they may be. We don’t really want to trade away utility from the majority of our probability mass to infinitesimal slices of it.

The whole justification for EU being optimal in the infinite case doesn’t apply to the finite real world. The axioms that imply you need it in order to be consistent don’t hold if you don’t assume independence. So it’s not sacred, and we can look at alternatives.

Median utility is just a first attempt at an alternative. We probably don’t really want to maximize median utility either. Stuart Armstrong suggests using the mean of quantiles. There are probably better methods too. In fact there is an entire field of summary statistics and robust statistics that I’ve barely looked at yet.

We can generalize and think of agents as having two functions. The regular utility function, which just gives a numerical value representing how preferable an outcome is. And a probability preference function, which gives a numerical value to each probability distribution of utilities.

Imagine we want to create an AI which acts the same as the agent would, given the same knowledge. Then we would need to know both of these functions, not just the utility function. And they are both subjective, with no universally correct answer. Any function, so long as it converges (unlike expected utility), should produce perfectly consistent behavior.
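A rough sketch of that two-function picture; the particular preference functions below (expected value, median, a mean-of-quantiles in the spirit of Armstrong’s suggestion) are illustrative stand-ins, not a canonical list:

```python
import statistics

# "options" maps an action name to a probability distribution of utilities,
# written as a list of (probability, utility) pairs. A "preference" function
# scores one of these distributions; the agent picks the best-scoring action.

def quantile(dist, q):
    """Utility at cumulative probability q."""
    total = 0.0
    for p, u in sorted(dist, key=lambda x: x[1]):
        total += p
        if total >= q:
            return u

preferences = {
    "expected value":    lambda dist: sum(p * u for p, u in dist),
    "median":            lambda dist: quantile(dist, 0.5),
    "mean of quantiles": lambda dist: statistics.mean(quantile(dist, q) for q in (0.25, 0.5, 0.75)),
}

def choose(options, preference):
    return max(options, key=lambda name: preference(options[name]))

options = {"refuse": [(1.0, 0)], "pay mugger": [(1 - 1e-9, -5), (1e-9, 1e12)]}
for name, pref in preferences.items():
    print(name, "->", choose(options, pref))
```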