Uncommon Utilitarianism #3: Bounded Utility Functions


For context on how I discuss utilitarianism in this sequence, read the first post.

The Proof

There is a mathematical proof that makes a compelling case for bounded utility functions, but it isn’t the whole story.

tl;dr: Vann McGee proves that, under reasonable assumptions about their epistemics, agents with unbounded utility functions are vulnerable to Dutch Books that exploit their willingness to chase high-utility, low-probability outcomes.

Proof Outline

Consider an agent in a world with infinitely many states, where the agent believes that some infinite (not necessarily strict) subset of those states is possible (even though individual states may have probability zero).

If the agent has an unbounded utility function, then you can subject it to a Dutch Book using an infinite sequence of bets about the truth values of a sequence of propositions $E_1, E_2, E_3, \ldots$ about which state obtains:

Bet 1: You lose one util if $E_1$ is true and gain $x_1$ utils if $E_1$ is false.

Bet 2: You lose 2 utils if $E_1$ is false, and you gain $x_2$ utils if $E_1$ is true and $E_2$ is false; otherwise, the bet is called off.

$\vdots$

Bet $n$: You lose $\ell_n$ utils if $E_1, \ldots, E_{n-2}$ are all true and $E_{n-1}$ is false. You gain $x_n$ utils if $E_1, \ldots, E_{n-1}$ are all true and $E_n$ is false. Otherwise, the bet is called off.

Each of these bets has an expected utility of 1 (the stakes $x_n$ and $\ell_n$ are chosen to make this so), making it advantageous to take each of them, assuming that the casino has unbounded utility to hand out. However, in any world with any reality-measure, only finitely many of the bets are won, and the stakes can be arranged so that the agent always loses net utility if it chooses to take the whole infinite bundle of bets.
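To make the structure concrete, here is a small sketch in Python. The probabilities and stakes in it are numbers I picked purely for illustration (they are not McGee's, and the exact values don't matter); the point is just to check mechanically that every bet can have expected utility 1 while the whole bundle guarantees a loss:

```python
from fractions import Fraction as F

# Toy instantiation of the bets above. The probabilities and stakes are my own
# illustrative choices (hypothetical), not McGee's: q[k] is the agent's
# probability that E_k is the first false proposition in the sequence.
N = 8                                    # truncate the (really infinite) book at N bets
q = {1: F(3, 4)}
for k in range(2, N + 1):
    q[k] = F(1, 2) ** (k + 1)

# Bet 1: lose 1 util if E_1 is true, gain x_1 utils if E_1 is false.
# Bet n >= 2: lose l_n utils if E_{n-1} is the first false proposition,
#             gain x_n utils if E_n is the first false proposition,
#             and the bet is called off otherwise.
loss, gain = {1: F(1), 2: F(2)}, {}
gain[1] = (1 + loss[1] * (1 - q[1])) / q[1]       # chosen so that EU(bet 1) = 1
for n in range(2, N + 1):
    gain[n] = (1 + loss[n] * q[n - 1]) / q[n]     # chosen so that EU(bet n) = 1
    loss[n + 1] = gain[n]                         # bet n+1's loss swallows bet n's gain

# Every bet, taken on its own, has expected utility exactly 1 ...
for n in range(1, N + 1):
    p_lose = (1 - q[1]) if n == 1 else q[n - 1]
    assert -loss[n] * p_lose + gain[n] * q[n] == 1

# ... yet the bundle loses utility in every world. Here k is the index of the
# first false proposition (None means all of E_1, ..., E_N are true).
def net(k):
    total = gain[1] if k == 1 else -loss[1]
    for n in range(2, N + 1):
        if k == n - 1:
            total -= loss[n]
        elif k == n:
            total += gain[n]
    return total

for k in list(range(1, N)) + [None]:
    assert net(k) < 0
# (k == N escapes with a profit only because we truncated the book at N bets;
#  in the infinite sequence there is always a bet N+1 waiting to collect.)
print("each bet has expected utility 1, yet the agent loses in every world")
```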

McGee goes into more detail on the mathematical nuances of this. Peter De Blanc investigates a more general and abstract angle on this problem.

Beyond the Proof

The above proof implies the following trilemma:

  1. Agents should defy those reasonable epistemic assumptions and instead have unreasonable epistemics (by concentrating all their probability mass into finitely many outcomes in the infinite outcome space).

  2. Agents should have bounded utility functions.

  3. Agents should do some other weird thing, like whatever this post is hinting at.

I won’t be responding to that post here, and I think we can agree to not do the thing labeled “unreasonable epistemics”, since 0 and 1 are not probabilities.

This leaves us with option 2, but does it really make sense to have a bounded utility function? I’m going to try and come at this from several different angles, in the hopes of conveying why bounding utility makes sense as a property of coherent agents.

The Proof is Limitedly Useful

The proof assumes an infinite sequence of bets, which nobody ever has time to execute, so there’s some question as to whether this conclusion holds up in real life. For that, I reference this passage from McGee’s paper which gives a reason to care about this argument, albeit one that is more poetic than satisfying:

Even a simply infinite sequence of bets is something in which we mortals never have a chance to participate, so as long as our interest in decision theory is purely practical, we needn’t fret over the example, whether it’s presented to us in the static or dynamic version. There is, however, some theoretical interest in trying to devise a standard of rationality so steadfast that one could safely rely on it even if confronted with an infinite array of choices. We only have occasion to make finitely many choices, but it would be surprising if the finitude of the set of choices we make were a prerequisite for rationality.

I don’t have an argument that is properly satisfying, but I do have several different suggestive intuition pumps that constitute much of the reason that I think of myself as having a bounded utility function.

Other Paradoxes of Unbounded Utility

Pascal

There is a classic situation in decision theory called Pascal’s Wager:

Some people claim there is a God who will send you to heaven, the land of infinite utility, if and only if you Believe in Him and perform some specific rituals. You’re a good Bayesian, and you don’t assign literally zero probability to this God being real. Infinite utility multiplied by a positive probability is infinity, so you should pick up the infinite expected utility on the ground and join this religion.

This argument is generally considered around LessWrong to be Wrong, and if you haven’t seen it before I encourage you to look for flaws yourself.




Done looking for flaws? Okay. Here is mine:

This argument proves far too many things to all be the optimal action. For every possible action, there is a logical possibility of a god that rewards that specific action with infinite utility, and so the expected utility of every action is infinite, even the ones we class as obviously stupid, like donating all of your money to Effective Evil. This is an argument from absurdity against infinite utility functions, but not quite against unbounded ones.
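Spelled out as (very rough) arithmetic, with $p_a$ standing for whatever sliver of credence you give to a god who infinitely rewards action $a$:

$$\mathbb{E}[U(a)] \;\ge\; p_a \cdot \infty \;=\; \infty \quad \text{for every action } a \text{ with } p_a > 0,$$

so every action, including the obviously stupid ones, ties at infinite expected utility, and the wager picks out nothing in particular.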

Pascal’s Mugging is a variant of this that classically goes as follows:

There is a Guy on the street who asks you for $5, threatening that otherwise He will step outside The Matrix and simulate 3↑↑↑↑3[1] suffering humans.

This steps around the problem of all the infinities being the same, but in doing so it creates several more minor problems and one major one:

  • At least for me, my utility function seems to be sublinear in number of humans, and so this Very Big Number is uninteresting to me[2], just like the original Pascal’s Wager’s Infinity.

    • We can get around this by talking about utils instead.

  • Most of the framings around this seem like threats, decision-theoretically, and LDT agents don’t give in to threats, and thus receive fewer threats.

    • This raises the question of how to deal with entities that just intrinsically want to mug you, rather than threatening to do so for some other reason. There isn’t a good way around these that I know of.

  • Giving in to this mugging means you are a very exploitable agent, and anyone can extract free money from you.

If your utility function is bounded, then you aren’t exploitable in this way.

St. Peter

The St. Petersburg paradox proposes a game:

I flip a fair coin until it comes up tails for the first time, and I note down the total number of flips (including the final tails) as $n$. Afterwards, I pay you $\$2^n$.

The question is, how much should you pay to enter into this game? Once again, I encourage you to work it out if you haven’t seen this before.




We can calculate the expected payout as follows (the game lasts exactly $n$ flips with probability $2^{-n}$):

$$\mathbb{E}[\text{payout}] \;=\; \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot \$2^n \;=\; \sum_{n=1}^{\infty} \$1 \;=\; \infty$$
Hmmmm.

That’s weird.

If we’re measuring payouts in money, then this runs into the finite funds of St. Peter’s Casino, as well as the fact that many agents have sublinear utility in money.[3] If payouts are given in utility instead, then this runs into the same infinity problem as Pascal’s Wager. Among the solutions to both the money version and the utility version are, of course, bounded utility functions.
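For the utility version, the fix is visible directly in the sum: if utility never exceeds some bound $B$ (a symbol I'm introducing here purely for illustration), then

$$\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} \frac{1}{2^n}\, U(\text{payout after } n \text{ flips}) \;\le\; \sum_{n=1}^{\infty} \frac{B}{2^n} \;=\; B,$$

so the expected utility of playing is finite, and the paradoxical "pay any price to enter" conclusion disappears.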

Maxima and Minima

Outside the realm of thought experiments with mysterious Russian games and interdimensional wizards, it still makes sense to me to bound my utility function. I won’t say this is quite “practical” in the sense that I use it for real decisions in my real life, but it does genuinely provide ontological clarity.

One equivalent rephrasing of “my utility function is bounded” is “there is a highest and a lowest value my utility function can never go beyond (even if it only ever asymptotes toward them)”.

This fact isn’t quite explained by my concept of Sublinear Utility in Population, even though that concept addresses utility functions asymptoting to some fixed value, as mhampton (correctly) notes in a comment (lightly reformatted and truncated):

This applies only to interpersonal aggregation, and so if you can have unboundedly high utility in one individual, your utility function is not truly bounded, right? I.e., it would get you out of Pascal’s muggings of the form, “Pay me five dollars and I will create $N$ happy copies of Alice” but not of the form “Pay me five dollars and I will create one copy of Alice and give her $N$ utils.”

When I try to think about getting $N$ utils, this doesn’t really seem well defined. When I try to construct an ideal utopia or anti-ideal dystopia in my head, this doesn’t intuitively look like the scale of goodness keeps going up or down infinitely; it feels like any changes I make are asymptoting towards an optimum, whether it’s in making there be more happy people or making one person really really happy, or any of the other things I value.

My model is that, if my utility function were actually unbounded, the process of finding a utopia would feel like forever going “hmmm, I could make this world a substantial amount better by just changing this set of details” and getting returns in utility of similar size each time, rather than running into diminishing returns.

To overuse an example, if my utility function were exactly the number of paper clips in the universe, then each time I tried to imagine a concrete utopia, I would always be able to add more paper clips to get a better world, without ever asymptoting or plateauing in utility, and without reaching an optimum.
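For a toy contrast (these two functions are my own illustrative picks, not anything canonical):

$$U_{\text{bounded}}(n) = U_{\max}\left(1 - e^{-n/c}\right) \;\xrightarrow[n \to \infty]{}\; U_{\max}, \qquad U_{\text{clips}}(n) = n \;\xrightarrow[n \to \infty]{}\; \infty.$$

The first plateaus no matter how many more paper clips you add once $n$ is well past $c$; the second keeps rewarding “just add more clips” forever, which is exactly the felt difference I’m pointing at.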

Maybe some people’s intuitive values are structured like this, in which case they can work things out between them and the unbounded utility paradoxes I’ve listed here. I don’t have a good sense of how many people have introspective assessments of their values matching mine in this respect, but I’m curious to find out.

  1. ^

    A very, very, very large number; see Knuth’s up-arrow notation for the definition.

  2. ^

    This is not to say that I’m categorically uninterested in Very Big Numbers; I’m just not inclined to care when the number of people is so much larger than the number of atoms in the universe.

  3. ^

    “But the casino must have finite funds available” and “But I have sublinear utility in money”, respectively.