Is Equality Really about Diminishing Marginal Utility?

In his famous “Utility Monster” thought experiment, Robert Nozick proposes a creature that does not experience diminishing marginal utility from resource consumption, and argues that this poses a problem for utilitarian ethics. Why? Utilitarian ethics, while highly egalitarian in real-life situations, places no intrinsic value on equality. The reason utilitarian ethics tends to favor equality is that human beings seem to experience diminishing returns when converting resources into utility. Egalitarianism, according to this framework, is good because spreading resources between people avoids those diminishing returns and maximizes the total amount of utility people generate, not because it’s actually good for people to have equal levels of utility.

The problem the Utility Monster poses is that, since it does not experience diminishing marginal utility, there is no reason, under a traditional utilitarian framework, to share resources between it and the other inhabitants of its world. It would be completely justified in killing other people and taking their things for itself, or in enslaving them for its own benefit. This conclusion seems counterintuitive to Nozick, and to many other people.

There seem to be two possible explanations for this. One, of course, is that most people’s intuitions are simply wrong in this particular case. The explanation I am interested in exploring, however, is the other one: that equality is valuable for its own sake, not just as a side effect of diminishing marginal utility.

Now, before I go any further I should clarify what I mean by “equality.” There are many different types of equality, not all of which are compatible with each other. What I mean is equality of utility: everyone has the same level of satisfied preferences, happiness, and whatever else constitutes “utility.” This is not the same thing as fiscal equality, since people differ in their ability to convert money and resources into utility (people with horrible illnesses, for instance, are worse at doing so than the general population). It is also important to stress that “lifespan” should be factored in as part of the utility that is to be equalized (i.e., killing someone increases inequality). Otherwise one could achieve equality of utility by killing all the poor people.

So if equality is valuable for its own sake, how does one factor it into utilitarian calculations? It seems wrong to simply replace utility maximization with equality maximization. That would imply that a world where everyone had 10 utilons and a world where everyone had 100 utilons are morally equivalent, which seems wrong, to say the least.

What about making equality lexically prior to utility maximization? That seems just as bad. It would imply, among other things, that in a stratified world where some people have far greater levels of utility than others, it would be morally right to take an action that harmed every single person in the world, as long as it hurt the best off slightly more than the worst off. That seems insanely wrong. And the Utility Monster thought experiment already argues against the reverse ordering, making utility maximization lexically prior to equality.

So it seems like the best option would be to treat maximizing utility and increasing equality as two separate values. How, then, to trade one off against the other? If there is some straightforward, one-to-one exchange rate between them, this does nothing to dissolve the problem of the Utility Monster: a monster good enough at utility generation could simply produce so much utility that no amount of equality could equal its output.

The best solution I can see would be for utility maximization and equality to have diminishing returns relative to each other. This would mean that in a world with high equality but low utility, raising utility would be more important, while in a world with low equality and high utility, establishing equality would be more important.

This solution deals with the utility monster fairly effectively. No matter how much utility the monster can generate, it is always better for it to share some of its resources with other people.
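To make the idea a bit more concrete, here is a minimal toy model in Python. Everything in it is my own illustrative assumption rather than anything the argument above commits to: ordinary people are assumed to turn resources into utility with diminishing returns, the monster converts them linearly and faster than any person ever could (so a plain utility-maximizer would hand it everything), “equality” is measured as the ratio of the worst-off utility to the best-off utility, and the overall value of a world applies a concave transform to each component so that total utility and equality have diminishing returns relative to each other.

```python
import numpy as np

# Toy model only. Every functional form and constant below is an assumption
# made for illustration; the argument above does not commit to any formula.

PEOPLE = 10             # number of ordinary people
RESOURCES = 100.0       # total resources to allocate
EQUALITY_WEIGHT = 50.0  # how heavily the equality term counts (arbitrary)

def individual_utilities(share_to_people, k):
    """Everyone's utility when a given fraction of resources goes to the people.

    Ordinary people convert resources with diminishing returns (log), while
    the monster converts them linearly with slope k > 1, so maximizing the
    plain sum of utilities would hand the monster everything.
    """
    r_each = share_to_people * RESOURCES / PEOPLE
    r_monster = (1.0 - share_to_people) * RESOURCES
    return np.array([k * r_monster] + [np.log1p(r_each)] * PEOPLE)

def overall_value(share_to_people, k):
    """Combine total utility and equality with diminishing returns
    relative to each other (a square-root transform on each)."""
    u = individual_utilities(share_to_people, k)
    total = u.sum()
    equality = u.min() / u.max()  # 0 if someone has nothing, 1 if all are equal
    return np.sqrt(total) + EQUALITY_WEIGHT * np.sqrt(equality)

shares = np.linspace(0.0, 1.0, 2001)      # candidate fractions given to the people
for k in (2.0, 5.0, 20.0):                # increasingly talented monsters
    take_all = overall_value(0.0, k)
    best = max(shares, key=lambda s: overall_value(s, k))
    print(f"k={k:>4.0f}: take-all value = {take_all:6.2f}, "
          f"best value found = {overall_value(best, k):6.2f} at share = {best:.3f}")
```

Under these particular weights and transforms, every monster tried does better, in overall-value terms, to share rather than take everything; in fact the equality term is weighted strongly enough here that the best allocation found leaves the monster only a small slice of the resources. Different weights or transforms would strike a different balance, which is part of why the further questions below, such as egalitarianism versus prioritarianism, still matter.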

Now, you might notice that this doesn’t eliminate every aspect of the utility monster problem. As long as the returns generated by utility maximization do not diminish to zero, you can always posit an even more talented monster, and you can then argue that the society created by having that monster enslave the rest of the populace is better than one where a less talented monster shares with the rest of the populace. However, this new society would instantly become better still if the new Utility Monster were forced to share its resources with the rest of the population.

This is a huge improvement over the old framework. Ordinary utility-maximizing ethics would not merely argue that a world where a Utility Monster enslaved everyone else might be a better world; it would argue that such a world is optimal, the best possible world given the constraints its inhabitants face. Under this new ethical framework, however, that is never the case. The optimal world, under any given set of constraints, is one where a utility monster shares with the rest of the population.

In other words, under this framework, if you were to ask, “Is it good for a utility monster to enslave the rest of the population?” the answer would always be “No.”

Obviously the value of equality has many other aspects to be considered. For instance, is it better described by traditional egalitarianism or by prioritarianism? Values are often more complex than they first appear.

It also seems quite possible that there are other facets of value besides maximizing utility and equality of utility. For instance, total and average utilitarianism might be reconciled by making them two separate values that are both important. Other potential candidates include prioritarian concerns (if they are not included already), number of worthwhile lives (most people would consider a world full of people with excellent lives better than one inhabited solely by one ecstatic utility monster), consideration of prior-existing people, and perhaps many, many more. As with utility and equality, these values would have diminishing returns relative to each other, and an optimum society would be one where all receive some measure of consideration.

An aside. This next section is not directly related to the rest of the essay, but develops the idea in a direction I thought was interesting:

It seems to me that the value of equality could be the source of a disagreement in population ethics. Several people (Robin Hanson, most notably) have argued that it would be highly desirable to create huge numbers of poor people with lives barely worth living, and that this may well be better than having a smaller, wealthier population. Many other people consider this to be a bad idea.

The unspoken assumption in this argument is that multiple lives barely worth living generate more utility than a single excellent life. At first this seems obviously true, based on the following chain of logic:

1. It is obviously wrong for Person A, who has a life barely worth living, to kill Person B, who also has a life barely worth living, and use B’s property to make their own life excellent.

2. The only reason something is wrong is that it decreases the level of utility.

3. Therefore, killing Person B must decrease the level of utility.

4. Therefore, two lives barely worth living must generate more utility than a single excellent life.

However, if equality is valued for its own sake, then the reason it is wrong to kill Person B might be the vast inequality in various aspects of utility (lifespan, for instance) that their death would create between A and B, rather than any loss of total utility.

This means that a society with a smaller population living great lives might very well generate far more utility than a larger society whose inhabitants live lives barely worth living.