Unbounded linear utility functions?

The LW community seems to assume, by default, that “unbounded, linear utility functions are reasonable.” That is, if you value the existence of 1 swan at 1.5 utilons, then 10 swans should be worth 15, etc.
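For concreteness, a minimal sketch of what such a function looks like (the 1.5-utilons-per-swan figure is just the example above):

```python
def linear_utility(n_swans, utilons_per_swan=1.5):
    # Unbounded, linear utility: value scales directly with quantity.
    return utilons_per_swan * n_swans

print(linear_utility(1))      # 1.5
print(linear_utility(10))     # 15.0
print(linear_utility(10**9))  # 1.5e9: no saturation, however large n gets
```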

In his post on scope insensitivity, Yudkowsky argues that the nonlinearity of personal utility functions is a logical fallacy.

However, unbounded, linearly increasing utility functions lead to conundrums such as Pascal’s Mugging. A recent discussion topic on Pascal’s Mugging suggests ignoring probabilities that are too small. Such extreme measures are not necessary, though, if tamer utility functions are used: one imagines a typical personal utility function to be bounded and nonlinear.
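To illustrate how a bounded utility function defuses the mugging without any ad-hoc probability cutoff, here is a toy comparison; the saturating exponential form and all the numbers are my own assumptions, chosen purely for illustration:

```python
import math

def linear_utility(lives):
    # Unbounded, linear: each additional life adds the same utility.
    return float(lives)

def bounded_utility(lives, scale=1e9):
    # Bounded, nonlinear: utility saturates toward 1 as lives >> scale.
    return 1 - math.exp(-lives / scale)

p = 1e-10       # the mugger's (tiny) claimed probability
payoff = 1e100  # the mugger's (astronomical) claimed payoff, in lives

# Expected utility of paying the mugger under each function:
print(p * linear_utility(payoff))   # 1e90: the mugging "works"
print(p * bounded_utility(payoff))  # at most 1e-10: the mugging fails
```

No probability is ignored here; the bounded function simply cannot be bribed with arbitrarily large payoffs.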

In that recent discussion topic, V_V and I questioned the adoption of such an unbounded, linear utility function. I would argue that the nonlinearity of utility functions is not a logical fallacy.

To make my case clear, I will first lay out my personal interpretation of utilitarianism. Utility functions are mathematical constructs that can be used to model individual or group decision-making. However, it is unrealistic to suppose that every individual actually has a utility function, or even a preference ordering; at best, one can find a utility function which approximates the behavior of the individual, as studies demonstrating the inconsistency of human preferences confirm. The decisions made by coordinated groups (corporate partners, citizens in a democracy, or the entire community of effective altruists, say) can likewise be more or less well approximated by a utility function; presumably, the accuracy of the approximation depends on the cohesion of the group.

Utilitarianism, as proposed by Bentham and Mill, is something different: an ethical framework based on an idealized utility function. Rather than using a utility function to model group decision-making, Bentham and Mill propose to use one to guide decision-making, in the form of an ethical theory. It is important to distinguish these two use-cases of utility functions, which might be termed descriptive utility and prescriptive utility.

But what is ethics? I hold the hard-nosed position that moral philosophies (including utilitarianism) are human inventions which serve the purpose of facilitating large-scale coordination. Another way of putting it is that moral philosophy is a manifestation of the limited superrationality that our species possesses. [Side note: one might speculate that the intellectual aspect of human political behavior, of forming alliances based on shared ideals (including moral philosophies), is a memetic or genetic trait which propagated due to positive selection pressure: moral philosophy is necessary for the development of city-states and larger political entities, which in turn rose to become the dominant form of social organization in our species. But this is a separate issue from the discussion at hand.]

In this larger context, we are prepared to evaluate the relative worth of a moral philosophy, such as utilitarianism, against competing philosophies. If the purpose of a moral philosophy is to facilitate coordination, then an effective moral philosophy is one that can actually hope to achieve that kind of coordination. Utilitarianism is a good candidate for facilitating global-level coordination: it is conceptually simple, most people can agree with its principles, and it provides a clear framework for decision-making, provided that a suitable utility function can be identified, or at least that the properties of the “ideal utility function” can be debated. Furthermore, utilitarianism and related consequentialist moralities are arguably better equipped to handle tragedies of the commons than competing deontological theories.

And if we accept utilitarianism, and if our goal is to facilitate global coordination, we can go further and evaluate the properties of any proposed utility function by the same criterion as before: how well the proposed utility function facilitates global coordination. Will the proposed utility function find broad support among the key players in the global community? Unbounded, linearly increasing utility functions clearly fail, because few people would support conclusions such as “it’s worth spending all our resources to prevent a 0.001% chance that 1e100 human lives will be created and tortured.”
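The arithmetic behind that conclusion, assuming utility linear in lives:

```python
p = 0.001 / 100   # a 0.001% chance
lives = 1e100
print(p * lives)  # 1e95 expected lives: under linear utility, this swamps
                  # any finite expenditure of present-day resources
```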

If so, why are such utility functions so dominant in the LW community? One cannot overlook the biased composition of the LW community as a potential factor: its members are generally proficient in mathematical or logical thinking, but less adept than the general population in empathetic skills. Oversimplified theories, such as linear unbounded utility functions, appeal more strongly to this type of thinker, while more realistic but complicated utility functions are instinctively dismissed as “illogical” or “irrational”, when the real reason they are dismissed is not that they have actually been shown to be illogical, but that they are perceived as uglier.

Yet another reason stems from the motives of the founders of the LW community, who make a living primarily out of researching existential risk and friendly AI. Since existential risks are exactly the kind of low-probability, long-term, high-impact events which would tend to be neglected by “intuitive” bounded, nonlinear utility functions, but favored by unintuitive, unbounded linear utility functions, it is in the founders’ best interests to personally adopt a form of utilitarianism employing the latter type of utility function.

Finally, let me clarify that I do not dispute the existence of scope insensitivity. I think the general population is ill-equipped to reason about problems on a global scale, and that education could help remedy this kind of scope insensitivity. However, even if natural utility functions asymptote far too early, I doubt that the end result of proper training against scope insensitivity would be an unbounded linear utility function; rather, it would still be a nonlinear utility function, but one which asymptotes at a larger scale.
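One way to picture that end state, under my (entirely hypothetical) assumption that training moves the scale of the asymptote rather than removing it:

```python
import math

def bounded_utility(lives, scale):
    # Same saturating shape in both cases; only where it flattens differs.
    return 1 - math.exp(-lives / scale)

naive_scale = 1e2    # hypothetical: untrained intuition saturates near ~100 lives
trained_scale = 1e7  # hypothetical: training pushes the asymptote much further out

for lives in (1e2, 1e4, 1e6, 1e8):
    print(f"{lives:.0e}: naive={bounded_utility(lives, naive_scale):.4f}, "
          f"trained={bounded_utility(lives, trained_scale):.4f}")
```

The trained curve still discriminates between thousands and millions of lives where the naive one has already flattened, yet it remains bounded, so no mugging-style argument gets traction.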