To capture anti-death intuitions, include memory in utilitarianism

EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours, that’s ethics for you. :)

In the last Stupid Questions Thread, solipsist asked

Making a person and unmaking a person seem like utilitarian inverses, yet I don’t think contraception is tantamount to murder. Why isn’t making a person as good as killing a person is bad?

People raised valid points, such as ones about murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death was never found out by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with the way that the utility functions in utilitarianism (which I’ll shorten to UFU from now on) behave.

Consider these commonly held intuitions:

  1. If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if there was a single person who lived for the whole time.

  2. If a living person X is painlessly murdered at time T, then this is worse than if X’s parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.

Also, the next intuition isn’t necessarily commonly held, since it’s probably deemed mostly science fictiony, but in transhumanist circles one also sees:

  • If someone is physically dead, but not information-theoretically dead and a close enough replica of them can be constructed and brought back, then bringing them back is better than creating an entirely new person.

Assume that we think the instrumental arguments in favor of these intuitions (like societies with fewer murders being better off for everyone) are insufficient—we think that the intuitions should hold even if disregarding them had no effect on anything else. Now, many forms of utilitarianism will violate these intuitions, saying that in all cases both of the offered scenarios are equally good or equally bad.

The problem is that UFUs ignore the history of the world, looking only at individual states. By analogy to stochastic processes, we could say that UFUs exhibit the Markov property: that is to say, the value of a state depends only on that state, not on the sequence of events that preceded it. When deciding whether a possible world at time t+1 is better or worse than the actual world at t, UFUs do not look at any of the earlier times. Actually, UFUs do not really even care about the world at time t: all they do is compare the possible worlds at t+1, and choose the one with the highest happiness (or lowest suffering, or highest preference satisfaction, or…) as compared to the alternatives. As a result, they do not care about people getting murdered or resurrected, aside from the impact that this has on the general level of happiness (or whatever).
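To make the Markov property concrete, here is a minimal sketch (the names and the dict-based representation are my own, not from the post) of a memoryless UFU: it scores a world state by summing happiness, so two histories with identical final states are indistinguishable to it.

```python
# A memoryless "UFU": the value of a state depends only on that state.
def memoryless_utility(state):
    """state maps person -> happiness; history is invisible to this function."""
    return sum(state.values())

# History A: person X lives through both time steps.
world_a = [{"X": 50}, {"X": 50}]
# History B: X is painlessly killed and replaced by an equally happy Y.
world_b = [{"X": 50}, {"Y": 50}]

# The UFU only ever sees the latest state, so the two histories tie.
print(memoryless_utility(world_a[-1]) == memoryless_utility(world_b[-1]))  # True
```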

We can fix this by incorporating a history into the utility function. Suppose that a person X is born at time T: we enter the fact “X was born” into the utility function’s memory. From then on, for every future state the UF checks whether or not X is still alive. If yes, good; if not, that state loses one point of utility. Now the UF has a very large “incentive” to keep X from getting killed: if X dies, then every future state from that moment on will be a point worse than it would otherwise have been. If we assume the lifetime of the universe to be 10^100 years, say, then with no discounting, X dying means a loss of 10^100 points of utility. If we pick an appropriate value for the “not alive anymore” penalty, it won’t be so large as to outweigh all other considerations, but it will be enough that situations with unnecessary death are evaluated as clearly worse than ones where that death was prevented.
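One way to sketch this in code (my own formalization, with an arbitrary penalty value): the UF remembers everyone who was ever born and docks a fixed penalty, at every time step, for each remembered person who is absent from the evaluated state.

```python
DEATH_PENALTY = 10  # tunable: large enough to matter, not so large it dominates

def utility_with_memory(history):
    """history is a list of states (person -> happiness), oldest first."""
    ever_born = set()
    total = 0
    for state in history:
        ever_born.update(state)           # remember every birth
        total += sum(state.values())      # ordinary happiness of this state
        for person in ever_born:
            if person not in state:       # born earlier, dead now
                total -= DEATH_PENALTY    # the penalty recurs at every step
    return total

# Murder-and-replace is now strictly worse than one continuous life:
print(utility_with_memory([{"X": 50}, {"X": 50}]))  # 100
print(utility_with_memory([{"X": 50}, {"Y": 50}]))  # 90
```

Because the penalty recurs at every future step, a death early in a very long history costs roughly one penalty per remaining step, which is what makes the 10^100-year argument go through.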

Similarly, if it becomes possible to resurrect someone from physical death, then that is better than creating an entirely new life, because it will allow us to get rid of the penalty of them being dead.

This approach could also be used to develop yet another attack on the Repugnant Conclusion, though the assumption we need to make for that may be more controversial. Suppose that X has 50 points of well-being at time T, whereas at T+1, X only has 25 points of well-being, but we have created another person Y who also has 25 points of well-being. UFUs would consider this scenario equally as good as the one where there is no person Y and X keeps their 50 points. We can block this by maintaining a memory of the peak well-being that anyone has ever had, and, if someone falls below their past peak, applying the difference as a penalty. So if X used to have 50 points of well-being but now has only 25, we apply an extra −25 to the utility of that scenario.

This captures the popular intuition that, while a larger population can be better, a larger population that comes at the cost of reducing the well-being of people who are currently well off is worse, even if the overall utility was somewhat greater. It’s also noteworthy that if X is dead, then their well-being is 0, which is presumably worse than their peak well-being, so there’s an eternal penalty applied to the value of future states where X is dead. Thus this approach, of penalizing states by the difference between the current and peak well-being of the people in those states, can be thought of as a generalization of the “penalize any state in which the people who once lived are dead” approach.
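The generalized rule from this section can be sketched as follows (again my own formalization, not the author's): each state is penalized by the gap between every person's current and peak well-being, and the dead are treated as having 0 well-being, so the earlier death penalty falls out as a special case.

```python
def utility_with_peaks(history):
    """history is a list of states (person -> well-being), oldest first."""
    peak = {}                                # person -> highest well-being seen
    total = 0
    for state in history:
        for person, wb in state.items():
            peak[person] = max(peak.get(person, wb), wb)
        for person, best in peak.items():
            current = state.get(person, 0)   # the dead have 0 well-being
            total += current
            total -= max(0, best - current)  # penalty for falling below peak
    return total

# X keeping 50 points vs. X dropping to 25 while an equally happy Y appears:
print(utility_with_peaks([{"X": 50}, {"X": 50}]))           # 100
print(utility_with_peaks([{"X": 50}, {"X": 25, "Y": 25}]))  # 75
# If X simply dies, the penalty equals their full peak well-being:
print(utility_with_peaks([{"X": 50}, {}]))                  # 0
```

Note that both final states in the first comparison contain 50 points of raw happiness, so a plain UFU would score the histories equally; only the peak-memory penalty separates them.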