I think we can add a positive term as well: we gain some utility for happiness that once existed but doesn’t any more. E.g. we assign more utility to the state “there used to be a happy hermit, then she died” than the state “there used to be a sad hermit, then she died”. For certain values, this would be enough to be better than the state “there has never been a hermit”, which doesn’t get the “dead hermit” loss, but also doesn’t get the “past happiness” bonus.
Hmm. You need to avoid the problem where you might want to exploit the past happiness bonus infinitely. The past happiness bonus needs to scale at least linearly with the duration of the life lived; otherwise we’d want to create as many short happy lives as possible, so as to collect as many perpetual past-happiness bonuses as possible.
Say our original plan was that for every person who’s died, we would continue accruing utility forever at a rate equal to the average rate they caused us to accrue it over their life. Then making this adjustment means multiplying that average by their lifespan. Which is equivalent to everything that happens causing utility starting a continuous stream of utility forever, irrespective of the person who experienced it. But that is equivalent to scaling everything in a utilitarianism that doesn’t care about death by a factor of “t”, and taking the limit as t goes to infinity. Which is equivalent to ordinary utilitarianism, since no big scaling factor applied to everything at once will change anything.
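The scale-invariance step at the end of that argument can be sketched in a few lines of Python (the outcome values are made up; the point is that a common positive scaling factor never changes which outcome is preferred):

```python
# Hypothetical utilities for three candidate worlds.
outcomes = {"world_a": 3.0, "world_b": 5.0, "world_c": -1.0}

def best(utilities):
    """Return the outcome with the highest utility."""
    return max(utilities, key=utilities.get)

# Scaling every utility by the same positive factor t leaves the
# preference ordering (and hence the chosen world) unchanged.
for t in (1.0, 1e3, 1e9):
    scaled = {world: t * u for world, u in outcomes.items()}
    assert best(scaled) == best(outcomes)

print(best(outcomes))  # → world_b, at every scale
```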
By the way, if one of these ideas works, we should call it WWDILEIEU (What We Do In Life Echoes In Eternity Utilitarianism) . Or if that’s too long, then Gladiator Utilitarianism.
Let me make an attempt of my own...
What if after a person’s death, we accumulate utility at a rate equal to the average rate they accumulated it over their lifetime, multiplied by the square of the duration of their lifetime?
Then we want happy lifetimes to be as long as possible, and we aren’t afraid to create new people if their lives will be good. Although...
Perhaps if someone has already suffered enough, but their life is now going to become positive, and them living extremely long is not a possibility, we’ll want to kill them and keep their past suffering from accumulating any more scaling factor.
If there are people whose lifespans we can’t change, all of them mortal, some with longer lifespans than others, and we have limited resources to distribute one-hour periods of increased happiness (much shorter than any of the lifespans), we will drastically favor those whose lifespans are longer.
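A minimal sketch of why, with made-up numbers. Under the lifespan-squared rule, the post-death accrual rate is (total lifetime utility / L) × L² = total lifetime utility × L, so the same one-hour happiness bonus buys a post-death rate increase proportional to the recipient’s lifespan:

```python
# Lifespan-squared rule: post-death accrual rate
#   = (lifetime average rate) * lifespan^2
#   = (total lifetime utility / L) * L^2.

def post_death_rate(total_utility, lifespan):
    """Rate at which a dead person's life keeps generating utility."""
    avg_rate = total_utility / lifespan
    return avg_rate * lifespan ** 2

# Two people with the same baseline lifetime utility but different
# lifespans (in hours). All numbers are hypothetical.
short_life = 50 * 365 * 24    # ~50 years
long_life = 100 * 365 * 24    # ~100 years
baseline = 1000.0
bonus = 10.0                  # one happy hour, worth 10 extra utility

gain_short = post_death_rate(baseline + bonus, short_life) - post_death_rate(baseline, short_life)
gain_long = post_death_rate(baseline + bonus, long_life) - post_death_rate(baseline, long_life)

print(gain_short, gain_long)   # the gain is bonus * lifespan
print(gain_long / gain_short)  # ≈ 2.0: twice the lifespan, twice the gain
```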
If you have a limited supply of “lifespan juice”, which when applied to someone increases their lifespan by a fixed time per liter, and a population already alive, each member of which has a fixed and equal quality of life, you want to give all the juice to one person. Dividing it up is as bad as “dividing a single person up”: killing them partway through what would otherwise be their lifespan and replacing them with a new person.
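This is just the convexity of L²: with a fixed, equal quality-of-life rate q, each person’s post-death rate is q·L², so the total is q·ΣL², and with ΣL fixed, a sum of squares is maximized by concentrating. A sketch with hypothetical numbers:

```python
# Equal quality-of-life rate q means post-death rate = q * L^2 per
# person, so total post-death rate = q * sum of squared lifespans.
# Hypothetical setup: 4 people, base lifespan 60 years, and juice
# worth 40 extra years to distribute.

q = 1.0
base = [60.0, 60.0, 60.0, 60.0]

def total_post_death_rate(lifespans):
    return q * sum(L ** 2 for L in lifespans)

split = [L + 10.0 for L in base]            # juice divided evenly
concentrated = [base[0] + 40.0] + base[1:]  # all juice to one person

print(total_post_death_rate(split))         # 4 * 70^2 = 19600.0
print(total_post_death_rate(concentrated))  # 100^2 + 3 * 60^2 = 20800.0
```

The concentrated allocation wins, and by convexity any intermediate split falls between the two.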