If we pick an appropriate value for the “not alive anymore” penalty, then it won’t be so large as to outweigh all other considerations, but it will be large enough that situations involving an unnecessary death are evaluated as clearly worse than ones where that death is prevented.
Under your solution, every life created implies infinite negative utility. Due to thermodynamics or whatever (big rip? other cosmological disaster that happens before heat death?) we can’t keep anyone alive forever. No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
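To make the arithmetic behind this explicit (writing ε for the per-unit-time penalty and T_d for the time of death, labels of mine rather than anything in the original proposal):

```latex
% Any constant penalty rate \epsilon > 0, accruing from the moment of
% death T_d onward without end, contributes unbounded disutility:
\int_{T_d}^{\infty} \epsilon \,\mathrm{d}t = \infty
  \qquad \text{for every } \epsilon > 0,
% so however small \epsilon is, it eventually swamps any finite amount
% of utility the life itself produced.
```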
If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so that before a person is created, you don’t care about their death. One weird result of this is that if there will soon be a factory that rapidly creates and then painlessly destroys people, we don’t object (and while the factory is running, we feel terrible about everything that has happened in it so far, but we still don’t care to stop it). Or to put it in less weird terms, we won’t object to spreading some kind of poison which affects newly developing zygotes, painlessly reducing their future lifespan.
There’s also the incentive for an agent with this system to self-modify to stop changing their utility function over time.
No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
That’s true, but note that if e.g. 20 billion people have died up to this point, then that penalty of −20 billion gets applied equally to every possible future state, so it won’t alter the relative ordering of those states. So the fact that we’re getting an infinite amount of disutility from people who are already dead isn’t a problem.
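Spelled out, the point is that an already-incurred penalty acts as a constant offset, and a constant offset can never change which state ranks highest:

```latex
% If every reachable future state s carries the same accrued penalty c
% (here c = -2 \times 10^{10}), then for any two states s_1, s_2:
U(s_1) + c \;\ge\; U(s_2) + c
  \quad\Longleftrightarrow\quad
U(s_1) \;\ge\; U(s_2),
% so \arg\max_s [\, U(s) + c \,] = \arg\max_s U(s): the relative
% ordering of future states is untouched.
```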
Though now that you point it out, it is a problem that, under this model, creating a person who you don’t expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that’s ethics for you. :)
If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so before that person is created, you don’t care about their death.
That’s an interesting idea, but it wasn’t what I had in mind. As you point out, there are some pretty bad problems with that model.
It only breaks that specific choice of memory UFU. The general approach admits lots of consistent functions.
That’s true.
I wonder whether professional philosophers have made any progress with this kind of an approach? At least in retrospect it feels rather obvious, but I don’t recall hearing anyone mention something like this before.
It’s not unusual to count “thwarted aims” as a positive bad of death (as I’ve argued myself in my paper Value Receptacles), which at least counts against replacing people with only slightly happier people (though it still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person’s thwarted ends).
Philosophers are screwed nowadays. If they apply the scientific method and reductionism to social-science topics, they cut away too much. If they stay with vague notions which do not cut away the details, they are accused of being vague. The vagueness is there for a reason: it is a kind of abstraction of the essential complexity of the domain being abstracted.
Though now that you point it out, it is a problem that, under this model, creating a person who you don’t expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that’s ethics for you. :)
Oddly enough, right before I noticed this thread I posted a question about this on the Stupid Questions Thread. My question, however, was whether this problem applies to all forms of negative preference utilitarianism. I don’t know what the answer is. I wonder if SisterY or one of the other antinatalists who frequent LW does.
I think we can add a positive term as well: we gain some utility for happiness that once existed but doesn’t any more. E.g. we assign more utility to the state “there used to be a happy hermit, then she died” than the state “there used to be a sad hermit, then she died”. For certain values, this would be enough to be better than the state “there has never been a hermit”, which doesn’t get the “dead hermit” loss, but also doesn’t get the “past happiness” bonus.
Hmm. You need to avoid the problem where you might want to exploit the past-happiness bonus infinitely. The bonus needs to scale at least linearly with the duration of the life lived; otherwise we want to create as many short happy lives as we can, so as to collect as many infinite streams of past-happiness bonus as possible.
Say our original plan was that for every person who’s died, we would continue accruing utility forever, at a rate equal to the average rate they caused us to accrue it over their life. Making this adjustment means multiplying that average by their lifespan. That is equivalent to everything that generates utility also starting a continuous stream of utility that runs forever, irrespective of the person who experienced it. But that, in turn, is equivalent to taking a utilitarianism that doesn’t care about death, scaling everything in it by a factor of t, and letting t go to infinity. And that is just ordinary utilitarianism, since no large scaling factor applied to everything at once changes anything.
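A sketch of that chain of equivalences, writing ū_i for person i’s average rate of causing utility and T_i for their lifespan (my notation, and assuming the everlasting post-death streams are compared at a common finite horizon t that is then taken to infinity):

```latex
% Person i's lifetime contribution: \bar{u}_i T_i.
% Post-death accrual rate after the linear adjustment: \bar{u}_i T_i.
% Bonus accrued by horizon t: roughly \bar{u}_i T_i \, t.
% Summed over everyone, each state's score is the ordinary-utilitarian
% score scaled by the same factor t:
U_{\mathrm{memory}}(s) \;\approx\; t \sum_i \bar{u}_i T_i
  \;=\; t \cdot U_{\mathrm{ordinary}}(s),
% and a common positive factor, however large, never changes the
% ranking of states -- hence the collapse to ordinary utilitarianism.
```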
By the way, if one of these ideas works, we should call it WWDILEIEU (What We Do In Life Echoes In Eternity Utilitarianism). Or if that’s too long, then Gladiator Utilitarianism.
Let me make an attempt of my own...
What if, after a person’s death, we accumulate utility at a rate equal to the average rate at which they accumulated it over their lifetime, multiplied by the square of the duration of their lifetime?
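In the same notation as above (ū for the lifetime-average rate, T for the lifespan, both my labels), the proposed rule and its immediate consequence for long lives would be:

```latex
% Proposed post-death accrual rate:
r_{\mathrm{after\ death}} \;=\; \bar{u} \, T^2.
% This rewards length directly: at a fixed rate \bar{u}, one life of
% length 2T yields \bar{u} (2T)^2 = 4 \bar{u} T^2 of bonus rate, versus
% only 2 \bar{u} T^2 for two successive lives of length T.
```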
Then we want happy lifetimes to be as long as possible, and we aren’t afraid to create new people if their lives will be good. Although...
Perhaps if someone has already suffered a great deal, their life is about to turn positive, and an extremely long life is not a possibility for them, then we’ll want to kill them, to keep their past suffering from accumulating any more scaling factor.
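One way to see where this incentive comes from: since ū is total lifetime utility divided by T, the bonus rate ūT² equals (total lifetime utility) × T. A sketch, with S the (negative) utility already accrued and F the remaining future welfare, both labels of mine:

```latex
% \bar{u} = (S + F)/T, so the post-death accrual rate is
\bar{u} \, T^2 \;=\; (S + F) \, T.
% If past suffering dominates (S + F < 0) and extending the life can't
% grow F fast enough to flip the sign, a larger T only makes the
% product more negative -- hence the incentive to end the life early.
```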
If there are people whose lifespans we can’t change, all of them mortal, some with longer lifespans than others, and we have limited resources to distribute one-hour periods of increased happiness (much shorter than any of the lifespans), we will drastically favor those whose lifespans are longer.
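The favoritism drops out of the same algebra: a fixed boost of happiness h (one hour’s worth, say) raises a person’s average rate by h/T, so its effect on the bonus scales with their lifespan:

```latex
% Granting a boost of total happiness h to a life of length T changes
% the post-death rate from \bar{u} T^2 to
(\bar{u} + h/T) \, T^2 \;=\; \bar{u} T^2 + h \, T,
% i.e. the marginal value of the very same hour of happiness grows
% linearly in the recipient's lifespan T.
```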
If you have a limited supply of “lifespan juice”, which when applied to someone increases their lifespan by a fixed time per liter, and a population already alive, each of whom has a fixed and equal quality of life, then you want to give all the juice to one person. Dividing it up is as bad as “dividing a single person up” by killing them partway through what would otherwise have been their lifespan and replacing them with a new person.
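The all-to-one-person result is just the convexity of the square (again in my notation, with a fixed budget of extra lifespan to hand out):

```latex
% With equal quality of life \bar{u} and lifespans T_i subject to a
% budget \sum_i T_i = T_{\mathrm{total}}, the bonus \bar{u} \sum_i T_i^2
% is maximized at a corner: pour all the juice into one person.
% Splitting a span T into two halves costs
T^2 - 2 \left( \tfrac{T}{2} \right)^2 \;=\; \tfrac{T^2}{2},
% the same loss as killing someone halfway through and replacing them.
```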
Yes, this is at first glance in conflict with our current understanding of the universe. However, it is probably one of the strategies with the best hope of finding a way out of that universe.