No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
That’s true, but note that if e.g. 20 billion people have died up to this point, then that penalty of −20 billion gets applied equally to every possible future state, so it won’t alter the relative ordering of those states. So the fact that we’re getting an infinite amount of disutility from people who are already dead isn’t a problem.
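To make the ordering point concrete, here’s a quick sketch (all names and numbers invented for illustration) showing that adding the same fixed penalty to every candidate future leaves their ranking untouched:

```python
# Hypothetical sketch: a fixed penalty for deaths that have already
# happened shifts every future state's utility by the same constant,
# so the relative ordering of candidate futures cannot change.

base_utilities = {"future_a": 50.0, "future_b": 30.0, "future_c": 40.0}
past_deaths_penalty = -20e9  # e.g. -1 per person already dead

penalized = {s: u + past_deaths_penalty for s, u in base_utilities.items()}

ranking_before = sorted(base_utilities, key=base_utilities.get, reverse=True)
ranking_after = sorted(penalized, key=penalized.get, reverse=True)
assert ranking_before == ranking_after  # ordering is preserved
```

Shifting all options by a constant never changes an argmax, which is all the agent’s choice depends on.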
Though now that you point it out, it is a problem that, under this model, creating a person who you don’t expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that’s ethics for you. :)
If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so before that person is created, you don’t care about their death.
That’s an interesting idea, but it wasn’t what I had in mind. As you point out, there are some pretty bad problems with that model.
I wonder whether professional philosophers have made any progress with this kind of an approach? At least in retrospect it feels rather obvious, but I don’t recall hearing anyone mention something like this before.
It’s not unusual to count “thwarted aims” as a positive bad of death (as I’ve argued for myself in my paper Value Receptacles), which at least counts against replacing people with only slightly happier people (though still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person’s thwarted ends).
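The trade-off being described can be sketched numerically (the specific harm value is an invented placeholder, not anything from the paper): replacement is net-positive only when the successor’s extra happiness exceeds the fixed harm of the first person’s thwarted ends.

```python
# Hypothetical accounting for the "thwarted aims" view of death:
# replacing a person incurs a fixed harm, so a slight happiness gain
# cannot justify it, while a large enough gain might.

THWARTED_AIMS_HARM = 10.0  # assumed fixed badness of one person's death

def net_value_of_replacement(current_happiness, successor_happiness):
    """Gain in happiness from the swap, minus the harm of thwarted ends."""
    return (successor_happiness - current_happiness) - THWARTED_AIMS_HARM

assert net_value_of_replacement(50.0, 51.0) < 0  # slightly happier: not worth it
assert net_value_of_replacement(50.0, 80.0) > 0  # much happier: may outweigh
```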
Philosophers are screwed nowadays. If they apply the scientific method and reductionism to social-science topics, they cut away too much. If they stay with vague notions that don’t cut away the details, they are accused of being vague. The vagueness is there for a reason: it is a kind of abstraction of the essential complexity of the domain being abstracted.
It only breaks that specific choice of memory UFU. The general approach admits lots of consistent functions.
That’s true.
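One way to see that the general approach admits consistent functions: instead of letting a death accrue disutility at a constant rate forever (which diverges over an infinite future), a variant could discount that accrual geometrically, so the total penalty for any death is finite. This is a hypothetical sketch of one such function, not anything proposed above:

```python
# Hypothetical alternative within the same general approach: discount a
# death's per-period disutility geometrically, so the infinite sum
# rate * discount**t over t = 0, 1, 2, ... converges to a finite total.

def total_death_penalty(rate_per_period: float, discount: float) -> float:
    """Closed form of the geometric series sum: rate / (1 - discount)."""
    assert 0 < discount < 1
    return rate_per_period / (1 - discount)

# Creating a person who eventually dies now costs a finite amount,
# which the utility of their life can in principle outweigh.
penalty = total_death_penalty(rate_per_period=1.0, discount=0.5)
assert penalty == 2.0  # finite, not infinite
```

With a bounded penalty, creating a mortal person no longer carries infinite disutility, though whether discounting is ethically defensible is a separate question.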
Oddly enough, right before I noticed this thread I posted a question about this on the Stupid Questions Thread.
My question, however, was whether this problem applies to all forms of negative preference utilitarianism. I don’t know what the answer is. I wonder if SisterY or one of the other antinatalists who frequent LW does.