What if it’s a preference that doesn’t have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?
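To make the arithmetic behind that worry concrete, here is a minimal sketch (my own framing, not anything from the thread or from Parfit): if satisfaction is measured as the fraction of the target achieved, then a target that grows without bound drives that fraction toward zero.

```python
# A toy percent-satisfaction measure (my own framing): the fraction of the
# target achieved. With an unbounded target, any finite stockpile registers
# as essentially zero.

def fraction_satisfied(achieved: int, target: int) -> float:
    return achieved / target

paperclips_made = 10**9  # any finite stockpile
for target in (10**12, 10**18, 10**24):  # the "maximum" keeps receding
    print(f"{fraction_satisfied(paperclips_made, target):.0e}")
# prints 1e-03, 1e-09, 1e-15: the satisfaction ratio tends to 0
```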
That opens up the question of infinities in ethics, which is a whole other can of worms. There’s still considerable debate about how to deal with it and it creates lots of problems for both preference utilitarianism and hedonic utilitarianism.
For instance, let’s imagine an immortal who will live an infinite number of days. We have a choice of letting him have one happy experience per day or twenty happy experiences per day (and he would prefer to have these happy experiences, so both hedonic and preference utilitarians can address this question).
Intuitively, we believe it is much better for him to have twenty happy experiences per day than one. But since he lives an infinite number of days, the total number of happy experiences he has is the same: Infinity.
I’m not quite sure how to factor in infinite preferences or infinite happiness. We may have to treat them as finite in order to avoid such problems. But it seems like there should be some intuitive way to do so, in the same way we know that twenty happy experiences per day are better for the immortal than one.
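One standard patch, sketched below as a toy model (the criterion is assumed here, not argued for in this discussion), is the overtaking criterion: instead of comparing the divergent totals, compare the partial sums at every finite horizon.

```python
# A sketch of the "overtaking" criterion (an assumed fix, not one settled in
# this discussion): instead of comparing the divergent totals, compare the
# partial sums at every finite horizon.

def happy_experiences(per_day: int, days: int) -> int:
    """Total happy experiences accumulated in the first `days` days."""
    return per_day * days

for horizon in (10, 1000, 10**6):
    one = happy_experiences(1, horizon)
    twenty = happy_experiences(20, horizon)
    assert twenty > one  # twenty/day is strictly ahead at every finite time
# Both streams total infinity, but twenty per day overtakes one per day at
# every finite horizon, matching the intuition without pretending the
# totals are finite.
```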
Only if it makes me happy. I’m not a preference utilitarian.
It won’t, according to Parfit’s stipulations. Of course, once we step out of weird hypotheticals where this guy is the only person on Earth possessing the drug, being addicted would probably make you unhappy, because you would end up devoting time to pursuing the drug instead of to happiness.
I personally place only moderate value on happiness. There are many preferences I have that I want satisfied even if satisfying them makes me unhappy. For instance, I usually prefer knowing a somewhat depressing truth to believing a comforting falsehood. And there are times when I deliberately watch a bad, unenjoyable movie because it is part of a series I want to complete, even if I have access to another stand-alone movie that I would be much happier watching (yes, I am one of the reasons crappy sequels exist, but I try to mitigate the problem by waiting until I can rent them).
With hedonic utilitarianism you can run into problems with infinite utility, or with unbounded utility whenever the distribution of outcomes has infinite expected utility. But this is just a case of someone else having an unbounded utility function, and it seems pretty pathetic to get a paradox out of that.
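For concreteness, here is a minimal sketch of such a distribution, St. Petersburg style (my example, not the original commenter’s): every outcome is finite, yet the expected utility diverges.

```python
# A St. Petersburg-style gamble (my example): outcome 2**n occurs with
# probability 2**-n, so each term adds (2**-n) * (2**n) = 1 to the
# expectation, which diverges even though every single outcome is finite.

def partial_expected_utility(terms: int) -> float:
    return sum((0.5**n) * (2**n) for n in range(1, terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_utility(n))  # ~10.0, ~100.0, ~1000.0: unbounded
```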
You’re right; thinking on it more, it seems like it’s not that hard to avoid a paradox with the following principles.
Creating creatures whose utility functions are unbounded, but whose creation would violate the Global Moral Rules of population ethics, is always bad, no matter how satisfied their unbounded utility functions are. It is always bad to create paperclip maximizers, sociopaths, and other such creatures. There is no amount of preference satisfaction that could ever make their creation good.
This is true of creating individual desires that violate Global Preferences as well. Imagine if the addict in Parfit’s drug example were immortal. I would still consider making him an addict to have made his life worse, not better, even though his preference for drugs is fulfilled infinitely many times.
However, that does not mean that creating an unbounded utility function is infinitely bad in the sense that we should devote infinite resources towards preventing it from occurring. I’m not yet sure how to measure how bad its creation would be, but “how fulfilled it is” would not be the only consideration. This was what messed me up in my previous post.
The point is that creating new people or new preferences that violate Global Preferences always makes the world worse, not better, and should always be avoided. An immortal drug addict is less desirable than an immortal non-addict, and a society of humans is always better than an expanding wave of paperclips.
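One way to cash out that principle is a lexical ordering; here is a toy sketch (the names and numbers are my own invention) in which a single Global Preference violation outweighs any amount of preference satisfaction.

```python
# Toy lexical ordering for the rule above (all names are my own invention):
# a world containing Global-Preference-violating preferences ranks below any
# world without them, no matter how much satisfaction those preferences get.

from dataclasses import dataclass

@dataclass
class World:
    gp_violations: int    # Global-Preference-violating preferences created
    satisfaction: float   # total preference satisfaction, possibly infinite

def better(a: World, b: World) -> bool:
    """Lexicographic: fewer violations always wins; satisfaction breaks ties."""
    if a.gp_violations != b.gp_violations:
        return a.gp_violations < b.gp_violations
    return a.satisfaction > b.satisfaction

humans = World(gp_violations=0, satisfaction=1e6)
paperclip_wave = World(gp_violations=1, satisfaction=float("inf"))
assert better(humans, paperclip_wave)  # holds despite the satisfaction gap
```

Whether a single lexical threshold like this is the right formalization is exactly the part I am unsure how to measure, but it does capture the ordering above.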