That opens up the question of infinities in ethics, which is a whole other can of worms. There’s still considerable debate about how to deal with it and it creates lots of problems for both preference utilitarianism and hedonic utilitarianism.
With hedonic utilitarianism, you can run into problems with infinite utility, or with unbounded utility when the distribution of outcomes has infinite expected utility. For preference utilitarianism, this is just a case of someone else having an unbounded utility function, and it seems pretty pathetic to get a paradox because of that.
You’re right, thinking more on it, it seems like it’s not that hard to avoid a paradox with the following principles.
Creating creatures whose utility functions are unbounded, but whose creation would violate the Global Moral Rules of population ethics, is always bad, no matter how satisfied their unbounded utility function is. It is always bad to create paper-clip maximizers, sociopaths, and other such creatures. There is no amount of preference satisfaction that could ever make their creation good.
This is true of creating individual desires that violate Global Preferences as well. Imagine if the addict in Parfit’s drug example were immortal. I would still consider making him an addict to have made his life worse, not better, even though his preference for drugs is fulfilled infinitely many times.
However, that does not mean that creating an unbounded utility function is infinitely bad in the sense that we should devote infinite resources towards preventing it from occurring. I’m not yet sure how to measure how bad its creation would be, but “how fulfilled it is” would not be the only consideration. This was what messed me up in my previous post.
The point is that creating new people whose preferences violate Global Preferences always makes the world worse, not better, and should always be avoided. An immortal drug addict is less desirable than an immortal non-addict, and a society of humans is always better than an expanding wave of paperclips.