Furthermore, even if a preference is bad to create in the first place because it violates a Global Preference, that does not mean satisfying that newly created preference is bad.
Doesn’t that mean that if you satisfy it enough it’s a net good?
If you give someone an addictive drug, you create in them a Global Preference-violating preference, causing x units of disutility. Once they’re addicted, each dose of the drug creates y units of utility. So if you give them more than x/y doses, the net effect is good.
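As a quick sketch of that break-even arithmetic (the function name and the numbers here are made up purely for illustration):

```python
def net_utility(x, y, doses):
    """Net effect of creating the addiction (a one-time cost of x units
    of disutility) and then satisfying it with `doses` doses, each of
    which creates y units of utility."""
    return doses * y - x

# Hypothetical numbers: the addiction costs 10 units, each dose is worth 2,
# so the break-even point is x/y = 5 doses.
x, y = 10.0, 2.0
print(net_utility(x, y, 4))   # below break-even: negative (-2.0)
print(net_utility(x, y, 6))   # above break-even: positive (2.0)
```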
I have a strong Global Preference to never have this preference for the torture to stop come to exist in the first place.
What’s so bad about being against torture? I can see why you’d dislike the events leading up to this preference, but the preference itself seems like an odd thing to dislike.
Doesn’t that mean that if you satisfy it enough it’s a net good?
No, in Parfit’s initial example with the highly addictive drug your preference is 100% satisfied. You have a lifetime supply of the drug. But it still hasn’t made your life any better.
This is like Peter Singer’s “debit” model of preferences, in which all preferences are “debts” incurred in a “moral ledger.” Singer rejected this view because, applied to all preferences, it leads to antinatalism. Parfit, however, has essentially “patched” the idea by introducing Global Preferences: in his theory we use the “debit” model when a preference is not in line with a Global Preference, but not when it is.
What’s so bad about being against torture? I can see why you’d dislike the events leading up to this preference, but the preference itself seems like an odd thing to dislike.
It’s not that I dislike the preference, it’s that I would prefer to never have it in the first place (since I have to be tortured in order to develop it). I have a Global Preference that the sorts of events that would bring this preference into being never occur, but if they occur in spite of this I would want this preference to be satisfied.
If you dislike that example, however, would you still agree that if someone forcibly addicted you to Parfit’s hypothetical drug, it would be better if they gave you a lifetime supply of the drug than if they did not? (Assuming, of course, that taking the drug has no bad side effects, and getting rid of the addiction is not possible).
No, in Parfit’s initial example with the highly addictive drug your preference is 100% satisfied.
What if it’s a preference that doesn’t have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?
If you dislike that example, however, would you still agree that if someone forcibly addicted you to Parfit’s hypothetical drug, it would be better if they gave you a lifetime supply of the drug than if they did not?
Only if it makes me happy. I’m not a preference utilitarian.
My being addicted to a drug and getting it is no higher on my current preference ranking than being addicted to a drug and not getting it.
What if it’s a preference that doesn’t have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?
That opens up the question of infinities in ethics, which is a whole other can of worms. There’s still considerable debate about how to deal with it and it creates lots of problems for both preference utilitarianism and hedonic utilitarianism.
For instance, let’s imagine an immortal who will live an infinite number of days. We have a choice of letting him have one happy experience per day or twenty happy experiences per day (and he would prefer to have these happy experiences, so both hedonic and preference utilitarians can address the question).
Intuitively, we believe it is much better for him to have twenty happy experiences per day than one. But since he lives an infinite number of days, the total number of happy experiences he has is the same either way: infinity.
I’m not quite sure how to factor in infinite preferences or infinite happiness. We may have to treat such quantities as finite in order to avoid these problems. But it seems like there should be some intuitive way to handle them, in the same way we know that twenty happy experiences per day is better for the immortal than one.
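One candidate way to recover that intuition (my own assumption here, not something from Parfit) is an “overtaking” criterion: compare the running totals at every finite time, and prefer the life that is ahead from some point onward. Under that rule the twenty-per-day immortal wins at every finite day, even though both lifetime totals diverge:

```python
def partial_total(per_day, days):
    """Happy experiences accumulated after a finite number of days."""
    return per_day * days

# Both lifetime totals grow without bound, but the 20-per-day life is
# strictly ahead at every finite day -- the sense in which it "overtakes".
assert all(partial_total(20, d) > partial_total(1, d) for d in range(1, 10_000))
```

This only defers the problem (some pairs of infinite lives are incomparable under overtaking), but it does deliver the right verdict in this particular case.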
Only if it makes me happy. I’m not a preference utilitarian.
It won’t, according to Parfit’s stipulations. Of course, if we get out of weird hypotheticals where this guy is the only person on Earth possessing the drug, it would probably make you unhappy to be addicted because you would end up devoting time towards the pursuit of the drug instead of happiness.
I personally place only moderate value on happiness. There are many preferences I want satisfied even if satisfying them makes me unhappy. For instance, I usually prefer knowing a somewhat depressing truth to believing a comforting falsehood. And there are times when I deliberately watch a bad, unenjoyable movie because it is part of a series I want to complete, even when I have access to another stand-alone movie I would be much happier watching (yes, I am one of the reasons crappy sequels exist, but I try to mitigate the problem by waiting until I can rent them).
That opens up the question of infinities in ethics, which is a whole other can of worms. There’s still considerable debate about how to deal with it and it creates lots of problems for both preference utilitarianism and hedonic utilitarianism.
With hedonic utilitarianism you can run into problems with infinite utility, or with unbounded utility when the distribution of outcomes has infinite expected utility. But this is just a case of someone else having an unbounded utility function. It seems pretty pathetic to get a paradox because of that.
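The classic example of a distribution with infinite expected utility is the St. Petersburg gamble: with probability 2⁻ᵏ the payoff is 2ᵏ units, so every term of the expectation contributes exactly 1 and the sum diverges. A minimal sketch:

```python
def expected_utility(n_terms):
    """Partial expected utility of a St. Petersburg-style gamble:
    with probability 2**-k the payoff is 2**k utility units."""
    return sum((2 ** -k) * (2 ** k) for k in range(1, n_terms + 1))

# Each term contributes exactly 1, so the partial sums grow without bound.
print(expected_utility(10))    # 10.0
print(expected_utility(1000))  # 1000.0
```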
You’re right, thinking more on it, it seems like it’s not that hard to avoid a paradox with the following principles.
Creating creatures whose utility functions are unbounded, but whose creation would violate the Global Moral Rules of population ethics, is always bad, no matter how satisfied their unbounded utility functions are. It is always bad to create paperclip maximizers, sociopaths, and other such creatures. There is no amount of preference satisfaction that could ever make their creation good.
This is true of creating individual desires that violate Global Preferences as well. Imagine if the addict in Parfit’s drug example were immortal. I would still consider making him an addict to have made his life worse, not better, even though his preference for the drug is fulfilled infinitely many times.
However, that does not mean that creating an unbounded utility function is infinitely bad in the sense that we should devote infinite resources towards preventing it from occurring. I’m not yet sure how to measure how bad its creation would be, but “how fulfilled it is” would not be the only consideration. This was what messed me up in my previous post.
The point is that the creation of new people, or of new preferences, that violate Global Preferences always makes the world worse, not better, and should always be avoided. An immortal drug addict is less desirable than an immortal non-addict, and a society of humans is always better than an expanding wave of paperclips.