Making a person and unmaking a person seem like utilitarian inverses
Doesn’t seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance) but not nearly to the degree that killing someone does. If you’re only focusing on the utility for the person made or unmade, then maybe (although see blacktrance’s comment on that), but as a utilitarian you have no license for doing that.
A hermit, long forgotten by the rest of the world, lives a middling life all alone on a desert island. Eve kills the hermit secretly and painlessly, sells his organs, and uses the money to change the mind of a couple who had decided against having additional children. The couple’s child leads a life far longer and happier than the forgotten hermit’s ever would have been.
Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?
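For concreteness, here is the bookkeeping behind that claim as a toy Python sketch. Every number (the hermit's, the child's, and the bystanders' lifetime happiness) is invented for illustration; nothing here is from the scenario itself:

```python
# Toy utility bookkeeping for the hermit scenario. All numbers invented.

bystanders = {f"person_{i}": 60 for i in range(3)}  # rest of the world, unchanged

before = {**bystanders, "hermit": 40}   # middling hermit life
after = {**bystanders, "child": 90}     # far longer, happier child life

def total_and_average(pop):
    total = sum(pop.values())
    return total, total / len(pop)

print(total_and_average(before))  # (220, 55.0)
print(total_and_average(after))   # (270, 67.5)
```

With any assignment where the child's lifetime utility exceeds the hermit's, both total and average happiness rise, which is exactly what makes the example bite.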
Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.
If you’re asking me, I’d say no, but I’m not a utilitarian, partly because utilitarianism answers “yes” to questions similar to this one.
Only if you use a stupid utility function.
Utilitarianism doesn’t use any particular utility function. It merely advocates acting based on an aggregation of pre-existing utility functions. So whether or not someone’s utility function is stupid is not something utilitarianism can control. If people in general have stupid utility functions, then preference utilitarianism will advocate stupid things.
In any case, the problem I was hinting at in the grandparent is known in the literature (following Rawls) as “utilitarianism doesn’t respect the separateness of persons.” For utilitarianism, what fundamentally matters is utility (however that is measured), and people are essentially just vessels for utility. If it’s possible to substantially increase the amount of utility in many of those vessels while substantially decreasing it in just one vessel, then utilitarianism will recommend doing that. After all, the individual vessels themselves don’t matter, just the amount of utility sloshing about (or, if you’re an average utilitarian, the number of vessels matters, but the vessels don’t matter beyond that). An extreme consequence of this kind of thinking is the whole “utility monster” problem, but it arises in slightly less fanciful contexts as well (kill the hermit, push the fat man in front of the trolley).
I fundamentally reject this mode of thinking. Morality should be concerned with how individuals, considered as individuals, are treated. This doesn’t mean that trade-offs between peoples’ rights/well-being/whatever are always ruled out, but they shouldn’t be as easy as they are under utilitarianism. There are concerns about things like rights, fairness and equity that matter morally, and that utilitarianism can’t capture, at least not without relying on convoluted (and often implausibly convenient) justifications about how behaving in ways we intuitively endorse will somehow end up maximizing utility in the long run.
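The total-versus-average distinction mentioned above can be made concrete with a minimal sketch (numbers invented): adding a happy-but-below-average life raises total utility while lowering the average, so the two aggregations disagree about the same act.

```python
# Mere-addition toy case: a newcomer with positive utility,
# but below the existing average. Numbers are invented.

world = [80, 80, 80]
newcomer = 50

total_before, avg_before = sum(world), sum(world) / len(world)
grown = world + [newcomer]
total_after, avg_after = sum(grown), sum(grown) / len(grown)

print(total_before, avg_before)  # 240 80.0
print(total_after, avg_after)    # 290 72.5
# Total utilitarianism approves the addition; average utilitarianism rejects it.
```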
Yes, I should have rephrased that as ‘Only because hedonic utilitarianism is stupid’—how’s that?
If there are a large number of “yes” replies, the hermit lifestyle becomes very unappealing.
Sure, Eve did a good thing.
Does that mean we should spend more of our altruistic energies on encouraging happy productive people to have more happy productive children?
Maybe. I think the realistic problem with this strategy is that if you take an existing human and help him in some obvious way, then it’s easy to see and measure the good you’re doing. It sounds pretty hard to figure out how effectively or reliably you can encourage people to have happy productive children. In your thought experiment, you kill the hermit with 100% certainty, but creating a longer, happier life that didn’t detract from others’ was a complicated conjunction of things that worked out well.
I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.
I didn’t mean for the hermit to be sad, just less happy than the child.
Ah, must have misread your representation, but English is not my first language, so sorry about that.
I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to 3, not 2, happy children.
It’s specified that he was killed painlessly.
It is true, I wasn’t specific enough, but I wanted to emphasize the opinion part, and the suffering part was meant to emphasize his life condition.
He was, presumably, killed without his consent, which is why the whole affair seems so morally icky from a non-utilitarian perspective.
If your utility function does not penalize doing bad things as long as the net result is positive, you are likely to end up in a world full of utility monsters.
We live in a world full of utility monsters. We call them humans.
Am I to assume that all the sad old hermits of this world are being systematically chopped up for spare parts granted to deserving, happy young people, while well-meaning utilitarians hide this sad truth from us, so that I don’t become upset about the atrocities currently being committed in my name?
We are not even close to a world full of utility monsters, and personally I know very few people whom I would consider actual utilitarians.
No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn’t exist otherwise, but the same cannot be said for battery animals.
But driving this reasoning to its logical conclusion, you get a lot of strange results.
The premise is that humans are different from animals in that they know that they inflict suffering and are thus able to change it, and according to some ethical systems obliged to.
Actually, this would be a kind of disadvantage of knowledge. There was a not-so-recent game-theoretic post about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don’t know will always choose to defect and thus reap a higher benefit than you, unless there are too many of them.
So either:
You need to construct a world without animals, since animals make each other suffer, and humans both know this and can modify the world to get rid of it.
Or humans could alter themselves not to know that they inflict harm (or to consider harm unimportant, or to restrict empathy to humans...), and thus avoid the problem.
The key point, I think, is that a concept resting on some aspect of being human is singled out and taken to its ‘logical conclusion’ out of context, without regard for the fact that this concept is itself an evolved feature.
As there is no intrinsic moral fabric of the universe, we effectively force our evolved values on our environment and make it conform to them.
Insofar as that goes, excessive empathy (which is an aggregate driver behind ethics) is not much different from excessive greed, which also affects our environment; the only difference is that we have already learned that the latter might be a bad idea.
The conclusion is that you also have to balance extreme empathy with reality.
ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/
Robert Nozick:
Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster’s maw, in order to increase total utility.
My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don’t identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human, and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.
That’s the way it looks. And this is probably part of being human.
I’d like to rephrase your answer as follows to drive home that ethics is mostly driven by empathy:
Humans mostly act as though they are utility monsters with respect to entities they have empathy with; they act as though the utility of entities they have no empathy toward is vastly smaller than the utility of those they relate to, and so caring for the latter is always the best option.
In this case, I concur that your argument may be true if you include animals in your utility calculations.
While I do have reservations about causing suffering in humans, I don’t explicitly include animals in my utility calculations. And while I don’t support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing, or factory farming; so with regard to pigs, cows, and chickens, I am a utility monster.
This fails to fit the spirit of the problem, because it takes the preferences of currently living beings (the childless couple) into account.
A scenario that would capture the spirit of the problem is:
“Eve kills a moderately happy hermit who moderately prefers being alive, sells his organs, and uses the money to create a child who is predisposed to be extremely happy as a hermit. She leaves the child on the island to live life as an extremely happy hermit who extremely prefers being alive.” (The “hermit” portion of the problem is unnecessary now—you can replace hermit with “family” or “society” if you want.)
Compare with...
“Eve must choose between creating a moderately happy hermit who moderately prefers being alive OR an extremely happy hermit who extremely prefers being alive.” (Again, hermit / family / society are interchangeable)
and
“Eve must choose between killing a moderately happy hermit who moderately prefers being alive OR killing an extremely happy hermit who extremely prefers being alive.”
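Under a naive total-view accounting, all three scenarios collapse into the same subtraction, which is arguably what makes the first one feel like a trick. A minimal sketch, with both lifetime utilities invented for illustration:

```python
# Invented lifetime utilities for the two hermits.
MODERATE, EXTREME = 50, 90

# Scenario 1: kill the moderate hermit and create the extreme one.
net_swap = EXTREME - MODERATE        # what the swap gains

# Scenario 2: create one or the other; gain from picking "extreme".
net_create = EXTREME - MODERATE

# Scenario 3: kill one or the other; gain from killing "moderate".
net_kill = (-MODERATE) - (-EXTREME)

print(net_swap, net_create, net_kill)  # 40 40 40
```

The identical ledger entries are the "vessels of utility" point in miniature: the accounting sees no difference between replacing a person and merely choosing between two futures.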
This looks very similar to the trolley problem, specifically the your-organs-are-needed version.
The grounds to avoid discouraging people from walking into hospitals are way stronger than the grounds to avoid discouraging people from being hermits.
So you think that the only problem with the Transplant scenario is that it discourages people from using hospitals..?
Not the only one, but the deal-breaking one.
See this
Well, that’s the standard rationalization utilitarians use to get out of that dilemma.
I thought the same thing and went to dig up the original. Here it is:
One common illustration is called Transplant. Imagine that each of five patients in a hospital will die without an organ transplant. The patient in Room 1 needs a heart, the patient in Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on. The person in Room 6 is in the hospital for routine tests. Luckily (for them, not for him!), his tissue is compatible with the other five patients, and a specialist is available to transplant his organs into the other five. This operation would save their lives, while killing the “donor”. There is no other way to save any of the other five patients (Foot 1966, Thomson 1976; compare related cases in Carritt 1947 and McCloskey 1965).
This is from the consequentialism page on the SEP, and it goes on to discuss modifications of utilitarianism that avoid biting the bullet (scalpel?) here.
This situation seems different to me for two reasons:
Off-topic reason: Killing the “donor” is bad for reasons similar to why two-boxing in Newcomb’s problem is bad. If doctors killed random patients, then patients wouldn’t go to hospitals and medicine would collapse. IMO the supposedly utilitarian answer to the transplant problem is not really utilitarian.
On-topic reason: The surgeons transplant organs to save lives, not to make babies. Saving lives and making lives seem very different to me, but I’m not sure why (or whether) they differ from a utilitarian perspective.
Analogously, “killing a less happy person and conceiving a happier one” may be wrong in the long term, by changing society into one where people feel unsafe.
If doctors killed random patients then patients wouldn’t go to hospitals and medicine would collapse.
You’re fixating on the unimportant parts.
Let me change the scenario slightly to fix your collapse-of-medicine problem: Once in a while the government consults its random number generator and selects one or more, as needed, people to be cut up for organs. The government is careful to keep the benefits (in lives or QALYs or whatever) higher than the costs. Any problems here?
That people are stupefyingly irrational about risks, especially where medicine is concerned.
As an example: my paternal grandmother died of a treatable cancer less than a year before I was born, out of a fear of doctors which she had picked up from post-war propaganda about the T4 euthanasia program. This was a woman who was otherwise as healthy as they come, living in America decades after the fact, refusing to go in for treatment because she was worried some oncologist was going to declare a full-blooded German immigrant genetically impure and kill her to improve the Aryan race.
Now granted that’s a rather extreme case, and she wasn’t exactly stable on a good day from what I hear, but the point is that whatever bits of crazy we have get amplified completely out of proportion when medicine comes into it. People already get scared out of seeking treatment over rumors of mythical death panels or autism-causing vaccine programs, so you can only imagine how nutty they would get over even a small risk of actual government-sanctioned murder in hospitals.
(Not to mention that there are quite a lot of people with a perfectly legitimate reason to believe those RNGs might “just happen” to come up in their cases if they went in for treatment; it’s not like American bureaucrats have never abused their power to target political enemies before.)
The traditional objection to this sort of thing is that it creates perverse incentives: the government, or whichever body is managing our bystander/trolley tracks interface, benefits in the short term (smoother operations, can claim more people saved) if it interprets its numbers to maximize the number of warm bodies it has to work with, and the people in the parts pool benefit from the opposite. At minimum we’d expect that to introduce a certain amount of friction. In the worst case we could imagine it leading to a self-reinforcing establishment that firmly believes it’s being duly careful even when independent data says otherwise: consider how the American War on Drugs has played out.
That’s a very weak objection, given that the real world is full of perverse incentives and still manages to function, more or less, sorta-kinda...
Only if the Q in QALY takes into account the fact that people will be constantly worried they might be picked by the RNG.
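A back-of-envelope way to make that point, with every number invented: even a tiny per-person anxiety cost, multiplied across a large population, can swamp the QALYs the organ lottery gains.

```python
# All figures invented for illustration only.
population = 300_000_000          # people who now fear the lottery
donors_per_year = 1_000           # people "selected" annually
lives_saved_per_donor = 5
qalys_per_life_saved = 30

# Net lives per selection: 5 saved minus the 1 donor killed.
gross_gain = donors_per_year * (lives_saved_per_donor - 1) * qalys_per_life_saved

fear_cost_per_person = 0.01       # QALYs lost per person per year to worry
fear_cost = population * fear_cost_per_person

print(gross_gain)              # 120000
print(int(fear_cost))          # 3000000
print(gross_gain > fear_cost)  # False
```

Whether the lottery nets out positive thus hinges entirely on how the Q prices in population-wide dread, which is rarely stated in the thought experiment.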
And of course, I wouldn’t trust a government made of mere humans with such a determination, because power corrupts humans. A friendly artificial intelligence on the other hand...
Edited away an explanation so as not to take the last word
Any problems here?
Short answer, no.
I’d like to keep this thread focused to making a life vs. saving a life, not arguments about utilitarianism in general. I realize there is much more to be said on this subject, but I propose we end discussion here.
Yes, but I wouldn’t do that myself because of ethical injunctions.