Making a person and unmaking a person seem like utilitarian inverses, yet I don’t think contraception is tantamount to murder. Why isn’t making a person as good as killing a person is bad?
ETA: Potentially less contentious rephrase: why isn’t making a life as important as saving a life?
Whether this is so or not depends on whether you are assuming hedonistic or preference utilitarianism. For a hedonistic utilitarian, contraception is, in a sense, tantamount to murder, except that as a matter of fact murder causes much more suffering than contraception does: to the person who dies, to his or her loved ones, and to society at large (by increasing fear). By contrast, preference utilitarians can also appeal to the preferences of the individual who is killed: whereas murder causes the frustration of an existing preference, contraception doesn't, since nonexistent entities can't have preferences.
The question also turns on issues about population ethics. The previous paragraph assumes the “total view”: that people who do not exist but could or will exist matter morally, and just as much. But some people reject this view. For these people, even hedonistic utilitarians can condemn murder more harshly than contraception, wholly apart from the indirect effects of murder on individuals and society. The pleasure not experienced by the person who fails to be conceived doesn’t count, or counts less than the pleasure that the victim of murder is deprived of, since the latter exists but the former doesn’t.
For further discussion, see Peter Singer's Practical Ethics, chap. 4 ("What's wrong with killing?").
Pablo makes great points about the suffering of loved ones, etc. But, modulo those points, I’d say making a life is as important as saving a life. (I’m only going to address the potentially contentious “rephrase” here, and not the original problem; I find the making life / saving life case more interesting.) And I’m not a utilitarian.
When you have a child, even if you follow the best available practices, there is a non-trivial chance that the child will have a worse-than-nothing existence. They could be born with some terminal, painful, and incurable illness. What justifies taking that risk? Suggested answer: the high probability that a child will be born to a good life. Note that in many cases, the child who would have an awful life is a different child (coming from a different egg and/or sperm—a genetically defective one) than the one who would have a good life.
Only if the hedonistic utilitarian is also a total utilitarian, rather than an average utilitarian, right?
Edit: Read your second paragraph, now I feel silly.
Making a person and unmaking a person seem like utilitarian inverses

Doesn't seem that way at all to me. A person who already exists has friends, family, social commitments, etc. Killing that person would usually affect all of these things negatively, often to a pretty huge extent. Using contraception maybe creates some amount of disutility in certain cases (for staunch Catholics, for instance), but not nearly to the degree that killing someone does. If you're only focusing on the utility for the person made or unmade, then maybe (although see blacktrance's comment on that), but as a utilitarian you have no license for doing that.
A hermit, long forgotten by the rest of the world, lives a middling life all alone on a desert island. Eve kills the hermit secretly and painlessly, sells his organs, and uses the money to change the mind of a couple who had decided against having additional children. The couple's child leads a life far longer and happier than the forgotten hermit's ever would have been.
Eve has increased QALYs, average happiness, and total happiness. Has Eve done a good thing? If not, why not?
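For concreteness, here is a minimal sketch of the aggregation behind that claim, with entirely made-up happiness numbers (the values and the tiny population are hypothetical, chosen only to illustrate the bookkeeping):

```python
# Toy model of the hermit scenario. "Happiness" values are hypothetical
# stand-ins, not a real welfare measure.
hermit_happiness = 30          # the middling life Eve cuts short
child_happiness = 80           # the far longer, happier life she brings about
rest_of_world = [50, 50, 50]   # everyone else, unaffected by assumption

def total_and_average(population):
    return sum(population), sum(population) / len(population)

before = rest_of_world + [hermit_happiness]
after = rest_of_world + [child_happiness]

print(total_and_average(before))  # (180, 45.0)
print(total_and_average(after))   # (230, 57.5)
# Both the total and the average go up, which is what makes the case
# awkward for aggregative utilitarianism.
```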
Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.
If you’re asking me, I’d say no, but I’m not a utilitarian, partly because utilitarianism answers “yes” to questions similar to this one.
Only if you use a stupid utility function.
Utilitarianism doesn’t use any particular utility function. It merely advocates acting based on an aggregation of pre-existing utility functions. So whether or not someone’s utility function is stupid is not something utilitarianism can control. If people in general have stupid utility functions, then preference utilitarianism will advocate stupid things.
In any case, the problem I was hinting at in the grandparent is known in the literature (following Rawls) as “utilitarianism doesn’t respect the separateness of persons.” For utilitarianism, what fundamentally matters is utility (however that is measured), and people are essentially just vessels for utility. If it’s possible to substantially increase the amount of utility in many of those vessels while substantially decreasing it in just one vessel, then utilitarianism will recommend doing that. After all, the individual vessels themselves don’t matter, just the amount of utility sloshing about (or, if you’re an average utilitarian, the number of vessels matters, but the vessels don’t matter beyond that). An extreme consequence of this kind of thinking is the whole “utility monster” problem, but it arises in slightly less fanciful contexts as well (kill the hermit, push the fat man in front of the trolley).
I fundamentally reject this mode of thinking. Morality should be concerned with how individuals, considered as individuals, are treated. This doesn’t mean that trade-offs between peoples’ rights/well-being/whatever are always ruled out, but they shouldn’t be as easy as they are under utilitarianism. There are concerns about things like rights, fairness and equity that matter morally, and that utilitarianism can’t capture, at least not without relying on convoluted (and often implausibly convenient) justifications about how behaving in ways we intuitively endorse will somehow end up maximizing utility in the long run.
Yes, I should have rephrased that as ‘Only because hedonic utilitarianism is stupid’—how’s that?
If there are a large number of "yes" replies, the hermit lifestyle becomes very unappealing.
Sure, Eve did a good thing.
Does that mean we should spend more of our altruistic energies on encouraging happy productive people to have more happy productive children?
Maybe. I think the realistic problem with this strategy is that if you take an existing human and help him in some obvious way, then it’s easy to see and measure the good you’re doing. It sounds pretty hard to figure out how effectively or reliably you can encourage people to have happy productive children. In your thought experiment, you kill the hermit with 100% certainty, but creating a longer, happier life that didn’t detract from others’ was a complicated conjunction of things that worked out well.
I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.
I didn’t mean for the hermit to be sad, just less happy than the child.
Ah, I must have misread your description; English is not my first language, so sorry about that.
I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to 3, not 2, happy children.
It’s specified that he was killed painlessly.
True, I wasn't specific enough, but I wanted to emphasize the opinion part; the suffering part was meant to emphasize his living conditions.
He was, presumably, killed without his consent, and that is why the whole affair seems so morally icky from a non-utilitarian perspective.
If your utility function does not penalize doing bad things as long as the net result comes out right, you are likely to end up in a world full of utility monsters.
We live in a world full of utility monsters. We call them humans.
Am I to assume that all the old sad hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us, so that I don't become upset about the atrocities currently being committed in my name?
We are not even close to a world full of utility monsters, and personally I know very few people whom I would consider actual utilitarians.
No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn’t exist otherwise, but the same cannot be said for battery animals.
But if you drive this reasoning to its logical conclusion, you get a lot of strange results.
The premise is that humans are different from animals in that they know that they inflict suffering and are thus able to change it, and according to some ethics are obliged to.
Actually, this would be kind of a disadvantage of knowledge. There was a not-so-recent game-theoretic post about situations where, if you know more, you have to choose probabilistically to win on average, whereas those who don't know will always choose to defect and thus reap a higher benefit than you, unless there are too many of them.
So either:

You need to construct a world without animals, since animals suffer from each other, and humans know this and can modify the world to get rid of it; or

Humans could alter themselves not to know that they inflict harm (or to consider harm unimportant, or to restrict empathy to humans...) and thereby avoid the problem.
The key point, I think, is that a concept resting on some aspect of being human is singled out and taken to its 'logical conclusion' out of context, without regard for the fact that this concept is an evolved feature itself.

As there is no intrinsic moral fabric of the universe, we effectively force our evolved values onto our environment and make it conform to them.

In this respect, excessive empathy (which is an aggregate driver behind ethics) is not much different from excessive greed, which also affects our environment; only we have already learned that the latter might be a bad idea.
The conclusion is that you also have to balance extreme empathy with reality.
ADDED: Just found this relevant link: http://lesswrong.com/lw/69w/utility_maximization_and_complex_values/
Robert Nozick:

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose . . . the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.
My point is that humans mostly act as though they are utility monsters with respect to non-humans (and possibly humans they don't identify with); they act as though the utility of a non-sapient animal is vastly smaller than the utility of a human, and so making the humans happy is always the best option. Some people put a much higher value on animal welfare than others, but there are few environmentalists willing to say that there is some number of hamsters (or whatever you assign minimal moral value to) worth killing a child to protect.
That's the way it looks. And this is probably part of being human.
I'd like to rephrase your answer as follows, to drive home that ethics is mostly driven by empathy:

Humans mostly act as though they are utility monsters with respect to entities they have empathy with; they act as though the utility of entities they have no empathy toward is vastly smaller than the utility of those they relate to, and so caring for the latter is always the best option.
In this case, I concur that your argument may be true if you include animals in your utility calculations.
While I do have reservations about causing suffering in humans, I don't explicitly include animals in my utility calculations. And while I don't support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing, or factory farming; so with regard to pigs, cows, and chickens, I am a utility monster.
This fails to fit the spirit of the problem, because it takes the preferences of currently living beings (the childless couple) into account.
A scenario that would capture the spirit of the problem is:
"Eve kills a moderately happy hermit who moderately prefers being alive, sells his organs, and uses the money to create a child who is predisposed to be extremely happy as a hermit. She leaves the child on the island to live life as an extremely happy hermit who extremely prefers being alive." (The "hermit" portion of the problem is unnecessary now; you can replace hermit with "family" or "society" if you want.)
Compare with...
“Eve must choose between creating a moderately happy hermit who moderately prefers being alive OR an extremely happy hermit who extremely prefers being alive.” (Again, hermit / family / society are interchangeable)
and
"Eve must choose between killing a moderately happy hermit who moderately prefers being alive OR killing an extremely happy hermit who extremely prefers being alive."
This looks very similar to the trolley problem, specifically the your-organs-are-needed version.
The grounds to avoid discouraging people from walking into hospitals are way stronger than the grounds to avoid discouraging people from being hermits.
So you think that the only problem with the Transplant scenario is that it discourages people from using hospitals..?
Not the only one, but the deal-breaking one.
See this
Well, that’s the standard rationalization utilitarians use to get out of that dilemma.
I thought the same thing and went to dig up the original. Here it is:

One common illustration is called Transplant. Imagine that each of five patients in a hospital will die without an organ transplant. The patient in Room 1 needs a heart, the patient in Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on. The person in Room 6 is in the hospital for routine tests. Luckily (for them, not for him!), his tissue is compatible with the other five patients, and a specialist is available to transplant his organs into the other five. This operation would save their lives, while killing the "donor". There is no other way to save any of the other five patients (Foot 1966, Thomson 1976; compare related cases in Carritt 1947 and McCloskey 1965).
This is from the consequentialism page on the SEP, and it goes on to discuss modifications of utilitarianism that avoid biting the bullet (scalpel?) here.
This situation seems different to me in two ways:
Off-topic way: Killing the "donor" is bad for reasons similar to why two-boxing in Newcomb's problem is bad. If doctors killed random patients, then patients wouldn't go to hospitals and medicine would collapse. IMO the supposedly utilitarian answer to the transplant problem is not really utilitarian.
On-topic way: The surgeons transplant organs to save lives, not to make babies. Saving lives and making lives seem very different to me, but I’m not sure why (or if) they differ from a utilitarian perspective.
Analogously, "killing a less happy person and conceiving a happier one" may be wrong in the long term, by changing society into one where people feel unsafe.
If doctors killed random patients then patients wouldn't go to hospitals and medicine would collapse.

You're fixating on the unimportant parts.
Let me change the scenario slightly to fix your collapse-of-medicine problem: Once in a while the government consults its random number generator and selects one or more, as needed, people to be cut up for organs. The government is careful to keep the benefits (in lives or QALYs or whatever) higher than the costs. Any problems here?
That people are stupefyingly irrational about risks, especially in regards to medicine.
As an example: my paternal grandmother died of a treatable cancer less than a year before I was born, out of a fear of doctors which she had picked up from post-war propaganda about the T4 euthanasia program. Now, this is a woman who was otherwise as healthy as they come, living in America decades after the fact, refusing to go in for treatment because she was worried some oncologist was going to declare a full-blooded German immigrant genetically impure and kill her to improve the Aryan race.
Now granted that’s a rather extreme case, and she wasn’t exactly stable on a good day from what I hear, but the point is that whatever bits of crazy we have get amplified completely out of proportion when medicine comes into it. People already get scared out of seeking treatment over rumors of mythical death panels or autism-causing vaccine programs, so you can only imagine how nutty they would get over even a small risk of actual government-sanctioned murder in hospitals.
(Not to mention that there are quite a lot of people with a perfectly legitimate reason to believe those RNGs might “just happen” to come up in their cases if they went in for treatment; it’s not like American bureaucrats have never abused their power to target political enemies before.)
The traditional objection to this sort of thing is that it creates perverse incentives: the government, or whichever body is managing our bystander/trolley tracks interface, benefits in the short term (smoother operations, can claim more people saved) if it interprets its numbers to maximize the number of warm bodies it has to work with, and the people in the parts pool benefit from the opposite. At minimum we’d expect that to introduce a certain amount of friction. In the worst case we could imagine it leading to a self-reinforcing establishment that firmly believes it’s being duly careful even when independent data says otherwise: consider how the American War on Drugs has played out.
That's a very weak objection, given that the real world is full of perverse incentives and still manages to function, more or less, sorta-kinda...
Only if the Q in QALY takes into account the fact that people will be constantly worried they might be picked by the RNG.
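To see how easily that term can dominate, here is a rough back-of-the-envelope sketch; every number in it is hypothetical, chosen only to show the shape of the trade-off:

```python
# Hypothetical numbers: how population-wide worry can swamp the organ
# lottery's direct QALY gains.
qalys_per_life = 40
gain_per_harvest = 5 * qalys_per_life - 1 * qalys_per_life  # five saved, one donor lost
harvests_per_year = 100
direct_gain = harvests_per_year * gain_per_harvest          # +16,000 QALYs/year

population = 300_000_000
fear_cost_per_person = 0.001   # ~9 quality-adjusted hours lost per person per year
fear_cost = population * fear_cost_per_person               # 300,000 QALYs/year

print(direct_gain - fear_cost)  # -284,000: the worry term dominates
```

With a smaller fear coefficient the sign flips back, of course; the point is only that the calculation is hostage to a parameter that is very hard to measure.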
And of course, I wouldn’t trust a government made of mere humans with such a determination, because power corrupts humans. A friendly artificial intelligence on the other hand...
Edited away an explanation so as not to take the last word
Any problems here?

Short answer, no.
I'd like to keep this thread focused on making a life vs. saving a life, not arguments about utilitarianism in general. I realize there is much more to be said on this subject, but I propose we end the discussion here.
Yes, but I wouldn’t do that myself because of ethical injunctions.
Cheap answer, but remember that it might be the true one: because utilitarianism doesn’t accurately describe morality, and the right way to live is not by utilitarianism.
Upvoted. Keep in mind the answer might be "making a person is as good as killing a person is bad."
Here’s a simple argument for why we can’t be indifferent to creating people. Suppose we have three worlds:
Jon is alive and has 10 utils
Jon was never conceived
Jon is alive and has 20 utils
Assume we prefer Jon having 20 utils to Jon having 10. Assume also that we're indifferent between Jon having 10 utils and Jon's never being conceived. Hence by transitivity we must prefer that Jon exist with 20 utils over Jon's non-existence. So we should try to create Jon, if we think he'll have over 10 utils.
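Written out as a derivation (a sketch; W1, W2, W3 name the three worlds in the order listed above, ≻ is strict preference, ∼ is indifference):

```latex
\begin{align*}
& W_3 \succ W_1 && \text{(20 utils preferred to 10)} \\
& W_1 \sim W_2 && \text{(indifferent between 10 utils and never being conceived)} \\
& \therefore\ W_3 \succ W_2 && \text{(so we should prefer creating Jon, given 20 utils)}
\end{align*}
```

The load-bearing premise is the indifference claim; reject transitivity across the existence boundary, or reject the indifference itself, and the argument stalls.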
Note that this kind of utilon calculation also equates your scenarios with ones where, magically, a whole bunch of people came into existence and ceased to exist a few minutes ago, with lots of horrible torture, followed by amnesia, in between.
Why isn't making a person as good as killing a person is bad?

Possibly because...

I don't think contraception is tantamount to murder.
You have judged. It’s possible that this is all there is to it… not killing people who do not want to die might just be a terminal value for humans, while creating people who would want to be created might not be a terminal value.
(Might. If you think that it’s an instrumental value in favor of some other terminal goal, you should look for it)
As far as I can tell, killing/not-killing a person isn't the same as not-making/making a person. I think this becomes more apparent if you consider the universe as timeless.
This is the thought experiment that comes to mind. It’s worth noting that all that follows depends heavily on how one calculates things.
Comparing the universes where we choose to make Jon to the one where we choose not to:
Universe A: Jon made; Jon lives a fulfilling life with global net utility of 2u.
Universe A’: Jon not-made; Jon doesn’t exist in this universe so the amount of utility he has is undefined.
Comparing the universes where we choose to kill an already made Jon to the one where we choose not to:
Universe B: Jon not killed; Jon lives a fulfilling life with global net utility of 2u.
Universe B’: Jon killed; Jon’s life is cut short, his life has a global net utility of u.
The marginal utility for Jon in Universe B vs. B' is easy to calculate: (2u - u) gives a total marginal utility (i.e., gain in utility) of u from choosing not to kill Jon over killing him.
However, the marginal utility for Jon in Universe A vs. A' is undefined (in the same sense that 1/0 is undefined). As Jon doesn't exist in universe A', it is impossible to assign a value to Utility_Jon_A'; as a result, the marginal (Utility_Jon_A - Utility_Jon_A') is equal to (2u - [an undefined value]). As such, the marginal utility lost or gained by choosing between universes A and A' is undefined.
It follows from this that the marginal utility between any universe and A’ is undefined. In other words our rules for deciding which universe is better for Jon break down in this case.
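The asymmetry can be made mechanical. A minimal Python sketch of the accounting above, where None stands for "no utility value at all" (non-existence) rather than zero:

```python
def marginal_utility(world_u, alt_world_u):
    """Utility gained by choosing the first world over the second; None if undefined."""
    if world_u is None or alt_world_u is None:
        return None  # any comparison against non-existence is undefined
    return world_u - alt_world_u

u = 10  # arbitrary unit
print(marginal_utility(2 * u, u))     # B vs B': 10, i.e. not killing Jon gains u
print(marginal_utility(2 * u, None))  # A vs A': None, the decision rule breaks down
```

Treating non-existence as 0 instead of None would silently turn the second comparison into a defined gain of 2u, which is exactly the move the argument is objecting to.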
I myself (probably) don’t have a preference for creating universes where I exist over ones where I don’t. However I’m sure that I don’t want this current existence of me to terminate.
So personally I choose to maximise the utility of people who already exist over creating more people.
Eliezer explains here why bringing people into existence isn't all that great, even if someone existing over not existing has a defined (and positive) marginal utility.
I created a new article about this.
Here are two related differences between a child and an adult. (1) It is very expensive to turn a child into an adult. (2) An adult is highly specific and not replaceable, while a fetus has a lot of subjective uncertainty and is fairly easily duplicated within that uncertainty. Uploading is relevant to both of these points.
Because killing a person deprives them of positive experiences that they otherwise would have had, and they prefer to have them. But a nonexistent being doesn’t have preferences.
Once you’ve killed them and they’ve become nonexistent, then they don’t have preferences either.
Presumably what should matter (assuming preference utilitarianism) when we evaluate an act are the preferences that exist at (or just before) the time of commission of the act. If that’s right, then the non-existence of those preferences after the act is performed is irrelevant.
The Spanish Inquisition isn't exculpated because its victims' preferences no longer exist. They existed at the time they were being tortured, and that's what should matter.
So it’s fine to do as much environmental damage as we like, as long as we’re confident the effects won’t be felt until after everyone currently alive is dead?
I’d presume that many people’s preferences include terms for the expected well-being of their descendants.
That’s a get out of utilitarianism free card. Many people’s preferences include terms for acting in accordance with their own nonutilitarian moral systems.
Preference utilitarianism isn’t a tool for deciding what you should prefer, it’s a tool for deciding how you should act. It’s entirely consistent to prefer options which involve you acting according to whim or some nonutilitarian system (example: going to the pub), yet for it to dictate—after taking into account the preferences of others—that you should in fact do something else (example: taking care of your sick grandmother).
There may be some confusion here, though. I normally think of preferences in this context as being evaluated over future states of the world, i.e. consequences, not over possible actions; it sounds like you’re thinking more in terms of the latter.
Yeah, I sometimes have trouble thinking like a utilitarian.
If we’re just looking at future states of the world, then consider four possible futures: your (isolated hermit) granddaughter exists and has a happy life, your granddaughter exists and has a miserable life, your granddaughter does not exist because she died, your granddaughter does not exist because she was never born.
It seems to me that if utilitarianism is to mean anything then the utility of the last two options should be the same—if we’re allowed to assign utility values to the history of whether she was born and died, even though both possible paths result in the same world-state, then it would be equally valid to assign different utilities to different actions that people took even if they turned out the same, and e.g. virtue ethics would qualify as a particular kind of utilitarianism.
If we accept that the utility of the last two options is the same, then we have an awkward dilemma. Either this utility value is higher than option 2, meaning that if someone's life is sufficiently miserable, it's better to kill them than allow them to continue living. Or it's lower, meaning that it's always better to give birth to someone than not. Worse, if your first granddaughter was going to be miserable and your second would be happy, it's a morally good action if you can do something that kills your first granddaughter but gives rise to the birth of your second granddaughter. It's weirdly discontinuous to say that your first granddaughter's preferences become valid once she's born. Does that mean that killing her after she's born is a bad thing, but if, before she's born, you set up some Rube Goldberg contraption that will kill her after she's born, then that's a good thing?
Whatever action I take right now, eventually the macroscopic state of the universe is going to look the same (heat death of the universe). Does this mean the utilitarian is committed to saying that all actions available to me are morally equivalent? I don’t think so. Even though the (macroscopic) end state is the same, the way the universe gets there will differ, depending on my actions, and that matters from the perspective of preference utilitarianism.
What, then, would you say is the distinction between a utilitarian and a virtue ethicist? Are they potentially just different formulations of the same idea? Are there any moral systems that definitely don’t qualify as preference utilitarianism, if we allow this kind of distinction in a utility function?
Do you maybe mean the difference between utilitarianism and deontological theories? Virtue ethics is quite obviously different, because it says the business of moral theory is to evaluate character traits rather than acts.
Deontology differs from utilitarianism (and consequentialism more generally) because acts are judged independently of their consequences. An act can be immoral even if it unambiguously leads to a better state of affairs for everyone (a state of affairs where everyone’s preferences are better satisfied and everyone is happier, say), or even if it has absolutely no impact on anyone’s life at any time. Consequentialism doesn’t allow this, even if it allows distinctions between different macroscopic histories that lead to the same macroscopic outcome.
No, deontologists are simply allowed to consider factors other than consequences.
That's true, but they have preferences before you kill them. In the case of contraception, there is no being that ever had preferences.
They never do “become nonexistent”. You just happen to have found one of their edges.
Yes, but there may be a moral difference between frustrating a preference that once existed, and causing a preference not to be formed at all. See my reply to the original question.
Even within pleasure- or QALY-utilitarianism, which seems technically wrong, you can avoid this by recognizing that those possible people probably exist regardless in some timeline or other. I think. We don’t understand this very well. But it looks like you want lots of people to follow the rule of making their timelines good places to live (for those who’ve already entered the timeline). Which does appear to save utilitarianism’s use as a rule of thumb.
From a classical utilitarian perspective, yeah, it’s pretty much a wash, at least relative to non-fatal crimes that cause similar suffering.
However, around here, “utilitarian” is usually meant as “consistent consequentialism.” In that frame we can appeal to motives like “I don’t want to live in a society with lots of murder, so it’s extra bad.”
It takes a lot of resources to raise someone. If you’re talking about getting an abortion, it’s not a big difference, but if someone has already invested enough resources to raise a child, and then you kill them, that’s a lot of waste.
Isn’t negative utility usually more motivating to people anyway? This seems like a special case of that, if we don’t count the important complications of killing a person that pragmatist pointed out.
No, because time is directional.