Upvoted, but as someone who, without quite being a total utilitarian, at least hopes someone might be able to rescue total utilitarianism, I don’t find much to disagree with here. Points 1, 4, 5, and 6 are arguments against certain claims that total utilitarianism should be obviously true, but not arguments that it doesn’t happen to be true.
Point 2 states that total utilitarianism won’t magically implement itself and requires “technology” rather than philosophy; that is, people have to come up with specific contingent techniques of estimating utility, rather than just reading it off via a simple method which can be proven mathematically perfect. But we have some Stone Age utility-comparing technologies like money and the popular vote, and QALYs might be metaphorically a Bronze Age technology. I suppose I take it on faith that there’s a lot of room for more advanced technology before we hit mathematical limits.
That leaves the introductory paragraph and Point 3 as the only places I still disagree:
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness (or preference satisfaction or welfare).
In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don’t give credit for creating potential people, isn’t most people’s preference not to be killed enough to stop preference utilitarians from killing them?
And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that “If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira.” You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already pre-suppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.
Can you explain this further? If we don’t allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.
The fact that the repugnant conclusion has “repugnant” right in the name suggests that most people don’t want it. Therefore if total utilitarianism is about satisfying the preferences of as many people as possible as much as possible, and it results in a conclusion nobody prefers, that should be a red flag.
If existing people understand the repugnant conclusion, then they will understand that a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.
So I don’t see what you mean when you say this reasoning “pre-supposes total utilitarianism”. It presupposes people’s intuitive moral preference for a happy world full of culture over a just-barely-not-unhappy world without, and it pretends we can solve the aggregation problem, but where’s the vicious self-reference?
If we don’t allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.
That’s Peter Singer’s view, prior-existence instead of total. A problem here seems to be that creating a being in intense suffering would be ethically neutral, and if even the slightest preference for doing so exists, and if there were no resource trade-offs in regard to other preferences, then creating that miserable being would be the right thing to do. One can argue that in the first millisecond after creating the miserable being, one would be obliged to kill it, and that, foreseeing this, one ought not have created it in the first place. But that seems not very elegant. And one could further imagine creating the being somewhere unreachable, where it’s impossible to kill it afterwards.
One can avoid this conclusion by axiomatically stating that it is bad to bring into existence a being with a “life not worth living”. But that still leaves problems: for one thing, it seems ad hoc; for another, it would then not matter whether one brings a happy child into existence or one with a neutral life, which again seems highly counterintuitive.
The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You’d end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence. Depending on how much pre-existing beings want to have children, it wouldn’t necessarily entail complete anti-natalism, but the overall goal would at some point be a universe without unsatisfied preferences. Or is there another way out?
The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You’d end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence.
A potential major problem with this approach has occurred to me, namely the fact that people tend to have infinite or near-infinite preferences. We always want more. I don’t see anything wrong with that, but it does create headaches for the ethical system under discussion.
The human race’s insatiable desires make negative total preference-utilitarianism vulnerable to an interesting variant of the various problems of infinity in ethics. Once you’ve created a person who then dies, it is impossible to do any more harm: there’s already an infinite amount of unsatisfied preferences in the world from their existence and death. Creating more people will result in the same total amount of unsatisfied preferences as before: infinity. This would render negative utilitarianism always indifferent to whether one should create more people, which is obviously not what we want.
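The indifference problem can be sketched numerically. This is only a toy model under the comment’s own assumption that each person contributes infinitely many unsatisfied preferences, and that a world’s “badness” under negative total preference-utilitarianism is the sum of those:

```python
# Toy model of negative total preference-utilitarianism:
# a world's badness is its total of unsatisfied preferences.
# Assumption (from the comment above): each person contributes
# infinitely many unsatisfied preferences over life and death.
UNSATISFIED_PER_PERSON = float("inf")

def badness(population: int) -> float:
    """Total unsatisfied preferences for a given population size."""
    if population == 0:
        return 0.0
    return population * UNSATISFIED_PER_PERSON

# Once at least one person exists, adding more changes nothing:
print(badness(1))                        # inf
print(badness(1_000_000))                # inf
print(badness(1) == badness(1_000_000))  # True -> the theory is indifferent
```

Since IEEE-754 infinity absorbs any finite multiplier, every nonzero population scores the same, which is exactly the indifference complained about above.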
Even if you posit that our preferences are not infinite, but merely very large, this still runs into problems. I think most people, even anti-natalists, would agree that it is sometimes acceptable to create a new person in order to prevent the suffering of existing people. For instance, I think even an antinatalist would be willing to create one person who will live a life with what an upper-class 21st Century American would consider a “normal” amount of suffering, if doing so would prevent 7 billion people from being tortured for 50 years. But if you posit that the new person has a very large, but not infinite, number of preferences (say, a googol) then it’s still possible for the badness of creating them to outweigh the torture of all those people. Again, not what we want.
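The merely-very-large variant works out the same way. Another toy calculation, using the comment’s own illustrative numbers and an assumed (arbitrary) scoring of one unit of badness per person-second of torture:

```python
# Assumption: one new person carries a googol of unsatisfied preferences.
GOOGOL = 10**100
creation_badness = GOOGOL

# 7 billion people tortured for 50 years, scored at an assumed
# one unit of badness per person-second of torture.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60
torture_badness = 7_000_000_000 * 50 * SECONDS_PER_YEAR

# The single new life "outweighs" the mass torture:
print(creation_badness > torture_badness)  # True
```

The torture total comes to roughly 10**19 units, so any per-second scoring short of astronomically large still loses to a googol, illustrating why the finite version inherits the problem.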
Hedonic negative utilitarianism doesn’t have this problem, but it’s even worse: it implies we should painlessly kill everyone ASAP! Since most antinatalists I know believe death to be a negative thing, rather than a neutral thing, they must be at least partial preference utilitarians.
Now, I’m sure that negative utilitarians have some way around this problem. There wouldn’t be so many passionate advocates for it if it could be killed by a logical conundrum like this. But I can’t find any discussion of this problem after doing some searching on the topic. I’m really curious to know what the proposed solution is, and would appreciate it if someone told me.
Sure, existing people tend to have such preferences. But hypothetically it’s possible that they didn’t, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.
One possibility might be phrasing it as “Maximize preference satisfaction for everyone who exists and ever will exist, but not for everyone who could possibly exist.”
This captures the intuition that it is bad to create people who have low levels of preference satisfaction, even if they don’t exist yet and hence can’t object to being created, while preserving the belief that existing people have a right to not create new people whose existence would seriously interfere with their desires. It does this without implying anti-natalism. I admit that the phrasing is a little clunky and needs refinement, and I’m sure a clever enough UFAI could find some way to screw it up, but I think it’s a big step towards resolving the issues you point out.
EDIT: Another possibility that I thought of is setting “creating new worthwhile lives” and “improving already worthwhile lives” as two separate values that have diminishing returns relative to each other. This is still vulnerable to some forms of repugnant-conclusion-type arguments, but it totally eliminates what I think is the most repugnant aspect of the RC—the idea that a Malthusian society might be morally optimal.
Thank you. Apparently total utilitarianism really is scary, and I had routed around it by replacing it with something more usable and assuming that was what everyone else meant when they said “total utilitarianism”.
I suppose I take it on faith that there’s a lot of room for more advanced technology before we hit mathematical limits.
Yes, yes, much progress can (and will) be made formalising our intuitions. But we don’t need to assume ahead of time that the progress will take the form of “better individual utilities and definition of summation” rather than “other ways of doing population ethics”.
In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don’t give credit for creating potential people, isn’t most people’s preference not to be killed enough to stop preference utilitarians from killing them?
Yes, the act is not morally neutral in preference utilitarianism. In those cases, we’d have to talk about how many people we’d have to create with satisfiable preferences to compensate for that one death. You might not give credit for creating potential people, but preference total utilitarianism gives credit for satisfying more preferences—and if creating more people is a way of doing this, then it’s in favour.
If existing people understand the repugnant conclusion, then they will understand that a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.
This is not preference total utilitarianism. It’s something like “satisfying the maximal amount of preferences of currently existing people”. In fact, it’s closer to preference average utilitarianism (satisfy the current majority preference) than to total utilitarianism (probably not exactly that either; maybe a little more path dependency).
So I don’t see what you mean when you say this reasoning “pre-supposes total utilitarianism”.
Most reasons for rejecting the reasoning that blocks the repugnant conclusion pre-suppose total utilitarianism. Without the double negative: most justifications of the repugnant conclusion pre-suppose total utilitarianism.
In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness (or preference satisfaction or welfare).
In hedonic utilitarianism, yes.
Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not.
A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It’s a more accurate claim, and I personally would accept it.
Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not.
These are all practical considerations. Most people believe it is wrong in principle to kill someone and replace them with a being of comparable happiness. You don’t see people going around saying:
“Look at that moderately happy person. It sure is too bad that it’s impractical to kill them and replace them with a slightly happier person. The world would be a lot better if that were possible.”
I also doubt that an aversion to violence is what prevents people from endorsing replacement. You don’t see people going around saying:
“Man, I sure wish that person would get killed in a tornado or a car accident. Then I could replace them without breaking any social covenants.”
I believe that people reject replacement because they see it as a bad consequence, not because of any practical or deontological considerations. I wholeheartedly endorse such a rejection.
A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It’s a more accurate claim, and I personally would accept it.
The reason that claim seems acceptable is that, under many understandings of how personal identity works, if a copy of someone exists, they aren’t really dead. You killed a piece of them, but there’s still another piece left alive. As long as your memories, personality, and values continue to exist, you still live.
The OP makes it clear that what they mean is that total utilitarianism (hedonic and otherwise) maintains that it is morally neutral to kill someone and replace them with a completely different person who has totally different memories, personality, and values, providing the second person is of comparable happiness to the first. I believe any moral theory that produces this result ought to be rejected.
Well don’t existing people have a preference about there not being such creatures? You can have preferences that are about other people, right?
Sure, existing people tend to have such preferences. But hypothetically it’s possible that they didn’t, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.
This might be one reason why Eliezer talks about morality as a fixed computation.
P.S. Also, doesn’t the being itself have a preference for not-suffering?
Yes, the act is not morally neutral in preference utilitarianism. In those cases, we’d have to talk about how many people we’d have to create with satisficiable preferences, to compensate for that one death. You might not give credit for creating potential people, but preference total utilitarianism gives credit for satisfying more preferences—and if creating more people is a way of doing this, then it’s in favour.
Shouldn’t we then just create people with simpler and easier to satisfy preferences so that there’s more preference-satisfying in the world?
Indeed, that’s a very counterintuitive conclusion. It’s the reason why most preference-utilitarians I know hold a prior-existence view.