Well, if I believe that ending existing worthwhile lives is a bad thing to do (perhaps because contemplating doing so feels icky), and I believe generally that what makes an act bad is that the state of the world after performing that act is worse than the state of the world after not performing it, it’s easy (perhaps not justified, but certainly easy) for me to conclude therefrom that more existing worthwhile lives are better than fewer.
It’s possible that deontologists are less subject to this… they might believe that ending existing worthwhile lives is a bad thing to do, but just embrace that as a rule to follow, without any implication that worthwhile lives are themselves valuable.
Hmm. I hadn’t looked at it from the angle of the implications of reversing the “murder is bad” maxim. Thanks.
It doesn’t feel very satisfying, though. The questions of whether to add new people and whether to subtract existing people seem like two entirely different things; trying to address both of them with one ethical recipe about the number of worthwhile lives being led doesn’t seem justified—after all, only one of the questions deals with existing entities. Frankly, I’m more inclined to think it flat-out isn’t justified to conclude that more (worthwhile) lives should be produced just from the rule/preference/whatever that pre-existing worthwhile lives shouldn’t be ended; that conclusion looks like it requires some additional moral framework.
While that may be a true point about deontologists being less subject to it, ad-hoc deontological injunctions aren’t necessary for consistently disliking murder and disliking overpopulation (i.e. situation A). Simply not assigning equal value to existing and non-existing lives does the trick just fine.
...Which is kind of my Square One here. I just don’t see how the whole line of argument leading towards the Repugnant Conclusion even gets off the ground unless equal value is assumed, or why the assumption should be made.
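One minimal way to make that unequal weighting precise (the weight w and the clean existing/potential split are illustrative assumptions of mine, not anything the argument forces): score a world as

$$V = \sum_{i \in \text{existing}} u_i \;+\; w \sum_{j \in \text{merely potential}} u_j, \qquad 0 \le w < 1.$$

With w = 0, ending a worthwhile life strictly lowers V while declining to create one leaves V untouched, so murder stays bad and the population-packing that drives the Repugnant Conclusion never gets started.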
Well, it does seem to me that there are two different questions one can be asking here, and it’s potentially valuable to separate them.
One question is “Why do people assume that making hypothetical people actual increases the net value of the world?” That’s a question about psychology, though it may also be a moral question, or it may not, I’m not really sure. I tend to believe that moral questions are a subset of psychological questions in the same sense that economic questions are, but I don’t feel able to defend that belief against serious challenge.
Anyway, I find that my tendency to assume that moral value is conserved through multiple iterations of a hypothetical reversible operation is a strong one. That is, if killing someone removes utility, then bringing them back to life should add roughly the same utility… otherwise I could in principle kill and resurrect the same person a million times in an instant, ending up with the world in the same state it started in but with massive utility gains or losses coming from no net state change, which seems… bizarre.
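To spell out that conservation intuition (u and v are just labels I’m introducing here): suppose killing someone changes total utility by −u and resurrecting them changes it by +v. Then N kill-and-resurrect cycles leave the world’s state exactly where it started while changing utility by

$$\Delta U = N\,(v - u),$$

so unless v = u, setting N to a million manufactures (or destroys) utility out of no net state change, which is the bizarre outcome just described.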
So the line of argument that leads to the Repugnant Conclusion doesn’t seem all that alien to me, although as with a lot of thought experiments along these lines, my takeaway from the RC is that value has more than one source, and while more people may mean more value all else being equal, when adding more people starts to require trading off other sources of value, all else is no longer equal.
The other question is “Is it justified to assume that making hypothetical people actual increases the net value of the world?” I don’t really know how to even approach this question, except maybe by asking what the expected results of assuming it are, which mostly doesn’t seem like what people who ask this question mean.
Yeah, that first question is the one I’m stuck at. From my point of view it just looks like the utility function that assigns value to hypothetical people has to have an error somewhere… but then again, that might be a byproduct of some problem in me rather than them. Possible, though it sure doesn’t seem likely from in here. Still, I do wonder what psychological fact I’d have to acquire that’d make sense of that perspective.
Anyway, I find that my tendency to assume that moral value is conserved through multiple iterations of a hypothetical reversible operation is a strong one. That is, if killing someone removes utility, then bringing them back to life should add roughly the same utility… otherwise I could in principle kill and resurrect the same person a million times in an instant, ending up with the world in the same state it started in but with massive utility gains or losses coming from no net state change, which seems… bizarre.
Well, trivially, and possibly in violation of the spirit of the presented scenario: effecting those changes (such as switching a simulation of a person on and off at one million Hz) would itself consume energy, and pouring perfectly usable energy into a status-quo outcome is likely to be undesirable.
The other question is “Is it justified to assume that making hypothetical people actual increases the net value of the world?” I don’t really know how to even approach this question, except maybe by asking what the expected results of assuming it are, which mostly doesn’t seem like what people who ask this question mean.
The RC can be justified (at least up to some point) by appealing to probable real-world consequences, such as the added capacity for producing Fun (for an individual) that arises from a civilization of sufficient size. Specialization, gains from trade, added Social Fun opportunities from having lots of people, etc. Things such as mounting interstellar rescue operations also seem easier if the same people who put the ship together don’t need to design it. But all of this seems beside the point—assumptions added on top of the original thought experiment, which, for whatever reason, treats the sum total of existing happiness as an intrinsic good, as if the universe cared how many utilon-bearing humans it contains.
The only avenue of approach to the original thought experiment I was able to think of (one that doesn’t involve added assumptions) is to place the first round of the burden of proof on the side claiming A) that hypothetical people don’t have value, and/or B) that the universe doesn’t care. Even if that side accepts the burden, these things seem provable in principle, however hard the proofs may be to formalize.
But suffice to say I think this burden lies on the other party first, and that such a proof, should it ever be formulated, would not be likely to turn out to make much sense, especially if actually applied. If we can be convinced to privilege hypothetical entities at the expense of currently existing ones, reality-played-straight ends up looking like a crack fic where resources go to those who’re able to devise the most powerful mathematical notations for expressing the very large numbers of hypothetical people they’ve got stashed in their astral Pokéballs.
If we can be convinced to privilege hypothetical entities at the expense of currently existing ones, reality-played-straight ends up looking like a crack fic
Sure. OTOH, if we give hypothetical entities no weight at all, it seems to follow naturally that any project that won’t see benefits within a century or so is not worth doing, since no actual people will benefit from it, merely hypothetical people who haven’t yet been born.
Personally, I conclude that when planning for the future, I should plan based on the expected value of that future, which includes the value of entities I expect to exist in that future. Whether those entities exist right now or not—that is, whether they are actual or hypothetical—doesn’t really matter.
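A minimal sketch of that planning rule, with invented names (Entity, p_exists, value_if_actual) standing in for whatever estimates one actually has:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    p_exists: float         # probability the entity will exist in the future
    value_if_actual: float  # value it contributes if it does come to exist

def expected_future_value(entities):
    # Actual entities get p_exists = 1.0; hypothetical ones get whatever
    # probability we assign to their coming to exist. Beyond that weighting,
    # actual vs. hypothetical makes no difference to the calculation.
    return sum(e.p_exists * e.value_if_actual for e in entities)

# E.g. an existing person, a likely future child, and a longshot colonist
# all enter the same sum, discounted only by their probability of existing:
plan_value = expected_future_value([
    Entity(1.0, 10.0),
    Entity(0.8, 10.0),
    Entity(0.01, 10.0),
])
```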
I’m realizing I made some overly sweeping generalizations about “hypothetical people” there. Whoops.
Personally, I conclude that when planning for the future, I should plan based on the expected value of that future, which includes the value of entities I expect to exist in that future.
This, I don’t disagree with. Optimizing for the people we expect to exist seems fine to me; it’s the normative leap from that to “we should produce more people” that throws me off.
The distinction between those two things gets a little tricky for me to hold on to when one of the things that significantly contributes to my expectations about the existence of someone is precisely how much I value them existing… or, more precisely, how much I expect my future self to value them if and when the opportunity to create them presents itself. E.g., if I really don’t want a child, my expectation of a child of mine existing in the future should be lower than if I really want one.
Conversely, if I expect an entity X to exist a year from now if things remain as they are now, and I judge that X would, if actual, make the world worse, it seems to follow that I should take steps to prevent X from becoming actual.
It seems moderately clear to me that, while I value more people rather than fewer all else being equal, that’s not a particularly important value; there are lots of things that I’ll trade it for.
I just don’t see how the whole line of argument leading towards the Repugnant Conclusion even gets off the ground unless equal value is assumed, or why the assumption should be made.
There are many lines of attack on this, but consider the case where you are choosing between different futures where all currently-existing humans are dead. Then, refusing to assign equal value to existing and non-existing lives doesn’t buy you anything.
I can sort of get behind that reasoning on a thought-experiment level, but it’s harder to put into practice. Setting aside the more artificial scenarios in which a mass extinction is followed by a deliberate repopulation of the world with new people, the transition to any future state would be a gradual process that always involves actual, living people as intermediaries—who, being living, should (IMO) be accordingly valued.
That, and in my mind the privileging of existing over non-existing lives arises not only from existing lives having greater value, but also from non-existing lives having zero value; and multiplying zero by anything still leaves zero. The value of those lives would only become realized when their existence did, at which point we’d still be left with the same problem of resource allocation: if we had gone for a more situation-B-ish solution at some past point in time, the people of our chosen future would have fewer resources per person to enjoy their lives with. In this case, too, it seems correct to plan for situation A.
Hmm. I hadn’t looked at it from the angle of the implications of reversing the “murder is bad” maxim. Thanks.
It doesn’t feel very satisfying, though. The questions of whether to add new people and whether to subtract existing people seem like two entirely different things; trying to address both of them with one ethical recipe about the number of worthwhile lives being led doesn’t seem justified
I agree. My intuition is that, when calculating average wellbeing, you include dead people, and you include whoever is going to end up existing in the future, but not potential people who will never exist unless you take action. So killing someone lowers average wellbeing, but failing to create someone does not. A person who manages to live as long as they possibly can with a good quality of life dies with a big positive contribution to average wellbeing; a person who dies prematurely has a much lower contribution and is a permanent black mark on our collective moral record. A person who never exists, however, isn’t factored into the equation.
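A toy version of that bookkeeping (the three-bucket split and the lifetime-wellbeing numbers are purely illustrative assumptions):

```python
def average_wellbeing(dead, living, future_actual):
    # Count the dead, the living, and whoever will in fact come to exist.
    # Merely potential people who will never exist get no entry at all.
    counted = dead + living + future_actual
    return sum(counted) / len(counted)

# The same world two ways: in one, a person was created and died
# prematurely (lifetime wellbeing 12) and stays in the ledger forever;
# in the other, that person was simply never created and gets no entry.
with_early_death = average_wellbeing([70, 12], [50], [65])  # 49.25
never_created    = average_wellbeing([70], [50], [65])      # ~61.7
```

On this accounting a premature death lowers the average permanently, while declining to create someone leaves it alone, matching the intuition above.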
Of course, taken by itself this might imply the Problem of the Ecstatic Psychopath. But, as I state in my main post, average wellbeing isn’t the only value, though it is an important one. Total wellbeing (having lots of people who contribute fun and other positive values to the world) is important too; sometimes it may be worth risking someone lowering the average if they increase the total.
Setting aside the more artificial scenarios in which a mass extinction is followed by a deliberate repopulation of the world with new people, the transition to any future state would be a gradual process that always involves actual, living people as intermediaries—who, being living, should (IMO) be accordingly valued.
The quandary seems to be that we think we have a duty to make sure that people who don’t exist yet, but will in the future, have satisfied preferences, while also thinking that we have no duty to satisfy the hypothetical preferences of hypothetical people by creating them.
I’ve concluded that the primary reason this seems like a quandary is that we are trying to apply the Person-Affecting Principle to situations where it doesn’t work. The Person-Affecting Principle states, in short, that an event is only good or bad if it makes things better or worse for some specific person. This works fine in situations where the population doesn’t grow. However, in instances where it does, it gives insane-seeming results.
The classic example is this: imagine a plan to store nuclear waste in one of two places. In one place it’ll keep forever; in the other, it will leak and kill everyone in the area in 500 years. However, due to the Butterfly Effect, which plan you pick will result in different people meeting, mating, and having children, so the future generations in the two storage scenarios will be composed of different individuals. The choices therefore aren’t better or worse for any specific person, because they change which people will end up existing. According to the PAP, neither scenario is better or worse.
I’ve concluded that this can be resolved by replacing the Person-Affecting Principle with the World Creating Principle. The World Creating Principle states that an event is good or bad according to whether the world it creates has a higher or lower (average utility)+(total utility)+(equality utility)+(other relevant complex values), whoever the inhabitants end up being. So storing the nuclear waste permanently is a good thing, all other things being equal.
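A sketch of how that comparison might go (the equal weighting of the terms and the dispersion-based equality proxy are my assumptions; the comment above only names the ingredients):

```python
import statistics

def world_score(utilities):
    # World Creating Principle ingredients as listed above: average utility,
    # total utility, an equality term, plus whatever other complex values
    # one cares to add. Equal weights here are an illustrative choice.
    avg = statistics.mean(utilities)
    total = sum(utilities)
    equality = -statistics.pstdev(utilities)  # one possible equality proxy
    return avg + total + equality

# The nuclear-waste case: the two plans produce *different* future people,
# so the Person-Affecting Principle is silent; the WCP just compares worlds,
# whoever the inhabitants turn out to be.
safe_world  = [8, 8, 8, 8]  # permanent storage (illustrative utilities)
leaky_world = [8, 8, 1, 1]  # the leak immiserates later generations

assert world_score(safe_world) > world_score(leaky_world)
```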
The Person-Affecting Principle is a special case of the World Creating Principle, in the same way Newtonian physics is a special case of General Relativity: it’s the World Creating Principle applied in the special case where there is no potential for population growth.
I might expand this thinking into a post at some point.
Interesting… Thanks for weighing in!