Okay, so Parfit’s paradox doesn’t prove that we should make more people if our resources are constant. And it doesn’t prove that we should make more people when we get more resources. But it might still prove that we should agree to make more people and more resources if it’s a package deal.
More concretely, if you had a button that created (or made accessible) one additional unit of resource and a million people using that resource to live lives barely worth living, would you press that button? Grabbing only the resources and skipping the people isn’t on the menu of the thought experiment. It seems to me that if you would press that button, and also press the next button that redistributes all existing resources equally among existing people, then the repugnant conclusion isn’t completely dead...
It does, but by definition.
Let X and Y be populations. Each population has a number of people and an amount of resources. Resources are distributed evenly, so the average utility of a population, and each individual’s utility, is simply resources divided by people. We will say the “standard of living”, the level at which a life is ‘barely worth living’, is a utility of 1. And we will say we have reached Z when utility falls below the standard of living. These are our definitions.
For numbers, let’s say X and Y start out with 100 people and 500 resources, giving each a utility of 5. This is good!
In X, we will perform the false method: simply adding people. In one step, we go to 105 people (utility ≈ 4.76, still good), then 110 (≈ 4.55), and after 81 steps we will have reached our repugnant Z, with 505 people and 500 resources giving us a utility of ≈ 0.99.
Now in Y, we will perform the strengthened method: absorb a small population with bare-minimum living standards, thus bringing everyone down slightly. In one step, we go to 105 people and 505 resources (utility ≈ 4.81, still good), then 110 and 510 (≈ 4.64, still good), and then Z arrives….
No, it doesn’t. Utility in Y will asymptotically approach 1 from above and we will never reach Z. Thus, the repugnant conclusion is dead.
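For concreteness, here is a minimal sketch of both trajectories (in Python, using the numbers above; the helper name “trajectory” is just mine). X falls below the standard of living at step 81, while Y never does:

    # Model from above: everyone's utility = resources / people.
    def trajectory(people, resources, resources_per_step, steps):
        utilities = []
        for _ in range(steps):
            people += 5                      # each step adds 5 people
            resources += resources_per_step  # X adds 0 resources, Y adds 5
            utilities.append(resources / people)
        return utilities

    x = trajectory(100, 500, 0, 100)  # the false method: people only
    y = trajectory(100, 500, 5, 100)  # the strengthened method: package deal

    # X reaches Z (utility < 1) at step 81: 505 people, 500 resources.
    print(next(i + 1 for i, u in enumerate(x) if u < 1))  # -> 81
    # Y's utility is 1 + 400 / people, which stays above 1 at every step.
    print(min(y) > 1)  # -> True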
You may argue that “just barely above the absolute bare minimum” is not worth living, but you won’t get very far: previously, we defined any life above the minimum standard as worth living. So if you say that, instead, 2 utility is the minimum worth living for, Y will asymptotically approach 2 (each absorbed person now arrives with 2 resources rather than 1, so utility is (500 + 10n)/(100 + 5n), which falls toward 2 but never reaches it). And you can hardly argue that “just above 2” isn’t worth living for, because you just said before that 2 is the minimum! So yes, the repugnant conclusion is truly dead.
(An analogy for this population Y is colonising new planets: the older planets will be affluent, but the frontier new colonies will be hardscrabble and just barely worth it. But this is not a repugnant conclusion! This is like Firefly, and that would be badass!)
Or you may argue that, comparing our original Y to the Y++ we reach after many steps, it’s obvious that Y is better. But this won’t get you far either, because in what way is Y better than Y++? If you tell me this comparison beforehand, I will no longer desire to add people when doing so would reverse that comparison; and if you don’t tell me, well, that’s unfair—it’s no surprise that optimising for one criterion might abandon other criteria, especially ones it didn’t know about.
Footnote: I tried this:

    b = 500              # resources
    a = 100              # people
    while (b / a) > 1    # while average utility stays above the standard of living
        b += 5
        a += 5
    end

and it didn’t terminate, thus the student became enlightened.
I used almost this exact line in a discussion with my girlfriend about a week ago (talking about Everything Matters!).
I dislike this post. I don’t mean this to be a personal attack and I don’t want to come off as hostile, but I do want to make my objections known. I am choosing to state my reasons in lieu of downvoting.
First, “It does, but by definition.” is clearly false, otherwise you wouldn’t spend 6 paragraphs explaining it. This is something of a pet peeve of mine from grading homework, but whatever, it’s not important.
More importantly, it’s not really addressing the problems being discussed here. The discussion is whether 100 people at 500 resources is better than your asymptotically-worthless massive population, which is something that you don’t mention at all. Instead, you argue that if we have N+400 resources and N people, and each person needs 1 resource to barely survive, then everyone survives when resources are evenly distributed, no matter what N you pick. Okay, but the conclusion is somehow “the repugnant conclusion is dead”? To be honest, I thought you were trying to argue in favor of the repugnant conclusion, at least in the specialized case of a universe that offers you N resources for every additional N people. But the only conclusion I see you really reaching is that a lot of people at a better-than-dead state is better than a world where there aren’t people—this doesn’t strike me as very exciting.
It seems fairly clear to me that one way in which Y is better than Y+ is that Y has greater average utility.
That said, I think most of my dislike for this post is caused by the tone and manner of expression. It was fairly disorganized and overly long. The tone was demeaning and combative: it assumes the reader will disagree with basic premises and leans on phrases like “thus the student became enlightened”. Note how the top-level post gives the opposing voice to a fictional character rather than forcing it upon the reader—this is a much friendlier approach.
Lastly, can you tell me where you bought your Halting Machine? I wouldn’t mind one for myself… ;)
Yeah, on reflection the post is very unclear. I agree with the quoted sentiment (that Y beats Y+ on average utility), but the point I should have made was that we get to Y+ by a process that reduces average utility (redistributing resources evenly), so it doesn’t seem surprising or confusing that Y has greater average utility.
As in, “human resources”.
Or any scenario where adding more people increases our capacity to take advantage of available resources (such as most agricultural communities throughout history).
But it might still prove that we should agree to make more people and more resources if it’s a package deal.

You’re right, my argument does not prohibit the particular hypothetical you offered up. The one quibble I have is that I’m not sure how much “one unit” of resources is, but it would have to be a sizable amount for a million people to live lives barely worth living on it.
In fact, your hypothetical is pretty much structurally identical to the cable bill hypothetical that Bob offers up. And, if you recall, Alice does not disagree that buying Package A+ would be irrational if the government really was going to give her $50 if she did it.
So I might have only killed the repugnant conclusion 99.9% dead. I’m content with that for now: I’ve eliminated it as a possibility from any situation that is remotely likely to happen in real life, and that’s good enough.
As for whether I’d push the button? I probably wouldn’t, even though my argument doesn’t exclude it. However, I don’t know if that’s because there is some other moral objection to the repugnant conclusion that I haven’t articulated yet, or if it’s just because I can be kind of selfish sometimes.
I’ve eliminated it as a possibility from any situation that is remotely likely to happen in real life

Hmm, I can imagine situations where you can’t extract the resources without adding people. For example, should humans settle a place if it can support life, but only at a low level of comfort, and exporting resources from there isn’t economically viable?
It seems to me that if the settlement is done voluntarily, it must fulfill some preference that the settlers value more than comfort: freedom, adventure, or the feeling that you’re part of something bigger, to name three possibilities. For that reason their lives couldn’t really be said to have lowered in quality. If it’s done involuntarily, my first instinct is to say that no, we shouldn’t do it, although you could probably get me to say yes by introducing some extenuating circumstance, like it being the only way to prevent extinction.
Of course, this then brings up the issue of whether or not the settlers should have children who might not feel the same way they do. I’m much less sure about the morality of doing that.
Yes, the scenario involves adding people, not just moving them around. That’s what makes population ethics tricky.
Such as, for example, the Moon or Mars?
I would say yes, to the extent that it reduces species ex-risk to have those extra people. (For instance, having a Mars colony as per RichardKennaway’s example would reduce ex-risk.) However, it is possible that adding extra people in some cases might instead increase ex-risk (say, a slum outside of a city which might breed disease that spreads to the city), and in that case I might say no.
That’s a separate problem with the repugnant conclusion that bothers me sometimes. It appears that at some point the average function starts greatly increasing ex-risk, even though it doesn’t do that at the beginning. If you are down to Muzak and Potatoes, a potato famine wipes you out.
So if you have “Potatoes, Carrots and Muzak” in Pop Y, and “Potatoes” in Pop Z, averaging it out to “Potatoes and Muzak” for everyone might increase average happiness, and Pop Z wouldn’t mind, but it wouldn’t be safe for Pop Y and Z together as a species, because they lose the safety of being able to come back from a potato famine.
That also seems to come with a built-in idea of what kind of averaging is acceptable and where there are limits on averaging. Taking from a richer population’s status to improve a poorer population’s health would be fine. Taking from a richer population’s health or safety to improve a poorer population’s safety would be unreasonable.
And a life where your health and safety are well guaranteed certainly sounds a hell of a lot better than “barely worth living”, so it doesn’t descend into repugnance.
Basically, if instead of just looking at Population and Utility, you look at Population, Utility, and Ex-Risk, the problem seems to vanish. It says “yes, add” and “yes, average” when I want it to add and average, and says “no, don’t add” and “no, don’t average” when I want it to not add and not average.
You could also just say “Well, ex-risk is part of my utility function”, but that seems to lead to tricky calculation questions such as:

Approximately what is the ex-risk of a life barely worth living, utility-wise?

Presumably, less ex-risk would make the life more worth living, and more ex-risk would make the life less worth living? Is that still the case here?

Can it flip the sign? Can an increase to ex-risk and nothing else make a life which is currently worth living not worth living?
Maybe I need to answer those questions, although I’m not sure where to start. Or maybe I just need to separate out multiplicative utility and additive utility?
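To make that multiplicative/additive split concrete, here is a toy sketch (Python; the function names, the penalty term, and every number are invented purely for illustration):

    # Toy contrast: ex-risk as an additive penalty vs. a multiplicative discount.
    # "xrisk" is a probability of extinction; all numbers are invented.

    def additive_value(people, utility_each, xrisk_penalty):
        # Ex-risk enters as just another (negative) term in the sum.
        return people * utility_each - xrisk_penalty

    def multiplicative_value(people, utility_each, xrisk):
        # Ex-risk discounts the whole total: value accrues only if we survive.
        return (1 - xrisk) * people * utility_each

    # A huge population of lives barely worth living swamps any fixed additive
    # penalty, but a multiplicative discount scales with the total.
    print(additive_value(10**9, 1.01, 10**6))      # still hugely positive
    print(multiplicative_value(10**9, 1.01, 0.9))  # cut by 90%

On the additive reading, a large enough population swamps any fixed ex-risk penalty; on the multiplicative reading, ex-risk close to certainty wipes out almost all of the value, no matter how large the sum was.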
That’s a separate problem with the repugnant conclusion that bothers me sometimes. It appears that at some point the average function starts greatly increasing ex-risk, even though it doesn’t do that at the beginning.

This criticism has been made before. I think the standard reply was that it may indeed be the case that we would need to have a life somewhat above the level of “barely worth living” in order to guard against the possibility that some sort of disaster would lower the quality of the people’s lives to such an extent that they were no longer worth living. However, such a standard of living would likely still be low enough for the Repugnant Conclusion to remain repugnant.
I find it repugnant to even consider creating people with lives worse than the current average. So some resources will just have to remain unused, if that’s the condition.
What do you find repugnant about it?
Intentionally creating people less happy than I am. Think about it from the parenting perspective. Would you want to bring unhappy children into the world (your personal happiness level being the baseline), if you could predict their happiness level with certainty?
That is, your life is the least happy life worth living? If you reflectively endorse that, we ought to have a talk on how we can make your life better.
This, in conjunction with some other stuff I’ve been working on, prompted me to rethink some things about my priorities in life. Thanks!
Again, a misunderstanding. See my other reply.
It’s not clear to me that this is a misunderstanding. I think that my life is pretty dang awesome, and I would be willing to have children that are significantly less happy than I am (though, ceteris paribus, more happiness is better). If you aren’t, reaching out with friendly concern seems appropriate.
Remember, not “provided I already have children, I’m OK with them being significantly less happy than I am”, but “knowing for sure that my children will be significantly less happy than I am, I will still have children”. This may not give you pause, but it probably will to most (first-world) people.
I suspect that most first-world people are significantly less happy than many happy people on LW, and that those people on LW would still be very happy to have children who were as happy as average first-worlders, though reasonably hoping to do better.
Well… hrm.
I have evidence that if my current happiness level is the baseline, I prefer the continued existence of at least one sub-baseline-happy person (myself) to their nonexistence. That is, when I go through depressive episodes in which I am significantly less happy than I am right now, I still want to keep existing.
I suspect that generalizes, though it’s really hard to have data about other people’s happiness.
It seems to me that if I endorse that choice (which I think I do), I ought not reject creating a new person whom I would otherwise create, simply because their existence is sub-baseline-happy.
That said, it also seems to me that there’s a level of unhappiness below which I would prefer to end my existence rather than continue my existence at that level. (I go through periods of those as well, which I get through by remembering that they are transient.) I’m much more inclined to treat that level as the baseline.
This does not contradict what I said. Creation != continued existence, as emphasized in the OP. There is a significant hysteresis between the two. You don’t want to have children less happy than you are, but you won’t kill your own unhappy children.
Agreed that creation != continued existence.
There are situations under which I would kill my own unhappy children. Indeed, there are even such situations where, were they happier, I would not kill them. However, “less happy than I am” does not describe those situations.
Looks like we agree, then.
Intentionally creating people less happy than I am

This probably isn’t the same as “creating people with lives worse than the current average”.

your personal happiness level being the baseline

Why would that be the baseline? I’m lucky enough to have a high happiness set point, but that doesn’t mean I think everyone else has lives that are not worth living.

Would you want to bring unhappy children into the world?

Unhappy as in net negative for their life? No. Unhappy as in “less happy than average”? Depends what the average is, but quite possibly.
I’ve considered this possibility as well.
One argument that’s occurred to me is that adding more people in A+ might actually be harming the people in population A, because the people in population A would presumably prefer that there not be a bunch of desperately poor people who need their help kept forever out of reach, and adding the people in A+ violates that preference. Of course, the populations are not aware of each other’s existence, but it’s possible to harm someone without their knowledge: if I spread dirty rumors about someone, I’d say that I harmed them even if they never find out about it.
However, I am not satisfied with this argument; it feels a little too much like a rationalization to me. It might also suggest that we ought to be careful about how we reproduce, in case it turns out that there are aliens out there somewhere living lives far more fantastic than ours.
Instrumentally, if absolutely no interaction, not even an indirect one, is possible between the two groups, then there is no way one group can harm the other.
True, but only because rumors can harm people, so the “no interaction” rule is broken.
I’m not sure about that. I don’t think most people would want rumors spread about them, even if the rumors did nothing other than make some people think worse about them (but they never acted on those thoughts).
Similarly, it seems to me that someone who cheats on their spouse and is never caught has wronged their spouse, even if their spouse is never aware of the affair’s existence, and the cheater doesn’t spend less money or time on the spouse because of it.
Now, suppose I have a strong preference to live in a universe where innocent people are never tortured for no good reason. Now, suppose someone in some far-off place that I can never interact with tortures an innocent person for no good reason. Haven’t my preferences been thwarted in some sense?
How do you know it is not happening right now? Since there is no way to tell, by your assumption, you might as well assume the worst and be perpetually unhappy. I warmly recommend instrumentalism as a workable alternative.
There is no need to be unhappy over situations I can’t control. I know that awful things are happening in other countries that I have no control over, but I don’t let that make me unhappy, even though my preferences are being perpetually thwarted by those things happening. But the fact that it doesn’t make me unhappy doesn’t change the fact that it’s not what I’d prefer.
Indeed, I immediately thought “what’s the difference between the government giving you $50 that you can only spend on cables, and it just giving you cables?”.
There isn’t one. The reason I phrased it that way was to help keep the link between the various steps in the thought experiment as clear as possible.
I think a button redistributing all existing resources equally among existing people is one I’d almost certainly not press.
This might be getting into semantics, but I don’t think your proposed dilemma really qualifies as the RC anymore. The RC was interesting because it seemed to derive an obviously unacceptable conclusion (a world full of people whose lives are barely worth living) from premises / steps that were all individually obviously acceptable. Yours employs a step (create people whose lives are barely worth living, without getting enough extra resources to make up for it) that’s already ethically ambiguous, due to clearly leading to a world with a population dominated by people whose lives are barely worth living.
In my argument the button could create people and resources leading to a standard of living just below the current average, like in the original RC.
Point taken, though that’s still a more morally ambiguous step than the equivalent in the original RC. There are already plenty of people today who think that people shouldn’t have more children due to the Earth’s resources being limited. That’s not an exact mapping to “creating new people that gave us some small amount of extra resources”, but it’s close and brings to mind the same arguments.