We can deal with this with a thought experiment that engages our intuitions more clearly, since it doesn’t involve futuristic technology: Is it okay to kill a fifteen-year-old who is destined to live a good life if doing so allows you to replace them with someone whose life will be as good as, or better than, the fifteen-year-old’s remaining years would have been? What if the fifteen-year-old in question is disabled, so their life is a little more difficult, but still worth living, while their replacement would be an able person? Would it be okay then?
The answers are obvious: no and no. Once someone exists, keeping them alive and happy is much more important than creating new people. It isn’t infinitely more important; it would be wrong to sterilize the entire human race to prevent one existing person from getting a dust speck in their eye. But it is much, much, much more important.
Life extension isn’t just better than replacement; it is better by far, even if the utility of the person with an extended life is much lower than the utility their replacement would have had.
I suspect that the reason for this is that population ethics isn’t about maximizing utility. If it were, we wouldn’t be trying to create more people; we’d be trying to figure out how to kill off the human race and replace it with another species whose preferences are easier to satisfy.* I believe that the main reason to create new people is that having the human race continue to exist helps fulfill certain ideals, such as Fun Theory and the various complex human values. If you try to do population ethics just by adding up the utility of the creatures being created, you’re doing it wrong.
Now, once we’ve created someone, we do have a responsibility to make sure they have high utility (you can’t unbirth a child). If we know they are going to exist, we should definitely take steps to improve their utility even before they come into existence. And if you’re trying to decide between creating two people who fulfill our ideals equally well, which one would have a higher level of utility is a good tiebreaker. But a person’s utility isn’t the main reason we create them. If I had a choice between making a human with positive utility and making a kiloton of orgasmium, all other things being equal I’d pick the human, because the complex values of a human being further my moral ideals far better than orgasmium does.
*Anyone who disagrees and believes that creating human beings (or nonhuman creatures with human-like values) is the most efficient way to maximize utility should consider the Friendly AI problem. Imagine that someone has just created an AI programmed to “maximize preference satisfaction.” The AI is extremely intelligent and has access to immense resources; all that remains is to switch it on. What is your honest, Bayesian probability that, if you switch on the AI, it will not eventually try to exterminate the human race and replace it with creatures who have cheaper, easier-to-satisfy preferences?