Utilitarianism doesn’t say that I have to value potential people at anything approaching the level of value I assign to living persons.
In ten years' time, you see a nine-year-old child fall into a pond. Do you save her from drowning? If so, then you, in 2023, place value on a person who wasn't yet born in 2013. If you don't value that person now, in 2013, you're temporally inconsistent.
Obviously this isn't utilitarianism, but I think many people are unaware of this argument, despite its following from very common intuitions.
Valuing potential people without an extremely high discount rate also leads one to be strongly pro-life, to be against birth control programs in developing nations, etc.
Is the net desirability of these programs so self-evident that it constitutes evidence against caring about future people? You could say "but they're good for economic growth and the autonomy of women, etc.", but those are reasons that would support the programs even if we cared about future people. I think that, in general, the desirability of contraception should be an output of, rather than an input to, our expected value calculations.
On the other hand, if you’re the sort of person who doesn’t care about people far away in time, it might be sensible not to care about people far away in space.
What do you mean by "place value on people"? Your example is explained by placing value on the non-occurrence (or lateness) of their death. This is quite independent of placing value on the existence of people, and is therefore irrelevant to contraception, the continuation of humanity, etc.
You care about the deaths of people without caring about people?
What if I changed the example, and it's about whether or not to help educate the child, or comfort her, or feed her? Do we care about the education, hunger, and happiness of the child, too, without caring about the child?
You can say that a death averted or delayed is a good thing without being committed to saying that a birth is a good thing. That’s the point I was trying to make.
Similarly, you can “care about people” in the sense that you think that, given that a person exists, they should have a good life, without thinking that a world with people who have good lives is better than a world with no people at all.
No you can't. Consider three worlds, differing only with regard to person A.
In world 1, U(A) = 20.
In world 2, U(A) = 10.
In world 3, U(A) = undefined, as A does not exist.
Which world is best? As we agree that people who exist should have a good life, U(1) > U(2). Assume U(2) = U(3), as per your suggestion that we're unconcerned about people's existence or non-existence. Then, by transitivity of preference, U(1) > U(3). So we do care about A's existence or non-existence.
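The three-world argument can be checked mechanically. A minimal sketch, assuming (as above) that a world's value is just A's welfare, and encoding the disputed U(2) = U(3) assumption as a default parameter (the scoring rule is my own illustration, not a standard construction):

```python
# U(A) in each of the three worlds; None marks that A does not exist.
worlds = {1: 20, 2: 10, 3: None}

def world_value(u_a, nonexistence_value=10):
    """Value a whole world by A's welfare alone.

    Scoring a world without A at 10 encodes the assumption under
    dispute, U(2) = U(3): indifference to A's existence.
    """
    return nonexistence_value if u_a is None else u_a

u = {w: world_value(u_a) for w, u_a in worlds.items()}

# The transitivity step: U(1) > U(2) and U(2) == U(3) force U(1) > U(3),
# so which world is best does depend on whether A exists.
assert u[1] > u[2]
assert u[2] == u[3]
assert u[1] > u[3]
```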
But U(3) = U(2) doesn’t reflect what I was suggesting. There’s nothing wrong with assuming U(3) ≥ U(1). You can care about A even though you think that it would have been better if they hadn’t been born. You’re right, though, about the conclusion that it’s difficult to be unconcerned with a person’s existence. Cases of true indifference about a person’s birth will be rare.
Personally, I can imagine a world with arbitrarily happy people, and it doesn't feel better to me than a world where those people were never born; and this doesn't feel inconsistent. And as long as the utility I derive from people's happiness is bounded, it isn't.
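The bounded-utility point can be made concrete with a toy saturating function (my own illustration; the exact curve is arbitrary): if the value I derive from others' happiness approaches a ceiling, then valuing the empty world at that ceiling is consistent with never preferring a populated world, however happy its people.

```python
import math

BOUND = 1.0  # the most value I can derive from other people's happiness

def derived_utility(total_happiness):
    # Saturating curve: strictly increasing in happiness, but never
    # exceeds BOUND, however large total_happiness gets.
    return BOUND * (1 - math.exp(-total_happiness))

# More happiness is still better...
assert derived_utility(10) < derived_utility(20)
# ...but if the empty world is valued at BOUND, no amount of happiness
# makes the populated world strictly better.
empty_world_value = BOUND
assert derived_utility(20) <= empty_world_value
```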
I agree; this is excellent.
U(2) = U(3) isn't "a world with people who have good lives is not better than a world with no people at all". That would be U(1) = U(3).