Your examples rely too heavily on “intuitively right” and ceteris paribus conditioning. It is not always the case that five are more important than four.
If there is literally nothing distinguishing the two scenarios except for the number of people—you have no information regarding who those people are, how their life or death will affect others in the future (including the population issues you cite), their quality of life or anything else—then it matters not whether it’s 5 vs. 4 or a million vs. 4. Adding a million people at quality of life C or preventing their deaths is better than the same with four, and any consequentialist system of morality that suggests otherwise contains either a contradiction or an arbitrary inflection point in the value of a human life.
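As a minimal sketch of that claim (my own illustration, with made-up names and a made-up quality-of-life constant): under total utilitarianism with an identical, positive quality of life per person, total utility is strictly increasing in headcount, so preferring four over a million requires the per-life value to change with the count somewhere.

```python
# Hypothetical sketch: with a constant per-life value c > 0, total utility
# n * c is strictly increasing in n, so "more identical lives" always wins.
# The function and constant names are illustrative, not from any source.

def total_utility(n_people: int, quality: float) -> float:
    """Total utility when every life has the same quality of life."""
    return n_people * quality

C = 1.0  # some fixed, positive quality of life

assert total_utility(5, C) > total_utility(4, C)
assert total_utility(1_000_000, C) > total_utility(4, C)

# To prefer 4 over a million, the value of a life would have to vary with
# n -- the "arbitrary inflection point" described above.
```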
and the mere idea has been debunked several times.
The utility monster citation is fascinating because of a) how widely it diverges from all available evidence about human psychology, both with diminishing returns and the similarity of human valences, b) how much improved the thought experiment is by substituting “human” (a thing whose utility I care about) for “monster” (for which I do not), and c) how straightforward it really seems: if it were really the case that there were something 100 times more valuable than my life, then, as a consequentialist, I certainly ought to sacrifice my life for it.
I’ll ignore the assumption made by the second article that human population growth is truly exponential rather than logistic. It further assumes—contrary to the utility monster, I note—that we ought to be using average utilitarianism. Even then, if all things were equal, which the article stipulates they are not, more humans would still be better. The article is simply arguing that that state of affairs does not hold, which may be true. Consequentialism is, after all, about the real world, not only about ceteris paribus situations.
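To make the exponential-versus-logistic distinction concrete (a sketch of my own, not taken from the article, with arbitrary parameter values): an exponential model compounds without bound, while a logistic model saturates near a carrying capacity K.

```python
# Illustrative contrast: exponential growth diverges; logistic growth
# levels off near a carrying capacity K. All parameters are made up.

def exponential(p0: float, r: float, t: int) -> float:
    """Population after t steps of compound growth at rate r."""
    return p0 * (1 + r) ** t

def logistic_step(p: float, r: float, K: float) -> float:
    """One discrete logistic update: growth slows as p approaches K."""
    return p + r * p * (1 - p / K)

K = 10_000.0
p_log = 100.0
for _ in range(200):
    p_log = logistic_step(p_log, 0.05, K)

p_exp = exponential(100.0, 0.05, 200)
# After 200 steps, p_exp has compounded past a million, while p_log has
# settled just below the carrying capacity K.
```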
(Or a constant value for human life but with positive utility assigned to the probability of extinction from independent, chance deaths, following an even more arbitrary and frankly bizarre function.)
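A quick sketch of why that escape route is bizarre (my own illustration, with an assumed independence model and a made-up death probability): if each person dies independently with probability p, the chance that everyone dies is p to the power n, which vanishes rapidly as n grows, so any positive weight on extinction large enough to overturn the headcount comparison must itself grow absurdly fast in n.

```python
# Hedged sketch: with independent per-person death probability p, the
# chance that *everyone* dies is p ** n, which shrinks rapidly with n.
# A constant-value-per-life system that still prefers fewer people would
# need a weight on this vanishing probability that explodes with n.

def extinction_probability(p: float, n: int) -> float:
    """Probability that all n people die, assuming independent deaths."""
    return p ** n

p = 0.5
assert extinction_probability(p, 4) > extinction_probability(p, 1_000_000)
# 0.5 ** 4 is 0.0625, while 0.5 ** 1_000_000 underflows to 0.0 in floats
```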