I think “the very repugnant conclusion is actually fine” does pretty well against its alternatives. It’s entirely possible that our intuitive aversion to it comes from an inability to wrap our brains around either (a) how huge the numbers of “barely worth living” lives would have to be in order to make the very repugnant conclusion work, or (b) something confusing about the idea of “making it possible for additional people to exist.”
While this doesn’t sound crazy to me, I’m skeptical that my anti-VRC intuitions (VRC: the very repugnant conclusion) can be explained by these factors. I think you can get something “very repugnant” on scales that our minds can comprehend (and not involving lives that are “barely worth living” by classical utilitarian standards). Suppose you can populate* some twin-Earth planet with either (a) 10 people with lives equivalent to the happiest person on real Earth, or (b) one person with a life equivalent to the most miserable person on real Earth, plus 8 billion people with lives equivalent to the average resident of a modern industrialized nation.
I’d be surprised if a classical utilitarian thought the total happiness minus suffering in (b) was less than in (a). Heck, 8 billion is probably generous; far fewer would likely suffice for (b) to come out ahead. But I would definitely choose (a).
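To make the classical utilitarian bookkeeping behind that claim concrete, here is a minimal sketch. The welfare values h, w, and m, and the ratios between them, are my own illustrative assumptions, not anything the argument depends on:

```latex
% A rough sketch of the classical utilitarian totals for (a) and (b).
% All specific welfare values below are assumed for illustration.
\documentclass{article}
\begin{document}
Let $h$ be the lifetime welfare of Earth's happiest person, $w$ the
lifetime welfare of an average resident of a modern industrialized
nation, and $m < 0$ the lifetime welfare of Earth's most miserable
person. The two totals are
\[
  W_{(a)} = 10\,h, \qquad W_{(b)} = m + 8 \times 10^{9}\,w .
\]
Even on assumptions very generous to (a), say $h = 1000\,w$ and
$m = -10^{6}\,w$, we get
\[
  W_{(a)} = 10^{4}\,w \ll W_{(b)} \approx 8 \times 10^{9}\,w ,
\]
so classical utilitarianism favors (b) by several orders of magnitude.
\end{document}
```

On this kind of accounting, the ordering is robust: making m far more negative, or h far larger, doesn’t change which option the classical utilitarian picks.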
To me, the very-repugnance just gets much worse the more you scale things up. I also find that basically every suffering-focused EA I know is not scope-neglectful about the badness of suffering (at least when it’s sufficiently intense), nor in any area other than population ethics. So it would be pretty strange if we just happened to be falling prey to that error in thought experiments where there’s another explanation, namely that we consider suffering especially important, which is consistent with our intuitions about cases that don’t involve large numbers.
* As usual, ignore the flow-through effects on other lives.