There is an idea, or maybe an assumption, that I’ve seen mentioned in many LessWrong posts. This is the idea that human life should be maximized: that one of our goals should be to create as many humans as possible. Or perhaps even: that preventing humans from being born is as bad as killing living humans.
I’ve seen this idea used to argue for a larger point, but I haven’t yet seen arguments to justify the idea itself. I only have some directional notions:
- I understand not wanting human life, and possibly the various human cultures, to die out, and we should make sure there are enough humans to prevent that. This in no way necessitates maximization, though.
- If you accept the grabby aliens model, it follows that humans should be grabby, because otherwise grabby aliens will eventually cause us to die out. This would also imply maximizing human life across the galaxy. However, I get the impression that this isn’t the main reason people want maximization, since the other implication — that we need to be grabby — is almost never mentioned in the relevant posts.
- I could see an argument for maximizing utility under a utilitarian framework, where you argue that creating more lives would create more potential utility. However, this means that:
  - you should also actually create the utility for all these new lives, or they will not add to (or may even subtract from) your utility calculation; simply wanting to create lives without considering their living conditions does not take this into account, and
  - it is possible that maximizing animal life, or perhaps alien or artificial life, would create more utility, as these lives might be optimized with far less effort.
So I would like to hear, from people who actually hold the “maximizing human life” position, your explanations for why you hold it. (Or pointers to a source or framework that explains it.)