If it were morally correct to kill everyone on earth, would you do it?

First consider the following question to make sure we’re on the same page in terms of moral reasoning: social consequences aside, is it morally correct to kill one person to create a million people who would not have otherwise existed? Let’s suppose these people are whisked into existence on a spaceship travelling away from earth at light speed, and they live healthy, happy lives, but eventually die.

I’d argue that anyone who adheres to “shut up and multiply” (i.e. total utilitarianism) has to say yes. Is it better to create one such person than to donate $200 to Oxfam? Is one life worth more than a $200 million donation to Oxfam? Seems pretty clear that the answers are “yes” and “no”, respectively.
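
To make the multiplication explicit, here is a minimal sketch of the chain those two answers imply, writing V(·) for total-utilitarian value; the notation and the linearity assumptions are mine (value adds across created lives, and donations have at-most-linear returns):

```latex
10^{6}\,V(\text{one created life})
  \;>\; 10^{6}\,V(\$200\text{ to Oxfam})
  \;\ge\; V(\$2 \times 10^{8}\text{ to Oxfam})
  \;\ge\; V(\text{one existing life})
```

On those assumptions, a million created lives strictly outweigh the one life lost, so the total utilitarian has to answer yes to the spaceship question.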

Now, suppose we have a newly created superintelligent FAI that’s planning out how to fill the universe with human value. Should it first record everyone’s brain, thus saving them, or should it do whatever it takes to expand outward as quickly as possible? It’s hard to estimate how much recording everyone’s brain would slow things down, but it’s certainly some sort of constraint; depending on the power of the FAI, my guess is that the delay is somewhere between a second and a few hours. If the FAI is going to be filling the universe with computronium simulating happy, fulfilled humans at extremely high speeds, that’s a big deal! A second’s delay across the future light-cone of earth could easily add up to more than the value of every currently living human’s life. It may sound bad to kill everyone on earth just to save a second (or maybe scan only a few thousand people for “research”), but that’s only because of scope insensitivity. If only we understood just how good saving that second would be, maybe we would all agree that it is not only right but downright heroic to do so!
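
To see why a single second can dominate the comparison, here is a hedged back-of-envelope sketch. Every number in it is an illustrative assumption of mine (population rounded to 8 billion, and a made-up figure for how much simulated experience a mature computronium civilization produces per wall-clock second), not anything taken from the scenario itself:

```python
# Hedged back-of-envelope for the "one second's delay" claim above.
# Every constant here is an illustrative assumption, not a figure from the post.

SECONDS_PER_LIFE = 80 * 365.25 * 24 * 3600   # ~2.5e9 seconds in an 80-year life
CURRENT_POPULATION = 8e9                      # people alive today, roughly

# Assume (purely for illustration) that the mature computronium civilization
# generates this many subjective, happy person-seconds per wall-clock second
# across the future light-cone.
PERSON_SECONDS_PER_SECOND = 1e40

# On this toy model, a one-second delay shifts the whole expansion back by one
# second, forfeiting one wall-clock second of output.
lives_lost_to_delay = PERSON_SECONDS_PER_SECOND / SECONDS_PER_LIFE
lives_lost_to_killing_everyone = CURRENT_POPULATION

print(f"cost of a one-second delay : ~{lives_lost_to_delay:.1e} life-equivalents")
print(f"cost of killing everyone   : ~{lives_lost_to_killing_everyone:.1e} life-equivalents")
print("delay is worse on this model:", lives_lost_to_delay > lives_lost_to_killing_everyone)
```

The exact exponent doesn’t matter; any estimate of the second figure that is astronomically larger than 8×10^9 makes the one-second delay cost more, on this accounting, than everyone alive today.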

A related scenario: an FAI that we are very, very sure correctly implements CEV sets up a universe in which everyone gets 20 years to live, starting from an adult transhuman state. It turns out that longer and longer life spans yield diminishing returns in value, and this is the best use of the available computational power. The transhumans have been modified not to have any anxiety or fear about death, and they agree this is the best way to do things. Their human ancestors’ desire for immortality is viewed as deeply wrong, even barbaric. In short, all signs point to this really being the coherent extrapolated volition of humanity.

Besides opinions on whether either of these scenarios is plausible, I’d also like to hear reactions to them as thought experiments. Is this a problem for total utilitarianism or for CEV? Is it an argument for “grabbing the banana” as a species and, if necessary, knowingly making an AI that does something other than the morally correct thing? Anyone care to bite the bullet?