Specifically, while in the preferred world the huge population is glad to have been born, you’re still left with a horribly suffering population.
This conclusion seems absolutely fine to me. The above-h0 population has positive value that is greater than the negative value of the horribly suffering population. If someone’s intuition is against that, I suppose it’s a situation similar to torture vs. dust specks: failure to accept that a very bad thing can be compensated by a lot of small good things. I know that, purely selfishly, I would prefer a small improvement with high probability over something terrible with sufficiently tiny probability. Scaling that to a population, we go from probabilities to quantities.
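To make the aggregation concrete, here is a toy sketch with entirely made-up numbers (`small_good`, `n_people`, and `terrible` are all hypothetical, not values from the discussion) showing how many small positive welfare values can outweigh one very large negative one under straightforward summation:

```python
# Toy illustration of the aggregation claim: hypothetical numbers only.
small_good = 0.01          # tiny positive welfare per person in the above-h0 population (assumed)
n_people = 10_000_000      # size of that population (assumed)
terrible = -50_000.0       # total welfare of the horribly suffering population (assumed)

# Total-utilitarian sum: the huge population's aggregate value dominates.
total = n_people * small_good + terrible
print(total)  # → 50000.0 (positive despite the terrible component)
```

The structure mirrors torture vs. dust specks: the sign of the total depends only on whether the aggregate of small goods exceeds the magnitude of the bad, not on how bad the worst component is in isolation.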
With fixed, limited energy, killing-and-replacing-by-an-equivalent is already going to be a slight negative: you’ve wasted energy to accomplish an otherwise morally neutral act. It’s not clear to me that it needs to be more negative than that (maybe).
I strongly disagree (it is not morally neutral at all) but I'm not sure how to convince you if you don’t already have this intuition.
Oh sure—agreed on both counts. If you’re fine with the very repugnant conclusion after raising the bar on h a little, then it’s no real problem. Similar to dust specks, as you say.
On killing-and-replacement I meant it’s morally neutral in standard total utilitarianism’s terms.
I had been thinking that this wouldn’t be an issue in practice, since there’d be an energy opportunity cost… but of course this isn’t true in general: there’d be scenarios where a kill-and-replace action saved energy. Something like DNT would be helpful in such cases.