Oh, yeah, I got confused. I originally wrote the post taking a growing population into account, but removed that later to keep it simpler. With a growing population, an extra 1 or 2 billion people means everyone dying later is worse, because more people die. (Unless it’s much later, in which case my mild preference for humanity continuing kicks in.) With equal populations, whether everyone dies in 100 or 200 years doesn’t really matter to me, beyond that mild preference for humanity continuing: either way it’s the same amount of suffering and the same number of lives cut short by the AI apocalypse.
I think I’d do this math in net QALYs rather than net deaths. My guess is that doing it that way may actually change your result.
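To illustrate why the metric can matter, here is a minimal sketch of QALY-based accounting versus death-counting. All numbers in it (population sizes, average ages, the 80-year life expectancy, a quality weight of 1) are illustrative assumptions, not figures from this thread:

```python
# Sketch: comparing "apocalypse now" vs "apocalypse later" under two
# metrics. Every number below is an assumed placeholder.

LIFE_EXPECTANCY = 80  # assumed average healthy lifespan

def qalys_lost(population, avg_age_at_death):
    """QALYs lost if everyone dies at once: remaining healthy years per person."""
    remaining = max(LIFE_EXPECTANCY - avg_age_at_death, 0)
    return population * remaining

# Scenario A: everyone dies now; 8 billion people, average age ~30.
deaths_now = 8e9
qalys_now = qalys_lost(8e9, 30)      # 8e9 * 50 = 4.0e11

# Scenario B: everyone dies in 100 years; population grew to 10 billion,
# and an assumed older age structure gives average age ~40.
deaths_later = 10e9
qalys_later = qalys_lost(10e9, 40)   # 10e9 * 40 = 4.0e11

print(f"net deaths: now={deaths_now:.1e}, later={deaths_later:.1e}")
print(f"net QALYs:  now={qalys_now:.1e},  later={qalys_later:.1e}")
# Counting deaths, "later" looks strictly worse (10e9 > 8e9). Counting
# QALYs, the scenarios can come out equal or even reversed, depending
# entirely on the assumed age structure.
```

With these made-up inputs, death-counting and QALY-counting disagree, which is the sense in which switching metrics may change the result.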
I’m not trying to avoid dying; I’m trying to steer toward living.