There were plenty of assumptions here to simplify things, including: I assumed the population won’t increase, that the number of deaths per year will be relatively constant until AGI
So if you’re still biting the bullet under these conditions, then I don’t really get why—unless you’re a full-on negative utilitarian, but then the post could just have said “I think I’m e/acc because that’s the fastest way of ending this whole mess”. :P
I don’t want anyone to think I’m trying to publish an objectively correct AI pause calculator. I’m just trying to express my own values on paper and nudge others to do the same.
I mean, that’s fine and all, but if your values truly imply you prefer ending the world now rather than later, when these are the two options in front of you, then that does some pretty heavy lifting. Because without this view, I don’t think your other premises would lead to the same conclusion.
More people experiencing some horrible apocalypse and having their lives cut short sounds bad to me.
If we assume roughly constant population size (or even moderate ongoing growth) and your assumption holds that a pause reduces p(doom) from 10% to 5%, then far fewer people will die in a fiery apocalypse. So however we turn it, I find it hard to see how your conclusion follows from your napkin math, unless I’m missing something. (edit: I notice I jumped back from my hypothetical scenario to the AGI pause scenario; a bit premature here, but eventually I’d still like to make this transition, because again, your fiery apocalypse claim above would suggest you should rather be in favor of a pause, and not against it)
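To put rough numbers on that (the 8 billion population figure is a placeholder assumption; only the 10% and 5% come from the thread):

```python
# Expected apocalypse deaths under a roughly constant population.
# The 8 billion population is a placeholder; the p(doom) values are the ones discussed above.
POPULATION = 8_000_000_000

expected_doom_deaths_no_pause = 0.10 * POPULATION    # p(doom) = 10% without a pause
expected_doom_deaths_with_pause = 0.05 * POPULATION  # p(doom) = 5% with a pause

print(f"{expected_doom_deaths_no_pause:,.0f}")    # 800,000,000 expected apocalypse deaths
print(f"{expected_doom_deaths_with_pause:,.0f}")  # 400,000,000 expected apocalypse deaths
```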
(I’d also argue that even if the math checks out somehow, the numbers you end up with are pretty close, while all the input values (like the 40-year timeline) surely have large error bars, so even small deviations might lead to the opposite outcome. But I notice this was already discussed in another comment thread.)
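To make the error-bar point concrete, here’s a rough sensitivity sketch. It assumes the napkin math trades deaths during the extra wait for AGI (deaths per year times extra years) against apocalypse deaths avoided (the drop in p(doom) times population); the population and deaths-per-year figures are placeholders, not values from the post:

```python
# Sensitivity check on the napkin math, under an assumed framing:
#   cost of a pause    = deaths_per_year * extra_years_until_AGI
#   benefit of a pause = (p(doom) without pause - p(doom) with pause) * population
# The concrete numbers below are illustrative placeholders.
POPULATION = 8_000_000_000
DEATHS_PER_YEAR = 60_000_000  # "relatively constant until AGI", per the post's assumption

def net_lives_saved_by_pause(extra_years, p_doom_drop):
    """Positive: the pause saves lives in expectation. Negative: it costs lives."""
    cost = DEATHS_PER_YEAR * extra_years
    benefit = p_doom_drop * POPULATION
    return benefit - cost

# Small changes in the uncertain inputs flip the sign of the answer.
for extra_years in (5, 10, 20):
    for p_doom_drop in (0.01, 0.05, 0.10):
        net = net_lives_saved_by_pause(extra_years, p_doom_drop)
        print(f"pause adds {extra_years:>2} years, p(doom) drops {p_doom_drop:.0%}: "
              f"{net / 1e6:+,.0f}M lives in expectation")
```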
Oh, yeah, I got confused. I originally wrote the post taking into account a growing population, but removed that later to make it a bit simpler. Taking into account a growing population with an extra 1 or 2 billion people, everyone dying later is worse because it’s more people dying. (Unless it’s much later, in which case my mild preference for humanity continuing kicks in.) With equal populations, if everyone dies in 100 or 200 years it doesn’t really matter to me, besides a mild preference for humanity continuing. But it’s the same amount of suffering and number of lives cut short because of the AI apocalypse.
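For illustration, with made-up population figures (8 billion now versus roughly 10 billion later; not numbers from the post), the two cases look like this:

```python
# Placeholder population figures, for illustration only.
POP_NOW = 8_000_000_000
POP_LATER_IF_GROWING = 10_000_000_000  # an extra ~2 billion people by the later date

# With a growing population, doom later cuts short more lives than doom now.
print(f"{POP_LATER_IF_GROWING - POP_NOW:,}")  # 2,000,000,000 additional lives cut short

# With a constant population the count is the same either way, so only the mild
# preference for humanity continuing distinguishes doom now from doom later.
```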
I think that I’d do this math by net QALYs and not net deaths. My guess is doing it that way may actually change your result.
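For example, a QALY-weighted version of the same napkin math might look like the sketch below. The weights (10 QALYs lost per ordinary death, 40 per apocalypse death) and the other figures are made up purely to show that the weighting can move the answer, not to claim that it does:

```python
# Redoing the comparison in QALYs instead of deaths.
# Every number here is an illustrative placeholder, not a claim from the thread.
POPULATION = 8_000_000_000
DEATHS_PER_YEAR = 60_000_000

QALYS_LOST_PER_ORDINARY_DEATH = 10  # ordinary deaths skew old, fewer years remaining
QALYS_LOST_PER_DOOM_DEATH = 40      # an apocalypse hits the whole age distribution

def net_deaths_of_pause(extra_years, p_doom_drop):
    """Net deaths averted by a pause (positive means the pause looks better)."""
    return p_doom_drop * POPULATION - DEATHS_PER_YEAR * extra_years

def net_qalys_of_pause(extra_years, p_doom_drop):
    """Net QALYs saved by a pause (positive means the pause looks better)."""
    qalys_lost_to_delay = DEATHS_PER_YEAR * extra_years * QALYS_LOST_PER_ORDINARY_DEATH
    qalys_saved_from_doom = p_doom_drop * POPULATION * QALYS_LOST_PER_DOOM_DEATH
    return qalys_saved_from_doom - qalys_lost_to_delay

# With these placeholder weights, the same scenario flips sign between the two metrics:
print(net_deaths_of_pause(10, 0.05) / 1e6)  # -200.0 (million deaths: pause looks worse)
print(net_qalys_of_pause(10, 0.05) / 1e6)   # 10000.0 (million QALYs: pause looks better)
```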
I’m not trying to avoid dying; I’m trying to steer toward living.