I agree with your general idea of not caring much about the abstract notion of future potential people, but still think your numbers are so approximate as to be useless, especially if you consider how small the margin you get is (a 200 million difference in a ~3.5 billion number, that’s about a 6% margin—tiny mistakes in estimation could push the outcome the other way).
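To make concrete how thin that margin is, here's a trivial back-of-the-envelope check (the 200 million and ~3.5 billion are your figures; the rest is just arithmetic, not a claim about the real numbers):

```python
# How thin is the margin? The inputs are taken from the estimate under discussion.
net_difference = 200e6   # net expected lives separating the two options
total_at_stake = 3.5e9   # rough scale of the overall estimate
margin = net_difference / total_at_stake
print(f"margin: {margin:.1%}")  # -> margin: 5.7%
```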
Problems I have with your model: you entirely discount suffering risks (by your own admission), which IMO cover a lot of possible bad AI futures, as well as all the suffering on the path to doom beyond the deaths themselves (ignoring that works for a fast-takeoff foom, but I don’t think that is the likely mode of doom). But also, are you really assuming that upon inventing AGI, death is immediately solved and everyone becomes immortal? That seems a huge stretch to me. In most scenarios other than “AGI fooms to ASI overnight, but it’s, like, aligned and good” there is a long-ish transition period before developments that outlandish.
I agree that the numbers are so approximate as to be relatively useless. I feel like the useful part of this exercise for me was really in seeing how uncertain I am about whether or not we should have an AI pause. Relatively small differences in my initial assumptions could sway the issue either way. It’s not as if the cautious answer is obviously to pause, which I assumed before. Right now I’m extremely weakly against.
Yes, I am assuming, mostly for the sake of simplicity, that superintelligent AGI cures mortality immediately. I don’t think it would be likely to take more than 10 years though, which is why I’m comfortable with that simplification. I’m also comfortable using deaths as a proxy for suffering because I don’t expect a situation where the two diverge, e.g. an infinite torture torment nexus scenario.
Even without divergence, a few decades of suffering could be enough to move such a close calculation. Nor am I so sure the infinite torment nexus scenario can be ruled out (by your metric, even just “the AI keeps human society alive but in a bad state and without giving anyone immortality” would count as one).
I also think the immortality expectation is wildly ungrounded. I can’t think of how even a superintelligent AI would cure mortality other than maybe uploads, which I doubt are possible. And anyway if all you count is deaths… everyone dies in the end, at some point. Be it the Sun going red giant or the heat death of the universe. So I’d say considering how good their lives have been until that point seems paramount.
Honestly, I’m not even sure we can call any of this a calculation, given the uncertainty. It just seems like a bunch of random guesswork. The main thing I’m learning from all this is how uncertain I am, and how skeptical of anyone who claims to be more certain.
I don’t think it should be hard to see how a superintelligent AI could cure mortality. For example, it could quickly cure all diseases and biological aging, and dramatically reduce the incidence of accidents. Then we have lifespans of like 10,000 years, and that’s 10,000 years for the superintelligent AI to become even more superintelligent and figure something out.
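To sanity-check that ballpark, here's a toy calculation; the accident rates in it are illustrative assumptions, not real statistics:

```python
# If disease and aging are gone and accidents are the only remaining cause of
# death, lifespan is roughly geometric in the annual accident-death risk, so
# expected lifespan is about 1 / (annual risk). Rates below are assumptions.
for annual_accident_risk in (1e-3, 5e-4, 1e-4):
    expected_lifespan_years = 1 / annual_accident_risk
    print(f"annual risk {annual_accident_risk:.0e} -> ~{expected_lifespan_years:,.0f} years")
# A ~1-in-10,000 yearly risk is what gets you into the ~10,000-year range.
```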
I agree that everyone dies at some point, but if that happens in a trillion years, presumably we’ll at least have figured out how to minimize the tragedy and suffering of death, aside from the nonexistence itself.
I agree that accounting for suffering could possibly make a difference, but that sounds harder than just estimating deaths and I’m not sure how to do it. I’m pretty sure it will shift me further against a pause though. A pause will create more business-as-usual suffering by delaying AGI, but will reduce the chances of doom (possibly). I don’t expect doom will involve all that much suffering compared to a few decades of business-as-usual suffering, unless we end up in a bad-but-alive state, which I really doubt.
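Just to make concrete what I mean, here's a minimal sketch of the deaths-plus-suffering comparison I'm gesturing at, with entirely made-up placeholder numbers (none of the probabilities, counts, or weights below are claims about the real values; the point is only how easily small changes flip the sign):

```python
# Deaths-equivalent badness: doom risk plus the business-as-usual deaths and
# suffering accrued while AGI (and any mortality cure) is delayed.
def expected_badness(p_doom, doom_deaths, delay_years, annual_deaths, suffering_weight):
    business_as_usual = delay_years * annual_deaths * (1 + suffering_weight)
    return p_doom * doom_deaths + business_as_usual

no_pause = expected_badness(p_doom=0.20, doom_deaths=8e9, delay_years=0,
                            annual_deaths=60e6, suffering_weight=0.5)
with_pause = expected_badness(p_doom=0.15, doom_deaths=8e9, delay_years=10,
                              annual_deaths=60e6, suffering_weight=0.5)
print(f"no pause:   {no_pause:.2e}")   # 1.60e+09
print(f"with pause: {with_pause:.2e}")  # 2.10e+09
# Nudge p_doom, the pause length, or the pause's risk reduction and the ordering reverses.
```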
For example, it could quickly cure all diseases and biological aging, and dramatically reduce the incidence of accidents.
That’s mostly just life extension. There would still be plenty of potential for death, and I’m not sure whether e.g. stopping aging would also save your brain from all forms of decay. Besides, that kind of knowledge takes experimentation; even an ASI can’t work everything out purely from first principles. And the need to keep human experimentation ethical (which hopefully an aligned ASI would care about, otherwise we’re well and truly screwed) is a big bottleneck in finding such things out. It would at least slow the discovery somewhat.
I agree that accounting for suffering could possibly make a difference, but that sounds harder than just estimating deaths and I’m not sure how to do it. I’m pretty sure it will shift me further against a pause though.
I don’t see why that would be the case. I think you’re too focused on an ASI singleton fooming and destroying everyone overnight as your doom scenario. A more likely doom scenario is: AGI gets invented. Via ordinary economic incentives, it slowly prices all humans out of labour, leading to widespread misery only partially mitigated by measures such as UBI, if those are passed at all. Power and control get centralised enormously in the hands of those who own the AIs (AI CEOs and such). The economy gets automated, and more and more executive decisions are delegated to ever smarter AGIs. At some point this spins completely out of control: the AIs aren’t well aligned, so they start e.g. causing more and more environmental degradation, building their own defences, and so on and so forth. Then humanity mostly ends, either because the environment no longer supports life or in a last desperate fight to regain control. A few leftovers (the descendants of the original AI owners) may survive within protected environments, completely disempowered, if they managed to align the AIs at least that much.
What would you rate such a future at? Lots of deaths, not necessarily complete extinction, but also lots of suffering on the road. And I would honestly say this is my most likely bad outcome right now.
Honestly, the more I engage with this thread, the less certain I become that any of this conversation is productive. Yeah, that’s one way the future could go. It feels less like discussing whether a potential drug will be safe or not, and more like discussing how many different types of angels there will turn out to be in heaven. There’s just so little information going into this discussion that maybe the conclusion from all this is that I am just unsure.