Honestly, I’m not even sure we can call any of this a calculation, given the uncertainty. It just seems like a bunch of random guesswork. The main thing I’m learning from all this is how uncertain I am, and how skeptical I am of anyone who claims to be more certain.
I think it shouldn’t be hard to see how a superintelligent AI could cure mortality. For example, it could quickly cure all diseases and biological aging, and dramatically reduce the incidence of accidents. Then we’d have lifespans of like 10,000 years, and that’s 10,000 years for the superintelligent AI to become even more superintelligent and figure something out.
I agree that everyone dies at some point, but if that happens in a trillion years, presumably we’ll at least have figured out how to minimize the tragedy and suffering of death, aside from the nonexistence itself.
I agree that accounting for suffering could possibly make a difference, but that sounds harder than just estimating deaths and I’m not sure how to do it. I’m pretty sure it will shift me further against a pause though. A pause will create more business-as-usual suffering by delaying AGI, but may reduce the chances of doom. I don’t expect doom will involve all that much suffering compared to a few decades of business-as-usual suffering, unless we end up in a bad-but-alive state, which I really doubt.
For example, it could quickly cure all diseases and biological aging, and dramatically reduce the incidence of accidents.
That’s mostly just life extension. There would still be plenty of potential for death, and I’m not sure whether e.g. stopping aging would also save your brain from all forms of decay. Besides, that kind of knowledge takes experimentation; even an ASI can’t possibly work everything out purely from first principles. And the requirement that human experimentation be ethical (which hopefully an aligned ASI would care about, otherwise we’re well and truly screwed) is a big bottleneck in finding such things out. It would at least slow the discovery a bit.
I agree that accounting for suffering could possibly make a difference, but that sounds harder than just estimating deaths and I’m not sure how to do it. I’m pretty sure it will shift me further against a pause though.
I don’t see why that would be the case. I think you’re too focused on an ASI singleton fooming and destroying everyone overnight as your doom scenario. A more likely doom scenario is: AGI gets invented. Via ordinary economic incentives, it slowly prices all humans out of labour, leading to widespread misery only partially mitigated by measures such as UBI, if those are passed at all. Power and control get centralised enormously in the hands of those who own the AIs (AI CEOs and such). The economy gets automated, and eventually more and more executive decisions are delegated to ever smarter AGIs. At some point this completely spins out of control: the AIs aren’t well aligned, so they start e.g. causing more and more environmental degradation, building their own defences, and so on and so forth. Then humanity mostly ends, either because the environment no longer supports life or in a last desperate fight to regain control. A few leftovers may survive (the descendants of the original AI owners), completely disempowered within protected environments, if they managed to align the AIs at least that much.
What would you rate such a future at? Lots of deaths, not necessarily complete extinction, but also lots of suffering along the way. And I would honestly say this is my most likely bad outcome right now.
Honestly, the more I engage with this thread, the less certain I become that any of this conversation is productive. Yeah, that’s one way the future could go. But it feels less like discussing whether a potential drug will be safe or not, and more like discussing how many different types of angels there will turn out to be in heaven. There’s just so little information going into this discussion that maybe the conclusion from all this is that I’m just unsure.