For example, it could quickly cure all diseases and biological aging, and dramatically reduce the incidence of accidents.
That’s mostly just life extension. There would still be plenty of potential for death, and I’m not sure whether e.g. stopping aging would also save your brain from all forms of decay. Besides, that kind of knowledge takes experimentation; even an ASI can’t possibly work everything out purely from first principles. And keeping human experimentation ethical (which hopefully an aligned ASI would care about, otherwise we’re well and truly screwed) is a big bottleneck in finding such things out. At the very least, it would slow the discoveries down.
I agree that accounting for suffering could make a difference, but that sounds harder than just estimating deaths, and I’m not sure how to do it. I’m pretty sure it will shift me further against a pause, though.
I don’t see why that would be the case. I think you’re too focused on an ASI singleton fooming and destroying everyone overnight as your doom scenario. A more likely doom scenario is: AGI gets invented. Through ordinary economic incentives, it slowly prices all humans out of labour, leading to widespread misery only partially mitigated by measures such as UBI, if those are passed at all. Power and control get centralised enormously in the hands of those who own the AIs (AI CEOs and such). The economy gets automated, and eventually more and more executive decisions are delegated to ever smarter AGIs. At some point this spins completely out of control: the AIs aren’t well aligned, so they start e.g. causing ever more environmental degradation, building their own defences, and so on and so forth. Humanity then mostly ends, either because the environment no longer supports life or in a last, desperate fight to regain control. A few leftovers (the descendants of the original AI owners) may survive within protected environments, completely disempowered, if they managed to align the AIs at least that much.
What would you rate such a future at? Lots of deaths, not necessarily complete extinction, but also lots of suffering along the way. And I would honestly say this is my most likely bad outcome right now.
Honestly, the more I engage with this thread, the less certain I become that any of this conversation is productive. Yeah, that’s one way the future could go. It feels less like discussing whether a potential drug will be safe and more like discussing how many different types of angels there will turn out to be in heaven. There’s just so little information going into this discussion that maybe the conclusion from all of it is that I’m simply unsure.