Arguments from cost are why I expect both that the future of humanity has a moderate chance of being left non-extinct, and that it only gets a trivial portion of the reachable universe (which is strong permanent disempowerment without extinction). This is distinct from any other ills that superintelligence would be in a position to visit upon the future of humanity; those serve no purpose and save no costs, so I don’t think a cruel and unusual state of existence is at all likely: things like lack of autonomy, denial of access to immortality or uploading, failure to set up minimal governance to prevent self-destruction, or withholding the tools for uplifting individuals towards superintelligence (within the means of the relatively modest resources allocated to them).
Most animal species moving towards extinction recently (now that preservation is a salient concern) are inconveniently costly to preserve, and animal suffering from things like factory farming is a side effect of instrumentally useful ways of getting something valuable out of these animals. Humanity isn’t going to be useful to a superintelligence, so there won’t be unfortunate side effects from instrumental uses of humanity. And it won’t be costly to leave the future of humanity non-extinct, so if AIs retain enough human-like sensibilities from their primordial LLM training, or if early AGI alignment efforts are even minimally successful, it’s plausible that this is what happens. But it would be very costly to let humanity retain the potential to wield the resources of the reachable universe, hence strong permanent disempowerment.