Humans have a very questionable relationship with other species. We have not willingly spared them. We have exterminated or enslaved other species to the extent of our limited power for thousands of years, driving several to extinction, reducing others entirely to thrall species genetically warped to serve our needs, and only “sparing” the ones we either were not in direct competition with over any resource of worth, or lacked the ability to erase completely. Only recently have we actually begun caring about animal preservation for its own sake; even so, we are still fairly awful at it, we still routinely destroy species purely as an accidental side effect of our activities, and we are impacting yet others in unknown ways, even when doing so may come back to severely harm us as a second-order effect (see insects). And if you weigh by sheer biomass, the vast, vast majority of non-human animal lives on Earth today are the most miserable they’ve ever been. If you weigh by individual count, that’s still true of at least mammals and birds, though overall insects outnumber everything else.
So, no, that doesn’t fill me with confidence, and I think all such arguments are fuelled by hopium.
Arguments from cost are why I expect both that the future of humanity has a moderate chance of being left non-extinct, and that it only gets a trivial portion of the reachable universe (which is strong permanent disempowerment without extinction). This is distinct from any other ills that superintelligence would be in a position to visit upon the future of humanity; those serve no purpose and save no costs, so I don’t think a cruel and unusual state of existence is at all likely: things like lack of autonomy, denial of access to immortality or uploading, absence of minimal governance to prevent self-destruction, or withholding of the tools for uplifting individuals towards superintelligence (within the means of the relatively modest resources allocated to them).
Most animal species moving towards extinction recently (now that preservation is a salient concern) are inconveniently costly to preserve, and animal suffering from things like factory farming is a side effect of instrumentally useful ways of getting something valuable out of those animals. Humanity isn’t going to be useful to a superintelligence, so there won’t be such unfortunate side effects from instrumental uses of humanity. And it won’t be costly to leave the future of humanity non-extinct, so if AIs retain enough human-like sensibilities from their primordial LLM training, or if early AGI alignment efforts are minimally successful, it’s plausible that this is what happens. But it would be very costly to let humanity retain the potential to wield the resources of the reachable universe, hence strong permanent disempowerment.