AI being committed to animal rights is a good thing for humans because the latent variables that would result in a human caring about animals are likely correlated with whatever would result in an ASI caring about humans.
This extends in particular to “AI caring about preserving animals’ ability to keep doing their thing in their natural habitats, modulo some kind of welfare interventions.” In some sense, given omnipotence, it’s hard for me not to want to optimize wildlife out of existence. But it’s harder for me to think of a principle that would protect a relatively autonomous society of relatively baseline humans from being optimized out of existence, without extending the same conservatism to other beings, and without being the kind of special pleading that doesn’t hold up to scrutiny.
But it’s harder for me to think of a principle that would protect a relatively autonomous society of relatively baseline humans from being optimized out of existence, without extending the same conservatism to other beings, and without being the kind of special pleading that doesn’t hold up to scrutiny.
If it’s possible for humans to consent to various optimizations to them, or to deny consent, that seems like an important difference. Of course, consent is a much weaker notion when you’re talking about superhumanly persuasive AIs that can extract consent for ~anything from any being capable of giving consent at all, so the (I think correct) constraint that superintelligences should get consent before transforming me or my society doesn’t change the outcome at all.
This makes sense for a non-biological superintelligence—human rights as a subset of animal rights!
That would lock us away from digital immortality forever. (Edit: Well, not necessarily. But I would be worried about that.)