AI being committed to animal rights is a good thing for humans because the latent variables that would result in a human caring about animals are likely correlated with whatever would result in an ASI caring about humans.
This extends in particular to “AI caring about preserving animals’ ability to keep doing their thing in their natural habitats, modulo some kind of welfare interventions.” In some sense it’s hard for me not to want to (given omnipotence) optimize wildlife out of existence. But it’s harder for me to think of a principle that would protect a relatively autonomous society of relatively baseline humans from being optimized out of existence, without extending the same conservatism to other beings, and without being the kind of special pleading that doesn’t hold up to scrutiny.
If it's possible for humans to consent to various optimizations, or to deny consent, that seems like an important difference. Of course, consent is a much weaker notion when you're talking about superhumanly persuasive AIs that can extract consent for ~anything, from any being capable of giving consent at all, so the (I think correct) constraint that superintelligences should get consent before transforming me or my society doesn't actually constrain the outcome.
That would lock us away from digital immortality forever. (Edit: Well, not necessarily. But I would be worried about that.)
I wouldn't pass up on digital immortality, but personal survival matters less to me than collective survival. Even from a purely narcissistic standpoint, a human living after another 1,000 years of cultural change would have at least as much in common with me as a digital immortal would 1,000 years from now, even if the latter has continuity of consciousness with my present self.
I think it's plausible that there are some variables describing your essential computational properties and the way you self-actualize that aren't shared by anyone else.
(Also, consciousness is just a pattern being processed, and it's unclear whether continuity of consciousness requires causal continuity. Imagine a robot that gets restored from a one-second-old backup. The restored pattern has no causal continuity with its self from a moment ago, yet it seems more intuitive to see this as a one-second memory loss rather than as death.)
This makes sense for a non-biological superintelligence—human rights as a subset of animal rights!