But it’s harder for me to think of a principle that would protect a relatively autonomous society of relatively baseline humans from being optimized out of existence, without extending the same conservatism to other beings, and without being the kind of special pleading that doesn’t hold up to scrutiny.
If it’s possible for humans to consent to various optimizations to them, or to deny consent, that seems like an important difference. Of course, consent is a much weaker notion when you’re talking about superhumanly persuasive AIs that can extract consent for ~anything, from any being that can give consent at all, so the (I think correct) constraint that superintelligences should get consent before transforming me or my society doesn’t change the outcome at all.