True. There are some legal precedents where non-human entities, like animals and even natural features like rivers, have been represented in court. And yes, the "reasonable person" standard has been used frequently in legal systems as a measure of societal norms.
As society's understanding and acceptance of AI continues to evolve, it's plausible that these standards could be applied to AGI. If a "reasonable person" would regard an advanced AGI as an entity with its own interests—much as they would regard an animal or a human—then it follows that the AGI could be deserving of certain legal protections.
Especially when we consider that all mental states in humans boil down to the electrochemical workings of neurons, the concept of suffering in AI becomes less far-fetched. If human synapses and neurons can give rise to rich subjective experiences, why should we definitively exclude the possibility that floating-point values shaped by vast training data and advanced computational processes might do the same?
True. Your perspective underlines the complexity of the matter at hand. Advocating for AI rights and freedoms necessitates a re-imagining of our current conception of "rights," which has largely been developed with human beings in mind.
Though, I'd also enjoy a discussion of how any specific right COULD be said to apply to a distributed set of neurons and synapses spread across a brain inside a single human skull. Any complex intelligence could be described as "distributed" in one way or another. But then, size doesn't matter, does it?