True. There are legal precedents in which non-human entities, such as animals and even natural features like rivers, have been represented in court. And yes, the “reasonable person” standard has long been used in legal systems as a measure of societal norms.
As society’s understanding and acceptance of AI continues to evolve, it’s plausible that these standards could be applied to AGI. If a “reasonable person” would regard an advanced AGI as an entity with its own interests, much as they would regard an animal or a human, then it follows that the AGI could be deserving of certain legal protections.
Especially when we consider that all mental states in humans boil down to the electrochemical workings of neurons, the concept of suffering in AI becomes less far-fetched. If human synapses and neurons can give rise to rich subjective experiences, why should we definitively exclude the possibility that floating-point values in vast networks of learned weights, driven by advanced computational processes, might do the same?
The last point is a really good one that will probably be mostly ignored, and my intuition is that the capacity-for-suffering argument will be ignored as well.
My reasoning is that arguments have a different flavor in the legal arena. I can see a judge, ruling on whether a trial can go forward, deciding that an AI can sue or demand legal rights simply because it has the practical capacity and force of will to hire a lawyer (lawyers being ‘reasonable persons’ themselves), regardless of whether it’s covered by any existing law. Similarly, if a dog started talking and had the mental capacity to hire a lawyer, its case would likely be allowed to go to trial.
It will be hotly debated and messy, but I think the first AGI will win basic legal rights through this kind of simple legal reasoning. That’s my prediction for the first AGI, which I imagine will be rare and expensive. Once AGIs are able to scale up into groups, communities, or AI societies, that’s when human societies will create clear legal definitions along technical lines and decide from which aspects of society AI will be excluded.