Even if the question of AI moral status were somehow solved definitively, what about all of the follow-up questions? If current or future AIs are moral patients, what does that concretely imply about what we owe them in terms of rights and welfare considerations? How should votes be allocated among AI copies?
These questions are entangled with the concept of “legal personhood”, which also covers issues such as tort liability and the ability to enter contracts and to sue or be sued. While the question of “legal personhood” is separate from that of “moral status”, anyone who wants a being with moral status protected from unethical treatment will at some point find themselves dealing with the question of legal personhood.
There is a still-niche but growing field of legal scholarship dealing with personhood for digital intelligences. This issue is IMO imminent, as there are already laws on the books in two states (Idaho and Utah) precluding “artificial intelligences” from being granted legal personhood. Much like capabilities research is not waiting around for safety/model welfare research to catch up, the legislative system is not waiting for legal scholarship.
There is no objective test for legal personhood under the law today. Cases around corporate personhood, the personhood of fetuses, and the like have generally been decided on such narrow grounds that they fail to directly address how it is determined that an entity is or isn’t a “person”. As a result, US law has no clearly supported way to examine a new form of intelligence and determine whether it is a person, or to what degree it is endowed with personhood.
That said, legal personhood is not tied at the precedential level to qualities like consciousness or intelligence. More often it operates from a “bundle” framework of rights and duties: once an agent is capable of exercising a certain right and being bound by corresponding duties, it gains a certain amount of “personhood”. Even this rather popular “bundle” theory of personhood, however, remains more academic than jurisprudential at this point.
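To make the “bundle” framing concrete, here is a toy sketch of the idea in code. This is purely my own illustration, not a model drawn from any of the scholarship discussed here; the incident names and the degree calculation are made up for the example.

```python
# Toy sketch of the "bundle" view of personhood: not a binary status, but
# the fraction of a reference bundle of legal incidents (rights and duties)
# that an entity can exercise or be bound by. All names here are invented
# for illustration only.

# A simplified reference bundle of legal incidents.
FULL_BUNDLE = {
    "enter contracts",   # right
    "sue",               # right
    "be sued",           # exposure to suit
    "own property",      # right
    "tort liability",    # duty
    "vote",              # right
}

def personhood_degree(held: set) -> float:
    """Fraction of the reference bundle this entity holds, in [0, 1]."""
    return len(held & FULL_BUNDLE) / len(FULL_BUNDLE)

# A corporation holds most incidents in the bundle, but cannot vote.
corporation = {"enter contracts", "sue", "be sued", "own property", "tort liability"}
print(f"{personhood_degree(corporation):.2f}")  # 5/6 -> 0.83
```

The point of the sketch is simply that under a bundle theory, “how much of a person is this entity?” becomes a question about which incidents it holds, rather than a yes/no verdict about its nature.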
Despite the lack of objective testing mechanisms, I believe there is value in examining legal history and precedent when it comes to avoiding horrific moral atrocities in our near future. And there are concrete actions, in both the short and long term, that such history and precedent can inform. We may be able to “Muddle Through” the question of “moral status” by answering the pragmatic question of “legal personhood” with a sufficiently flexible and well-thought-out framework. After all, it wasn’t any moral intuition that undid the damage of Dred Scott; it was constitutional change brought about in response to his court case.
Some of the more recent publications on the topic:
“AI Personhood on a Sliding Scale” by FSU Law Professor Nadia Batenka, which I wrote a summary/critique of here.
“Degrees of AI Personhood” by Diana Mocanu, a postdoctoral researcher at the University of Helsinki. I am currently chatting with her to clear some things up and will drop a similar summary when I’m done.
“The Legal Personhood of Artificial Intelligence” by Visa A.J. Kurki, an Associate Professor at the University of Helsinki (I guess Finland is ahead of the curve here).
“The Ethics and Challenges of Legal Personhood for AI” by ex-NY judge Katherine Forrest.
The first two (by Batenka and Mocanu) are notable for actually proposing frameworks for how to treat the issue of legal personhood, which is ultimately what would stand between any digital intelligence and unethical treatment.