I anticipate this will lead to some interesting phrasing choices around the multiple meanings of “conception” as the discussions of what, how, and whether AIs ‘really’ think continue to evolve.
The question around ‘really’ thinking is less relevant to personhood in the law than you might think.
Per the “Artificially Intelligent Persons” paper I cited:
conditions relating to autonomy, intelligence, and awareness are almost absent from the courts’ consideration of legal personhood for artificial entities. The only exception being autonomy, which is considered as a condition for legal personhood in 2% of all cases.
There are some cases where autonomy is a factor, but it’s a vanishingly small minority, and intelligence has so far never shown up. Just because it hasn’t been a consideration in the past doesn’t mean it won’t be in the future, of course, but as of right now, if you’re going to look at any inherent quality of a model as a qualifier for personhood, autonomy seems to matter more than intelligence.
It’s also more objectively measurable, which courts like. We can always debate whether a model “really” understands what it is doing, but it’s obvious whether or not a model has “really” taken an action.
Thanks! This is an interesting angle I hadn’t thought much about.