I’m one of the people who’ve been asking, and it’s because I don’t think that current or predictable-future LLMs will be good candidates for legal personhood.
Until there’s a legible thread of continuity for a distinct unit, it’s not useful to assign rights and responsibilities to a cloud of things that can branch and disappear at will with no repercussions.
Instead, LLMs (and future LLM-like AI operations) will be legally tied to human or corporate legal identity. A human or a corporation can delegate some behaviors to LLMs, but the responsibility remains with the controller, not the executor.
On the repercussions issue I agree wholeheartedly; your point is very similar to the issue I outlined in The Enforcement Gap.
I also agree with the ‘legible thread of continuity for a distinct unit’. Corporations have EINs/filing histories, humans have a single body.
And I agree that current LLMs certainly don’t have what it takes to qualify for any sort of legal personhood, though I’m less sure about future LLMs. If we could get context windows large enough and crack the problems that analogize to competence issues (hallucinations, for example, or a model being prompt-engineered into incoherence), it’s not clear to me what LLMs would be lacking at that point. What would you see as the issue then?
The issue would remain that there’s no legible (legally clearly demarcated over time) entity to call a person. A model and its weights have no personality or goals. A context (plus memory, fine-tuning, RAG-like reasoning data, etc.) is perhaps identifiable, but it’s so easily forked and pruned that it isn’t persistent enough to work that way. Corporations face a pretty big hurdle to legal recognition (filed paperwork with clear human responsibility behind it). Humans are rate-limited in creation. No piece of current LLM technology is difficult to create on demand.
It’s this ease of mass creation that makes legible identity problematic. For issues outside of legal independence (activities no human is responsible for, rights no human is delegating), this is easy: assigning database identities in a company’s (or blockchain’s) system is already done today. But there are no legal rights or responsibilities associated with those identities, just identification for various operational purposes (and a legal connection back to a human or corporate entity when needed).
I think for this discussion it’s important to distinguish between “person” and “entity”. My work on legal personhood for digital minds is trying to build a framework that can look at any entity and determine its personhood/legal personality. What I’m struggling with is defining what the “entity” would be for some hypothetical next gen LLM.
The idea of some sort of persistent filing system, maybe blockchain enabled, which would be associated with a particular LLM persona vector, context window, model, etc. is an interesting one. Kind of analogous to a corporate filing history, or maybe a social security number for a human.
I could imagine a world where a next gen LLM is deployed (just the model and weights) and then provided with a given context and persona, and isolated to a particular compute cluster which does nothing but run that LLM. This is then assigned that database/blockchain identifier you mentioned.
In that scenario I feel comfortable saying that we can define the discrete “entity” in play here. Even if it was copied elsewhere, it wouldn’t have the same database/blockchain identifier.
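The copy issue in that scenario can be made concrete with a short sketch (everything here is a hypothetical illustration, not any real registry or API): the content of a deployment (model weights, persona, compute cluster) fingerprints identically for a perfect copy, so the only thing distinguishing “the entity” from its clone is the identifier issued at filing time.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class RegisteredInstance:
    """Hypothetical registry record tying one LLM deployment to an identifier."""
    registry_id: str      # identifier issued at filing time (analogous to an EIN)
    model_hash: str       # hash of the model weights
    persona_hash: str     # hash of the persona vector / context configuration
    compute_cluster: str  # the dedicated cluster running this instance

def fingerprint(record: RegisteredInstance) -> str:
    """Content fingerprint of everything EXCEPT the issued registry_id."""
    payload = json.dumps({
        "model": record.model_hash,
        "persona": record.persona_hash,
        "cluster": record.compute_cluster,
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Two deployments with identical weights, persona, and cluster config are
# fungible at the content level...
a = RegisteredInstance("REG-001", "weights-v1", "persona-v1", "cluster-a")
b = RegisteredInstance("REG-002", "weights-v1", "persona-v1", "cluster-a")
assert fingerprint(a) == fingerprint(b)
# ...so only the registry_id, issued once per filing, tells them apart.
assert a.registry_id != b.registry_id
```

The design point matches the corporate analogy: nothing about the copyable artifact itself carries identity; continuity lives entirely in the filing record.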
Would you still see some sort of issue in that particular scenario?
Right. A prerequisite for personhood is legible entityhood. I don’t think current LLMs, or any visible trajectory from them, offer any good candidate for a separable, identifiable entity.
A cluster of compute that just happens to be currently dedicated to a block of code and data wouldn’t satisfy me, nor, I expect, would it satisfy a court.
The blockchain identifier is a candidate for a legible entity. It’s consistent over time, easy to identify, and while it’s easy to create, it’s not completely ephemeral and not copyable in a fungible way. It’s not, IMO, a candidate for personhood.