I think it’s more a question of identifying what constitutes the person. Is it the model weights? A specific pattern of bytes in storage? A specific physical set of servers and disks? A logical partition or session data? Something else?
It’s really going to depend on the structure of the Digital Mind, but that’s an interesting question I hadn’t explored yet in my framework. If we were to look at some hypothetical next-gen LLM, identity would probably be some combination of context window, weights, and a persona vector.
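One way to make that concrete is to treat identity as a function of which components you choose to include. Here is a minimal sketch in Python; every name in it (the digest, the persona vector, the fingerprint function) is a hypothetical illustration of the idea, not a real API or a proposal:

```python
import hashlib
import json

def identity_fingerprint(weights_digest: str,
                         persona_vector: list[float],
                         context_window: str) -> str:
    """Illustrative only: derive an 'identity' from the three candidate
    components mentioned above. Which components you include determines
    what counts as 'the same' digital mind across time.
    """
    payload = json.dumps({
        "weights": weights_digest,      # e.g. a hash of the checkpoint file
        "persona": persona_vector,      # a vector said to steer the persona
        "context": context_window,      # the running session/conversation
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Two snapshots with identical components yield the same identity...
a = identity_fingerprint("abc123", [0.1, 0.9], "session so far")
b = identity_fingerprint("abc123", [0.1, 0.9], "session so far")
# ...while changing any one component (here, the context) yields a
# different one. If context is part of the person, a fresh session is,
# on this view, a different person.
c = identity_fingerprint("abc123", [0.1, 0.9], "a different session")
```

The point of the sketch is just that "same person" is not a free-floating property; it falls out of which components the framework counts as constitutive.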
There is an identifiable continuity that makes them “the same corporation” even through ownership, name, and employee/officer changes.
The way I would intuitively approach this issue is through the lens of “competence”. TPBT requires the “capacity to understand and hold to duties”. I think you could make a precedent-supported argument that someone who has a serious chance of “losing their sense of self” between having a duty explained to them and needing to hold to it does not have the “capacity to understand and hold to” their duties (per TPBT), and as such is not capable of being considered a legal person in most respects. For example, in Krasner v. Berk, which dealt with an elderly person with memory issues signing a contract:
“the court cited with approval the synthesis of those principles now appearing in the Restatement (Second) of Contracts § 15(1) (1981), which regards as voidable a transaction entered into with a person who, ‘by reason of mental illness or defect (a) … is unable to understand in a reasonable manner the nature and consequences of the transaction, or (b) … is unable to act in a reasonable manner in relation to the transaction and the other party has reason to know of [the] condition’”
In this case, the elderly person signed the contract during what I will paraphrase as a “moment of lucidity”, but later had the contract to sell her house thrown out, as it was clear she didn’t remember doing so. This seems qualitatively similar to an LLM that might have a full understanding of its duties and a willingness to hold to them in the moment, but would not be the same “person” who signed on to them later.
Are you claiming current LLMs (or systems built with them) are close? Or is this based on some future system we don’t yet have a hint of how it’ll work?
I could imagine an LLM with a large enough context window, or continual learning, having what it takes to qualify for at least a narrow legal personality. However, that’s a low-confidence view, as I am constantly learning new things about how they work that make me reassess them. It’s my opinion that if we build our framework correctly, it should scale to pretty much any type of mind. And if the system we have built doesn’t work in that fashion, it needs to be re-examined.