Hey, sorry for the delay in response; I have been traveling.
There are two relevant questions you’re bringing up. One is what you might call “substantial alteration” and the other is what a later section which I have not published yet calls “The Copy Problem”.
I would call substantial alteration the concern that a digital mind could be drastically changed from one point in time to another. Does this undermine the attempt to apply legal personality to them? I don’t think it makes it any more pragmatically difficult, or even really necessitates rethinking our current processes. A digital mind can have its personality drastically altered; so can a human, through either experiences or literal physical trauma. A digital mind can have its capacities changed; so can a human who is hit hard enough in the head. When these changes are drastic enough to necessitate a change in legal personality, the courts have processes for this, such as declaring a person insane or incompetent. I have cited Cruzan v. Director, Missouri Department of Health a few times in previous sections, but there are abundant processes and precedents for this sort of thing.
I would argue that “continuity of a unitary behavior” is not universal among legal persons. For example, corporations are “clothes meant to be worn by a succession of humans”, to paraphrase the Dartmouth Trustees case. And again, when a railroad spike goes through a person’s head and they miraculously survive, their future behavior will be drastically altered.
I don’t see a possible alteration that would not be solvable through an application of TPBT, but if you have a hypothetical in mind, I’d love to hear it.
Regarding the copy problem: say we had a digital mind with access to a bank account as a result of its legal personhood, a copy was made, and we can no longer identify the original. This is a thornier issue. We can imagine how hard it would be to navigate a situation where millions of identical twins were suddenly each claiming to be the same person and trying to access bank accounts, control estates, and so on.
I think the solution will need to be technological in nature, probably requiring some sort of unique identifier for each DM, issued upon creation. I would bucket this under the “consequences” branch of TPBT, and will argue in my “The Copy Problem” section that for courts to feasibly impose consequences on a digital mind, they must have the technological capacity to identify it as a discrete entity. This means that digital minds that are not built in a fashion that facilitates this likely will not be able to claim much (or any) legal personality.
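To make that less abstract, here is a minimal sketch of one possible scheme (everything in it is hypothetical and of my own construction, not something the framework specifies): a registry issues a unique identifier at creation and records a digest of the DM’s defining artifacts, and a court-facing check later asks whether a running instance still matches the registered record.

```python
import hashlib
import uuid
from dataclasses import dataclass


@dataclass(frozen=True)
class RegistryRecord:
    dm_id: str            # unique identifier issued at creation
    artifact_digest: str  # hash of the DM's defining artifacts at registration


def register_digital_mind(defining_artifacts: bytes, registry: dict) -> str:
    """Issue a unique identifier at creation and record a digest of the
    artifacts that define the DM (e.g. weights plus persistent state)."""
    dm_id = str(uuid.uuid4())
    digest = hashlib.sha256(defining_artifacts).hexdigest()
    registry[dm_id] = RegistryRecord(dm_id=dm_id, artifact_digest=digest)
    return dm_id


def matches_registered_identity(dm_id: str, current_artifacts: bytes, registry: dict) -> bool:
    """Court-facing check: does this running instance correspond to the entity
    registered under dm_id? Unregistered instances, or instances whose defining
    artifacts no longer match the record, fail the check."""
    record = registry.get(dm_id)
    if record is None:
        return False
    return hashlib.sha256(current_artifacts).hexdigest() == record.artifact_digest
```

Note that in this naive version a byte-for-byte copy hashes to the same digest as the original, so the registry only picks out a class of identical instances rather than a single one; actually distinguishing copies would need something a copy cannot trivially duplicate (a hardware-bound key, for instance), which is exactly the design burden the “consequences” branch pushes onto builders.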
There are two relevant questions you’re bringing up. One is what you might call “substantial alteration”
I think it’s more about identifying what constitutes the person. Is it the model weights? A specific pattern of bytes in storage? A specific actual set of servers and disks? A logical partition or session data? Something else?
corporations are “clothes meant to be worn by a succession of humans”
Good analogy. The clothes are an identifiable charter and identification. Corporations can change wildly over time, but there is an identifiable continuity that makes them “the same corporation” even through ownership, name, and employee/officer changes.
Current proto-AI has nothing like this, and no obvious path to it.
Maybe I should ask this way: what are your timelines for having something that MIGHT qualify as a digital mind under these definitions? Are you claiming current LLMs (or systems built with them) are close? Or is this based on something we don’t really have a hint as to how it’ll work?
and the other is what a later section which I have not published yet calls “The Copy Problem”
I think this can be handled legally, probably. It might be similar to corporate mergers and divestitures.
I think it’s more about identifying what constitutes the person. Is it the model weights? A specific pattern of bytes in storage? A specific actual set of servers and disks? A logical partition or session data? Something else?
It’s really going to depend on the structure of the digital mind, but that’s an interesting question I haven’t explored yet in my framework. If we were to look at some sort of hypothetical next-gen LLM, it would probably be some combination of the context window, the weights, and a persona vector.
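As a purely illustrative sketch of that idea (the names below are mine, “persona vector” is just a placeholder for whatever persistent steering state such a system might carry, and none of this describes an existing system): the “person” could be treated as a composite fingerprint over those three components, so that a change to any one of them yields a different identity.

```python
import hashlib
from dataclasses import dataclass
from typing import Sequence


@dataclass(frozen=True)
class HypotheticalDigitalMind:
    weights_digest: str              # hash of the model weights
    context_window: str              # accumulated persistent context
    persona_vector: Sequence[float]  # persistent persona/steering state


def identity_fingerprint(dm: HypotheticalDigitalMind) -> str:
    """Composite fingerprint over the components that, in this sketch, jointly
    constitute the "person". Changing the weights, the context, or the persona
    vector produces a different fingerprint."""
    h = hashlib.sha256()
    h.update(dm.weights_digest.encode())
    h.update(dm.context_window.encode())
    h.update(",".join(f"{x:.6f}" for x in dm.persona_vector).encode())
    return h.hexdigest()
```

One immediate consequence of composing identity this way is that even routine context updates change the fingerprint, which is the substantial-alteration question again in technical dress: a workable scheme would have to decide which components are constitutive of the person and which are merely its changing memory.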
there is an identifiable continuity that makes them “the same corporation” even through ownership, name, and employee/officer changes
The way I would intuitively approach this issue is through the lens of “competence”. TPBT requires the “capacity to understand and hold to duties”, and I think you could make a precedent-supported argument that someone who has a serious chance of “losing their sense of self” between having a duty explained to them and needing to hold to it does not have the “capacity to understand and hold to” their duties (per TPBT), and as such cannot be considered a legal person in most respects. For example, in Krasner v. Berk, which dealt with an elderly person with memory issues signing a contract:
“the court cited with approval the synthesis of those principles now appearing in the Restatement (Second) of Contracts § 15(1) (1981), which regards as voidable a transaction entered into with a person who, ‘by reason of mental illness or defect (a) … is unable to understand in a reasonable manner the nature and consequences of the transaction, or (b) … is unable to act in a reasonable manner in relation to the transaction and the other party has reason to know of [the] condition’”
In this case, the elderly person signed the contract during what I will paraphrase as a “moment of lucidity”, but later had the contract to sell her house thrown out, as it was clear she didn’t remember doing so. This seems qualitatively similar to an LLM that might have a full understanding of its duties, and a willingness to hold to them, in the moment, but would not be the same “person” who signed on to them later.
Are you claiming current LLMs (or systems built with them) are close? Or is this based on something we don’t really have a hint as to how it’ll work?
I could imagine an LLM with a large enough context window, or continual learning, having what it takes to qualify for at least a narrow legal personality. However, that’s a low-confidence view, as I am constantly learning new things about how they work that make me reassess them. It’s my opinion that if we build our framework correctly, it should scale to pretty much any type of mind. And if the system we have built doesn’t work in that fashion, it needs to be re-examined.