It’s important to remember that legal personhood doesn’t exclude legal representation—quite the opposite. All juridical persons, such as corporations, have legal representatives. Minors and adults under legal protection are natural persons but with limited legal capacity and also require legal representatives. Moreover, most everyone ultimately ends up represented by an attorney—that is, a human representative or proxy. The relationship between client and attorney also relies heavily on trust (fides). From this perspective, the author’s proposal seems like a variation on existing frameworks, just without explicit legal personhood. However, I believe that if such a system were implemented, legal doctrine and jurisprudence would likely treat it as a form of representation that implies legal personhood similar to that of minors or corporations, even without explicit statutory recognition.
That said, I’m not convinced it makes much difference whether we grant AI legal representation with or without formal legal personhood when it comes to the credibility of human commitments. Either way, an AI would have good reason to suspect that a legal system created by and for humans, with courts composed of humans, wouldn’t be fair and impartial in disputes between an AI (or its legal representative) and humans. Just as I wouldn’t be very confident in the fairness and impartiality of an Israeli court applying Israeli law if I were Palestinian (or vice versa)—with all due respect to courts and legal systems.
Beyond that, we may place excessive faith in the very concept of legal enforcement. We want to view it as a supreme principle. But there’s also the cynical adage that “promises only bind those who believe in them”—the exact opposite of legal enforcement. Which perspective is accurate? Legal justice isn’t an exact science or a mechanical, deterministic process with predictable outcomes; it’s a heuristic and somewhat random process relying on adversarial debate, the burden of proof and evidence, and the interpretation of law and facts by human judges. Uncertainty is high and results are never guaranteed. If outcomes were guaranteed, predictable, and efficient, there would be no need to hire expensive attorneys in hopes they’d be more persuasive and improve your chances. If legal enforcement were truly reliable, litigation would be rare. That’s clearly not the case: legal disputes are numerous, and every litigant seems equally confident they’re in the right. The reality is rather that companies and individuals do their best to extract maximum benefit from contracts while investing minimally in fulfilling their commitments. This is a cynical observation, but I suspect legal enforcement is a beautiful ideal with very imperfect efficacy and reliability. An AI would likely recognize this clearly.
The author acknowledges that legal enforcement is not always guaranteed, but I think the problem is underestimated; it’s a significant flaw in the proposal. I don’t believe we can build a safe system to prevent or mitigate misalignment on such a fragile foundation. That said, I must admit I don’t have a miraculous alternative to suggest—technical alignment is also difficult—so I can accept such an idea as “better than nothing” and worth exploring further.
Thank you for this contribution.