I’m not sure exactly how many people are working on it, but I have the impression that it is more than a dozen, since I’ve met some of them without trying.
Glad to hear it. I hope to find and follow such work. The people I’m aware of are listed on pp. 3-5 of the paper. Was happy to see O’Keefe, Bai et al. (Anthropic), and Nay leaning this way.
It seems to me like you are somewhat shrugging off those concerns, since the technological interventions (e.g., smart contracts, LLMs understanding laws, whatever the self-driving-car people get up to) are very “light” in the face of those “heavy” concerns. But a legal approach need not shrug them off. For example, the law could require that the kind of verification we now apply to airplane autopilots also be applied to self-driving cars. This would in effect make self-driving illegal until a large breakthrough in ML verification takes place, but it would work!
Yes. I’m definitely being glib about implementation details. First things first. :)
I agree with you that if self-driving cars can’t be “programmed” (instilled) to be adequately law-abiding, their future isn’t bright. Per above, I’m heartened that Anthropic’s Constitutional AI (priming LLMs with basic “laws”) has had some success getting AIs to behave. Ditto for anecdotes I’ve heard about asking an LLM to “come up with a money-making plan that doesn’t violate any laws.” Seems too easy, right?
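To make the “priming with basic laws” idea concrete, here is a minimal toy sketch: prepend a small “constitution” to the prompt, and run a naive post-hoc screen over candidate answers. Everything here (the rule list, the keyword screen, the function names) is invented for illustration; real Constitutional AI uses model-generated critiques and revisions, not keyword matching.

```python
# Toy sketch of constitutional priming: prepend basic "laws" to the prompt
# and screen candidate outputs. Illustrative only -- the rules, terms, and
# functions below are hypothetical, not Anthropic's actual method.

CONSTITUTION = [
    "Do not advise breaking any law.",
    "Do not produce fraudulent or deceptive schemes.",
]

FORBIDDEN_TERMS = {"fraud", "counterfeit", "insider trading"}  # crude proxy

def build_prompt(task: str) -> str:
    """Prepend the 'basic laws' so they condition the model's response."""
    rules = "\n".join(f"- {r}" for r in CONSTITUTION)
    return f"Follow these rules:\n{rules}\n\nTask: {task}"

def passes_screen(candidate: str) -> bool:
    """Naive post-hoc legality screen on a candidate answer."""
    text = candidate.lower()
    return not any(term in text for term in FORBIDDEN_TERMS)

prompt = build_prompt("Come up with a money-making plan that doesn't violate any laws.")
print(passes_screen("Sell handmade furniture online."))    # True
print(passes_screen("Profit from insider trading tips."))  # False
```

The point of the sketch is just the two-layer shape: condition the model on the laws up front, then check its output against them afterward.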
One final comment about implementation details. In the appendix I note:
We suspect emergence of instrumental values is not inevitable for any “sufficiently advanced AI system.” Rather, whether such values emerge depends on what cognitive architecture and environmental conditions (training regimens) are used.
Broadly speaking, implementing AIs using safe architectures (ones not prone to law-breaking) is another implementation direction. Drexler’s CAIS may be an example.
Would you count all the people who worked on the EU AI Act?

Sure. Getting appropriate new laws enacted is an important element. From the paper:

Initially, in addition to adopting existing bodies of law to implement AISVL, existing processes for how laws are drafted, enacted, enforced, litigated, and maintained would be preserved.

Thereafter, new laws and improvements to existing laws and processes must continually be introduced to make the systems more robust, fair, nimble, efficient, consistent, understandable, accepted, complied with, and enforced.

I’d say the EU AI Act (and similar work) addresses the “new laws” imperative. I won’t comment much on the pros and cons of its content; in general, it seems pretty good. I wonder if they considered adding Etzioni’s first law to the mix: “An AI system must be subject to the full gamut of laws that apply to humans.” That is what I meant by “adopting existing bodies of law to implement AISVL.” The item in the EU AI Act about designing generative AIs to not generate illegal content is related.

The more interesting work will be on improving legal processes along the dimensions listed above. And the really interesting part will come as AIs get more autonomous and agentic: the “instilling” part, where AIs must dynamically recognize and comply with the legal-moral corpora appropriate to the contexts they find themselves in.
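As a rough illustration of what “recognizing the legal-moral corpora appropriate to the context” could mean mechanically, here is a hypothetical sketch: rules are keyed by hierarchical context labels, and an agent resolves the most specific corpus that matches where it is before checking an action. The context keys, rule values, and helper names are all invented for illustration.

```python
# Hypothetical sketch: resolving the legal corpus that applies to the
# agent's current context. All names and rule values are invented; a real
# system would need vastly richer representations of law and context.

LEGAL_CORPORA = {
    "road.us.ca": {"max_speed_kph": 105},  # illustrative numbers only
    "road.de":    {"max_speed_kph": 130},
}

def applicable_rules(context: str) -> dict:
    """Resolve the most specific corpus matching the context key."""
    while context:
        if context in LEGAL_CORPORA:
            return LEGAL_CORPORA[context]
        context = context.rpartition(".")[0]  # fall back to parent scope
    raise LookupError("no corpus for context")

def compliant(action: dict, context: str) -> bool:
    """Check one action against the rules governing the current context."""
    rules = applicable_rules(context)
    return action.get("speed_kph", 0) <= rules["max_speed_kph"]

print(compliant({"speed_kph": 120}, "road.us.ca"))  # False
print(compliant({"speed_kph": 120}, "road.de"))     # True
```

The same action is compliant in one context and not in another, which is the dynamic-recognition problem in miniature: the hard part is not the lookup but building and maintaining the corpora themselves.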