AI Safety Lawyer (or trying as hard as possible to make AI Safety Law a legal practice field).
▶Supporting Equistamp with Legal Ops & Research◀
What you’ll find on my CV: Responsible AI Officer, Legal Practice LLM, Data Privacy Laws, Compliance, AI Governance @ Multinationals, startups, non-profits.
Substack: Stress-Testing Reality
→Ask me about: Making technical research legible to lawyers and regulators, frameworks for AI liability (EU or UK law), and general compliance questions (GDPR, EU AI Act, DSA/DMA, Product Liability Directive).
→Book a free slot: https://www.aisafety.com/advisors
My passion is: Legal research for AI Safety orgs that also informs governance, and promoting safety literacy among legal and compliance professionals.
I work on tractable legal mechanisms that map alignment, interpretability, and control research to concrete compliance obligations (EU AI Act, GDPR, PLD, NIST/ISO) and propose implementation plans.
Current projects
Law-Following AI (LFAI): released a preprint (in prep for submission to the Cambridge Journal for Computational Legal Studies) on whether legal standards can serve as alignment anchors and how law-alignment relates to value alignment. Building on the original framework proposed by Cullen O’Keefe and the Institute of Law and AI.
Regulating downstream modifiers: writing “Regulating Downstream Modifiers in the EU: Federated Compliance and the Causality–Liability Gap”.
Open problems in regulatory AI governance: co-developing with ENAIS members a tractable list of open problems where AI Safety work can close governance gaps (deceptive alignment, oversight loss, evaluations).
AI-safety literacy for tech lawyers: building a syllabus used by serious institutions; focuses on translating alignment/interpretability/control into audits, documentation, and enforcement-ready duties.
Regardless of how users may feel about the changes introduced, I applaud the significant improvement in clarity and transparency (compared to the previous policy).
Thank you very much! I think this is at least fairer to users, especially, like you said, to new users who may end up confused about what they did wrong.
I really do not mind disclosing how much LLM assistance I used to write a post or comment. In fact, being "forced" to think about how much of a sentence was purely mine vs. Claude-written is helping me a lot with clear thinking.
Like others mentioned, I also find dictation mode very useful, and it's true that the distinction can get blurry when you've spent 30 minutes talking into the mic, which is very different from doing the bare minimum of thinking. But I appreciate LW keeping me accountable on LLM reliance; after all, thinking better is why I am here ❤.