AI Safety Lawyer (or trying as hard as possible to make AI Safety Law a legal practice field).
▶Supporting Equistamp with Legal Ops & Research◀
What you’ll find on my CV: Responsible AI Officer, Legal Practice LL.M., data privacy law, compliance, and AI governance roles at multinationals, startups, and non-profits.
Substack: Stress-Testing Reality
→Ask me about: making technical research legible to lawyers and regulators, frameworks for AI liability (EU or UK law), and general compliance questions (GDPR, EU AI Act, DSA/DMA, Product Liability Directive).
→Book a free slot: https://www.aisafety.com/advisors
My passion is: legal research for AI Safety orgs that also informs governance, and promoting safety literacy among legal and compliance professionals.
I work on tractable legal mechanisms that map alignment, interpretability, and control research to concrete compliance obligations (EU AI Act, GDPR, PLD, NIST/ISO) and propose implementation plans.
Current projects
Law-Following AI (LFAI): released a preprint (in preparation for submission to the Cambridge Journal for Computational Legal Studies) on whether legal standards can serve as alignment anchors and how law-alignment relates to value alignment, building on the framework proposed by Cullen O’Keefe and the Institute for Law & AI.
Regulating downstream modifiers: writing “Regulating Downstream Modifiers in the EU: Federated Compliance and the Causality–Liability Gap”.
Open problems in regulatory AI governance: co-developing with ENAIS members a tractable list of open problems where AI Safety work can close governance gaps (deceptive alignment, oversight loss, evaluations).
AI safety literacy for tech lawyers: building a syllabus, used by serious institutions, that translates alignment, interpretability, and control research into audits, documentation, and enforcement-ready duties.