Thrilled to announce the advisory panel for the AI Safety Law-a-thon (Oct 25-26)!
Our participants will receive feedback on their work from four exceptional experts bridging AI safety research, legal practice, and governance:
Charbel-Raphaël Segerie – Executive Director of the French Center for AI Safety (Centre pour la Sécurité de l'IA, CeSIA), OECD AI expert, and driving force behind the AI Red Lines initiative. His technical research spans RLHF theory, interpretability, and safe-by-design approaches. He has supervised multiple research groups across ML4Good bootcamps, ARENA, and AI safety hackathons, bridging cutting-edge technical AI safety research with practical risk evaluation and governance frameworks.
Chiara Gallese, Ph.D. – Researcher at the Tilburg Institute for Law, Technology, and Society (TILT) and an active member of four EU AI Office working groups. Dr. Gallese has co-authored papers with computer scientists on ML fairness and trustworthy AI, conducted testbed experiments addressing bias with NXP Semiconductors, and managed a portfolio of approximately 200 high-profile cases, many valued in the millions of euros.
Yelena Ambartsumian – Founder of AMBART LAW PLLC, a New York City law firm focused on AI governance, data privacy, and intellectual property. Her firm specializes in evaluating AI vendor agreements and helping companies navigate downstream liability risks. Yelena has published in the Harvard International Law Journal on AI and copyright issues, and is a co-chair of IAPP's New York KnowledgeNet chapter. She is a graduate of Fordham University School of Law with executive education from Harvard and MIT.
James Kavanagh – Founder and CEO of AI Career Pro, where he trains professionals in AI governance and safety engineering. Previously, he led AWS's Responsible AI Assurance function and was Head of Microsoft Azure Government Cloud Engineering for the defense and national security sectors. At AWS, James's team was the first of any global cloud provider to achieve ISO 42001 certification.
These advisors will review the legal strategies and technical risk assessments our teams produce, providing feedback on practical applicability to AI policy, litigation, and engineering decisions.
As you can see, these experts represent the exact key areas of change we are tackling with the AI Safety Law-a-thon:
Industry governance and engineering practices
BigLaw litigation
Policy and legal research that informs regulators
International cooperation on AI governance (Charbel initiated the Global Call for AI Red Lines, signed by Nobel laureates, former heads of state, and 200+ prominent figures).
Can't wait to see the results of this legal hackathon. See you there!
Closing our advisory panel with one last amazing addition!
Ze Shen Chin – Co-lead of the AI Standards Lab and Research Affiliate with the Oxford Martin AI Governance Initiative. He has contributed to the EU GPAI Code of Practice and analysed various regulatory and governance frameworks. His research currently focuses on AI risk management. Previously, he spent over a decade in the oil and gas industry.