The European AI Office is finalizing Codes of Practice that will define how general-purpose AI (GPAI) models are governed under the EU AI Act.
They are explicitly asking for global expert input, and feedback is open to anyone, not just EU citizens.
The guidelines in development will shape:
- The definition of “systemic risk”
- How training compute triggers obligations (see the back-of-envelope sketch after this list)
- When fine-tuners or downstream actors become legally responsible
- What counts as sufficient transparency, evaluation, and risk mitigation
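For context on the compute trigger: the AI Act (Article 51) presumes a GPAI model poses systemic risk once its cumulative training compute exceeds 10^25 FLOPs. Below is a minimal back-of-envelope sketch of how that threshold is commonly estimated, using the standard ~6 × parameters × tokens approximation; the model sizes and token counts are purely illustrative assumptions, not real figures.

```python
# Back-of-envelope check against the EU AI Act's 10^25 FLOP presumption
# threshold for "systemic risk" GPAI models (Article 51).
# Uses the common ~6 * N * D approximation for dense-transformer training
# compute; the example model figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the AI Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# Hypothetical models: (parameter count, training tokens) -- not real figures.
examples = {
    "small-7B":   (7e9, 2e12),
    "large-300B": (3e11, 1e13),
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    triggered = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk presumption: {triggered}")
```

How cumulative compute is actually counted (e.g. whether fine-tuning runs or synthetic-data generation are included) is exactly the kind of detail the guidelines will pin down, and where technical input helps.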
Major labs (OpenAI, Anthropic, Google DeepMind) have already signaled willingness to sign the upcoming Codes of Practice, which will likely become the de facto compliance standard across the EU and possibly beyond.
I believe that more feedback from alignment and interpretability researchers is needed.
Without strong input from AI safety researchers and technical AI governance experts, these rules could lock in shallow compliance norms (mostly centered on copyright or reputational risk) while missing the core challenges around interpretability, loss of control, and emergent capabilities.
I’ve written a detailed long-form post breaking down exactly what’s being proposed, where input is most needed, and how you can engage.
Even if you don’t have policy experience, your technical insight could shape how safety is operationalized at scale.
📅 Feedback is open until 22 May 2025, 12:00 CET
🗳️ Submit your response here
Happy to connect individually with anyone who’d like help drafting meaningful feedback.
Thanks for posting and bringing attention to this! I’ve forwarded it to my friend who works in AI safety.
Thank you!!