The AI alignment community had a major victory in the regulatory landscape, and it went unnoticed by many.
The EU AI Act explicitly mentions “alignment with human intent” as a key focus area in relation to the regulation of systemic risks.
As far as I know, this is the first time “alignment” has been mentioned in a law or major regulatory text.
It’s buried in Recital 110, but it’s there. And it also makes research on AI Control relevant:
“International approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent”.
The EU AI Act also mentions alignment as part of the technical documentation that AI developers must draw up and make available to regulators.
This means that alignment is now part of the EU’s regulatory vocabulary.
But here’s the issue: most AI governance professionals and policymakers still don’t know what it really means, or how your research connects to it.
I’m trying to build a space where AI Safety and AI Governance communities can actually talk to each other.
If you’re curious, I wrote an article about this, aimed at corporate decision-makers who lack literacy in this area.
Would love any feedback, especially from folks thinking about how alignment ideas can scale into the policy domain.
Here is the Substack link (I also posted it on LinkedIn):
https://open.substack.com/pub/katalinahernandez/p/why-should-ai-governance-professionals?utm_source=share&utm_medium=android&r=1j2joa
My intuition says that this was a push from the Future of Life Institute.
Thoughts? Did you know about this already?
I did not know about this already.
I don’t think it’s been widely discussed within AI Safety forums. Do you have any other comments, though? Epistemic pessimism is welcome XD. But I did think that this was at least update-worthy.
I did not know about this either. Do you know whether the EAs in the EU Commission know about it?
Hi Lucie, thanks so much for your comment!
I’m not very involved with the Effective Altruism community myself. I did post the same Quick Take on the EA Forum today, but I haven’t received any responses there yet, so I can’t really say how widely known this is.
For context: I’m a lawyer working in AI governance and data protection, and I’ve also been doing independent AI safety research from a policy angle. That’s how I came across this, just by going through the full text of the AI Act as part of my research.
My guess is that some of the EAs working closely on policy probably do know about it, and may have influenced this text too! But it doesn’t seem to have been broadly highlighted or discussed in alignment forums so far, which is why I thought it might be worth flagging.
Happy to share more if helpful, or to connect further on this.