In an ideal world, well-meaning regulation coming from the EU could become a global standard and really make a difference. In reality, however, I see little value in EU-specific regulations like these. They are unlikely to impact frontier AI companies such as OpenAI, Anthropic, Google DeepMind, xAI, and DeepSeek, all of which are based outside the EU. These firms might accept the cost of exiting the EU market if the regulations become too burdensome.
While the EU market is significant, in a fast-takeoff, winner-takes-all AI race (as outlined in the AI-2027 forecast), market access alone may not sway these companies’ safety policies. Worse, such regulations could backfire, locking the EU out of advanced AI models and crippling its competitiveness. This could deter other nations from adopting similar rules, further isolating the EU.
As an EU citizen, I view the game theory in an “AGI-soon” world as follows:
Alignment Hard
EU imposes strict AI regulations → Frontier companies exit the EU or withhold their latest models, continuing the AI race → Unaligned AI emerges, potentially catastrophic for all, including Europeans. Regulations prove futile.
Alignment Easy
EU imposes strict AI regulations → Frontier companies exit the EU, continuing the AI race → Aligned AI creates a utopia elsewhere (e.g., the US), while the EU lags, stuck in a technological “stone age.”
Both scenarios are grim for Europe.
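To make “both scenarios are grim” concrete, here is a rough sketch of the payoff structure I have in mind, written in Python purely for illustration. The stance names, the “lenient” branch, and the outcome labels are my own simplifications rather than anything established; the point is only that, under this framing, strict regulation does not improve the EU’s outcome in either alignment branch.

```python
# Illustrative only: a toy decision matrix for the two scenarios above.
# Assumption (mine, not established): the EU's only lever is its regulatory
# stance, while alignment difficulty and the behaviour of non-EU frontier
# labs are outside its control. The "lenient" rows extrapolate the argument.

outcomes_for_eu = {
    ("strict",  "alignment_hard"): "labs exit or withhold models; unaligned AI hits everyone; regulation futile",
    ("strict",  "alignment_easy"): "labs exit; aligned AI flourishes elsewhere; EU stuck in a 'stone age'",
    ("lenient", "alignment_hard"): "labs stay, but unaligned AI still hits everyone",
    ("lenient", "alignment_easy"): "labs stay; EU shares in access to aligned AI",
}

for stance in ("strict", "lenient"):
    for world in ("alignment_hard", "alignment_easy"):
        print(f"EU {stance:7} | {world:14} -> {outcomes_for_eu[(stance, world)]}")
```

Obviously this ignores probabilities and intermediate policy options; it is only meant to show why, in this framing, strict regulation looks like a dominated move from the EU’s perspective.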
I could be mistaken, but the current US administration and leaders of top AI labs seem fully committed to a cutthroat AGI race, as articulated in situational awareness narratives. They appear prepared to go to extraordinary lengths to maintain supremacy, undeterred by EU demands. Their primary constraints are compute and, soon, energy—not money! If AI becomes a national security priority, access to near-infinite resources could render EU market losses a minor inconvenience. Notably, the comprehensive AI-2027 forecast barely mentions Europe, underscoring its diminishing relevance.
For the EU to remain significant, I see two viable strategies:
Integrate fully with US AI efforts, securing a guarantee of equal benefits from aligned superintelligence. This could also give EU AI safety labs a seat at the table in alignment discussions.
Develop an autonomous EU AI leader that excels in capabilities and alignment research and can negotiate with the US and China as an equal. This would demand a drastic policy shift, massive investment in data centers and nuclear power, and deregulation, which seems unrealistic in the short term.
OpenAI, Anthropic, and Google DeepMind are already the main signatories to these Codes of Practice.
So whatever is agreed and negotiated there is what will impact frontier AI companies. That is the problem.
I’d love to see specific criticisms from you on sections 3, 4 or 5 of this post! I am happy to provide feedback myself based on useful suggestions that come up in this thread.
Do you have any public evidence that OpenAI, Anthropic and Google DeepMind will sign?
From my perspective, this remains uncertain and will likely depend on several factors, including the US government’s position on the matter and the final code’s content (particularly the measures that are unpopular among companies, such as the independent third-party assessment in measure 11).
My understanding is that they expressed willingness to sign, but lobbying efforts on their side are still ongoing, as is the entire negotiation.
The only big provider I’ve heard explicitly refuse to sign is Meta: EIPA in Conversation With—Preparing for the EU GPAI Codes of Practice (somewhere between minutes 34 and 38).