Which side of the AI safety community are you in?
In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps:
Camp A) “Race to superintelligence safely”: People in this group typically argue that “superintelligence is inevitable because of X”, and it’s therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Moloch”, “lack of regulation” and “China”.
Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.
Whereas the 2023 extinction statement was widely signed by both Camp A and Camp B (including Dario Amodei, Demis Hassabis and Sam Altman), the 2025 superintelligence statement conveniently separates the two groups – for example, I personally invited all US frontier AI CEOs to sign it, and none chose to do so. However, it would be an oversimplification to claim that frontier AI corporate funding predicts camp membership – for example, someone from one of the top companies recently told me that he’d sign the 2025 statement were it not for fear of how it would affect him professionally.
The distinction between Camps A and B is also interesting because it correlates with policy recommendations: Camp A tends to support corporate self-regulation and voluntary commitments rather than strong, legally binding safety standards akin to those in force for pharmaceuticals, aircraft, restaurants and most other industries. In contrast, Camp B tends to support such binding standards, akin to those of the FDA (which can be viewed as a strict ban on releasing medicines that haven’t yet undergone clinical trials and been safety-approved by independent experts). Combined with market forces, such standards would naturally lead to powerful yet controllable AI tools – to do science, cure diseases, increase productivity and even pursue dominance (economic and military) if that’s desired – but not to full superintelligence until it can be built to meet the agreed-upon safety standards, and it remains controversial whether this is even possible.
In my experience, most people (including top decision-makers) are currently unaware of the distinction between A and B and have an oversimplified view: You’re either for AI or against it. I’m often asked: “Do you want to accelerate or decelerate? Are you a boomer or a doomer?” To facilitate a meaningful and constructive societal conversation about AI policy, I believe that it will be hugely helpful to increase public awareness of the differing visions of Camps A and B. Creating such awareness was a key goal of the 2025 superintelligence statement. So if you’ve read this far, I’d strongly encourage you to read it and, if you agree with it, sign it and share it. If you work for a company and worry about blowback from signing, please email me at mtegmark@gmail.com and say “I’ll sign this if N others from my company do”, where N=5, 10 or whatever number you’re comfortable with.
Finally, please let me provide an important clarification about the 2025 statement. Many have asked me why it doesn’t define its terms as carefully as a law would require. Our idea is that detailed questions about how to word laws and safety standards should be tackled later, once the political will has formed to ban unsafe/unwanted superintelligence. This is analogous to how the detailed wording of laws against child pornography (who counts as a child, what counts as pornography, etc.) got worked out by experts and legislators only after there was broad agreement that some sort of ban was needed.