Founder and CEO of ControlAI.
Andrea_Miotti
Nice post, strong upvoted. You might be interested in a related post also dealing with the illusion of “escaping the permanent underclass”: https://cognition.cafe/p/the-realpolitik-of-the-permanent
Yup, he’s already there! Connor moved to America recently and he’s based in Washington, DC, leading our work in the US!
See: https://www.ettf.land/p/conjecture-a-retrospective
Thanks Michael!
I expect the third, fourth, fifth etc. person we hire to be at least as effective at getting those meetings as current staff were at equivalent tenure, and probably more so. I also expect the team in aggregate to get more effective over time: this is what has happened over and over for each of our workstreams.
This is for a few reasons, also listed in the post:
- ControlAI’s success relies on strong processes and infrastructure, not on pre-existing political insider networks or rare and obscure talent. Our lawmaker efforts all started with staff members (who are great!) with no insider networks and between zero and a few years of experience in policy. We don’t succeed based on insider connections: we succeed because of our direct, scalable method. New hires also inherit advantages the first hire didn’t have: a refined playbook and existing campaign traction to point to.
- Success compounds vertically in each country, and horizontally across countries.
  a. Vertically, within a single country, the more lawmakers support a campaign as a result of our meetings, the more other lawmakers learn about extinction risk from AI and the need to tackle the threat from superintelligent AI, making it marginally easier to get more meetings and more lawmakers on board. This is a micro-example of our macro-strategy: the bottleneck right now is lack of awareness, and building common knowledge leads to faster and faster change.
  b. Horizontally, across countries, the more lawmakers in a major country support the campaign, the (slightly) easier it is for lawmakers in another country to take the topic seriously, meet us, and lend their support. In the UK, we started from zero. In Canada, Germany, and the US, we started with “Dozens of UK lawmakers, across parties, recognize extinction risk from AI and superintelligence as a national security threat”.
- Our approach compounds across ControlAI as a whole. Every week, our briefing gets better: our materials, processes, arguments, and pitch are improved continuously based on real-world feedback from the meetings we have. Each iteration is propagated across the org.
Given this, the main scaling risk for me is not that hire 3 underperforms hires 1 and 2. It’s the ordinary challenge of hiring and onboarding well, which we’re actively investing in right now with more onboarding materials, internal and external writeups of our theory of change and our approach, and even more streamlined processes.
My estimate accounts for increased, adversarial lobbying by frontier AI corporations, which will ramp up regardless as they get closer and closer to superintelligence. I do not expect our being heavily funded to change that to a very significant degree.
We can already see right now that AI corporations are ramping up their lobbying and influence operations immensely, across the board. To name just a few examples:
- DC lobbyists received nearly $130M for work related to AI in 2025, an increase of 370% since the release of ChatGPT (a rough back-of-envelope on this figure follows this list): https://news.bgov.com/bloomberg-government-news/ai-influence-spending-booms-signaling-monumental-clashes-ahead
- An anti-AI-regulation super PAC coordinated by Chris Lehane (chief OpenAI lobbyist), funded by OpenAI’s Greg Brockman and A16Z, raising more than $125M to spend on electoral races against pro-AI-regulation candidates: https://www.cnbc.com/2026/01/30/ai-industry-super-pac-raises-campaign-money.html
- Anthropic ramping up its own super PAC contributions: https://builtin.com/articles/super-pacs-ai-regulation-2026-midterms
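As a rough sanity check on that first figure (my own arithmetic, not from the article): a 370% increase means the 2025 total is 4.7× the pre-ChatGPT baseline, so the implied baseline is about $130M / (1 + 3.7) ≈ $28M per year of AI-related lobbying spend before ChatGPT’s release.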
It’s a little hard for me to tell whether you agree or disagree with the plan in the post.
My reading of the piece you posted is that it correctly identified (23 years ago!) the importance of the political process in avoiding bad outcomes from AI, and was particularly prescient.
To highlight the key part that I’d like people to take away: “only a strong public movement driving government regulation” has a chance of solving the problem.
I completely agree, and the post above explains how our plan at ControlAI focuses on informing both the public and lawmakers, at scale, about the extinction risk posed by superintelligence and how to prevent it.
Preventing extinction from ASI on a $50M yearly budget
Thanks Charlotte! Yes, a big part of our approach is based on building common knowledge of the risk of extinction from superintelligence, so people know that other people know about this risk.
The more people we reach, the faster things can compound.
ControlAI 2025 Impact Report: our progress toward an international ban on ASI
How middle powers may prevent the development of artificial superintelligence
Modeling the geopolitics of AI development
Three main views on the future of AI
Anthropic CEO calls for RSI
The Compendium, A full argument about extinction risk from AGI
Thanks! Do you still think the “No AIs improving other AIs” criterion is too onerous after reading the policy enforcing it in Phase 0?
In that policy, we developed the definition of “found systems” so that this measure applies only to AI systems found via mathematical optimization, rather than to AIs (or any other code) written directly by humans.
This reduces the cost of the policy significantly: it applies only to a very small subset of all AI activities and leaves most innocuous software untouched.
A Narrow Path: a plan to deal with AI extinction risk
In terms of explicit claims:
“So one extreme side of the spectrum is build things as fast as possible, release things as much as possible, maximize technological progress [...].
The other extreme position, which I also have some sympathy for, despite it being the absolutely opposite position, is you know, Oh my god this stuff is really scary.
The most extreme version of it was, you know, we should just pause, we should just stop, we should just stop building the technology for, indefinitely, or for some specified period of time. [...] And you know, that extreme position doesn’t make much sense to me either.”
Dario Amodei, Anthropic CEO, explaining his company’s “Responsible Scaling Policy” on the Logan Bartlett Podcast on Oct 6, 2023.
Starts at around 49:40.
Thanks for the kind feedback! Any suggestions for a more interesting title?
If we win: first order of business is to celebrate and give the team a long holiday! The most likely cause of death of the authors has been averted.
After that, the work is building the institutions and societal infrastructure so humanity can survive and thrive alongside very powerful AI (and increasingly powerful technology in general). Even with a global ASI ban, powerful AI systems will still exist and society will still go through radical disruption. All versions of the future are wild, even conditional on a ban.
And a ban isn’t a one-and-done. Humanity will be protected from self-annihilation only if there is continued vigilance, and the institutions and people that enforce the ASI regime are maintained and updated as technological progress continues.
ControlAI winding down after winning would be a completely fine outcome. Whether the phase two I describe above becomes ControlAI’s focus or a new venture’s is a good question for after we’ve prevented extinction risk from ASI.
Whether that work eventually entails finding ways to safely develop ASI under operationally adequate institutions and just processes, or not, is unclear to us. And in any case, a globally enforced ASI ban is the precondition for ever doing a “safe ASI” plan.
Personally, there are many other causes I care about, such as extending human lifespan and updating political institutions to deal with 21st century technology, that will become higher priority once the largest threat to the continued existence of humanity is averted.
You’re right that this could happen, and we’d consider it a bad outcome, although a better one than extinction. We don’t support a universally anti-technology coalition, which is one of the many reasons we take care to keep our messaging focused specifically on ASI risk rather than omnicause anti-technologism.