Persuading Trump of a proper US-China-led AI Treaty
By Rufo Guerreschi, President, Coalition for a Baruch Plan for AI (CBPAI)
A declared race to Artificial Superintelligence is bringing humanity to a three-way fork: catastrophic loss of control, authoritarian capture, or humanity’s triumph through a proper global AI treaty.
We’re running a precision persuasion campaign targeting the dozen individuals who could tip Trump to co-lead The Deal of the Century — a US-China global AI treaty inspired by the Baruch Plan that President Truman proposed to the UN on the very day Trump was born.
TL;DR
We’re a seed-stage nonprofit coalition of 10 international NGOs and 40+ expert advisors — from the NSA, World Economic Forum, UN, Princeton, Carnegie Council, and McKinsey — working on what we believe is a critically neglected lever for AI x-risk reduction: building the political coalitions needed to make a bold, timely and proper US-China-led AI treaty happen.
Our 370+ page Strategic Memo maps specific influence pathways to Trump’s key AI advisors, with deep profiles of key potential influencers of Trump’s AI policy based on 517+ analyzed articles and videos. We’re lean (operating on $7,500/month, seeded by the Survival and Flourishing Fund), we’re in a critical window, and we’re seeking funding and strategic introductions.
Website: cbpai.org | Team: cbpai.org/team | Strategic Memo: ResearchGate
Why This Matters Now
Even if one lab or NGO succeeds in creating a perfectly safe AI or ASI, or a working alignment technique, it won’t matter unless every frontier lab in the world is required to implement it.
Even if 100 nations sign a perfect treaty, it won’t matter unless the US and China sign it as well. Xi is not likely to agree to a treaty led by others, and Trump surely won’t. Given the timelines and the nature of the challenge, it is all in the hands of two men.
Xi has repeatedly called for global AI governance. That means the critical variable is whether Trump can be persuaded to co-lead a bold global AI treaty with Xi.
Four Trump-Xi summits are planned for 2026, starting in April. 63% of US voters believe it’s likely that “humans won’t be able to control it anymore”, and 77% support a strong international AI treaty. Most key potential influencers of Trump’s AI policy are increasingly concerned, and many are calling for a treaty. Trump’s approval rating sits at a low of 35-40%. The political window is real.
But not just any treaty. An AI treaty would be a good (fantastic!) outcome only if, at a minimum, it reliably prevents ASI and grave misuse, and reduces global concentration of power and wealth.
Many understandably argue that it may be better to take a coin-flip ASI gamble than a treaty that turns into an authoritarian dystopia, or that completely locks away the astounding prospects of flourishing for humans and sentient beings.
Our Deal of the Century initiative privately targets key potential humanist influencers of Trump’s AI policy — JD Vance, Sam Altman, Peter Thiel, Elon Musk, Steve Bannon, and others — to champion such a treaty towards Trump.
Our 370+ page Strategic Memo details how the inherent dynamics of such a treaty, together with deliberate design of its treaty-making process, make it likely or highly likely to prevent both ASI and global authoritarianism.
What We’re Doing
The Coalition for a Baruch Plan for AI draws its name from the 1946 Baruch Plan — the US proposal to place nuclear technology under international control. That plan failed, and humanity has lived with the consequences ever since. We aim to succeed where it didn’t, by applying a more effective treaty-making model and learning from its mistakes.
Our core theory of change is simple: the window for a US-China AI treaty is narrow but real. Trump’s transactional dealmaking instinct, combined with genuine anxiety about AI among several of his closest advisors, creates an opening. Our job is to map the pathways and catalyze the coalitions that could exploit it.
Concretely, we’re doing three things:
1. Deep strategic analysis. Our Strategic Memo (now at v2.6, 370+ pages) contains detailed psychological and philosophical profiles of 15+ key influencers — JD Vance, Sam Altman, Steve Bannon, Pope Leo XIV, Dario Amodei, and others — based on analysis of 517+ articles and videos. Our finding: most share deeply-held non-secular humanist values and are privately uncomfortable with an unconstrained race to superintelligence. They can be united.
2. Persuasion tours and direct engagement. We conducted our first US Persuasion Tour in October 2025 and are planning a DC visit in February 2026, targeting strategic introductions to influencers and their inner circles.
3. Coalition building. We’ve aggregated 10 NGOs and 40+ multidisciplinary advisors into a coordinated coalition — something that did not exist before in the AI governance treaty space. Our member organizations span the spectrum from PauseAI to the World Federalist Movement to the European Center for Peace and Development (UN University for Peace).
An Opening Window of Feasibility
Xi has repeatedly called for global AI governance. Four Trump-Xi summits are planned for 2026, starting in April. Trump’s approval rating is at its lowest, around 35-40%. By now, 63% of US voters believe it’s likely that “humans won’t be able to control it anymore”, and 53% believe it’s somewhat or very likely that “AI will destroy humanity”. Not only that, but 77% of all US voters support a strong international AI treaty. Many of Trump’s AI policy influencers are increasingly concerned or calling for a treaty.
Irresistible to Trump
In a very similar political context, the pragmatic US president Truman proposed to the UN — on the very day Trump was born — history’s boldest treaty, for nuclear weapons. Trump has a chance to secure and future-proof US economic leadership, prevent Chinese dominance, and avoid immense risks to his own life and his family’s. He would finish what Truman started, succeed where Truman failed, and establish a legacy worth 100 Nobel Peace Prizes, one that would enable him to retire in 2029, widely cherished the world over.
Our Team’s Credibility
Our coalition’s advisory network includes people from institutions that lend real weight to this effort:
Former NSA Chief Cryptologist and former official at TAO (Tailored Access Operations)
Former Global Head of Cyber Operations at UBS
Former Chief Economist of the World Economic Forum
Chair of the UN Commission on Science and Technology for Development
Emeritus Professor of International Law at Princeton
Board member of the Carnegie Council for Ethics in International Affairs
Fellow of the UN Institute for Disarmament Research
Our 10 member NGOs include the Transnational Working Group on AI of the World Federalist Movement (est. 1947), PauseAI Global, AITreaty.org, the International Congress for the Governance of AI, and the European Center for Peace and Development / UN University for Peace.
The network spans 15+ countries across five continents. Full details on our team page.
Why This Is High-Leverage
A few reasons we think this is an unusually good use of marginal funding:
Neglectedness. There are many organizations doing technical AI safety research, and a growing number doing domestic AI policy. Almost no one is doing the operational, political coalition-building work needed to make international AI treaties tractable. We occupy a nearly empty niche.
Timing. Trump’s current term creates a unique window. His “Deal of the Century” instinct — combined with real anxiety about AI among advisors like Vance and Bannon — means the political conditions for a US-led treaty initiative may be better in 2025–2027 than at any foreseeable future point.
Cost-effectiveness. We’ve produced a 370+ page strategic analysis, built a 10-NGO coalition, onboarded 40+ advisors, conducted a US Persuasion Tour, and generated substantial analytical output — all on $60K in total funding from SFF and 1,500+ volunteer hours. Our burn rate is extremely low relative to output.
Complementarity. We’re not competing with technical alignment work or domestic policy advocacy. We’re building the political infrastructure that would be needed if and when those efforts succeed in generating the technical knowledge and public support for a treaty.
Who’s Calling for a Baruch Plan for AI
We want to be clear: the following individuals are not members of our Coalition. But some of the most influential voices in AI have independently called for or endorsed the Baruch Plan model for AI governance:
Yoshua Bengio (Turing Award winner, most-cited AI scientist): described the Baruch Plan as “an important avenue to explore to avoid a globally catastrophic outcome” (July 2024 podcast)
Jack Clark (Anthropic co-founder): suggested the Baruch Plan for AI governance, as reported by The Economist (May 2023)
Jaan Tallinn (Future of Life Institute co-founder): suggested it in a December 2023 podcast
Ian Hogarth (UK AI Safety Institute): referenced the Baruch Plan as a model
Nick Bostrom (leading AI philosopher): referenced the model
When this many people at the frontier of AI research converge on the same historical analogy, it’s worth taking seriously. Someone needs to do the operational and political work to translate that consensus into action. That’s us.
What We Need
Funding
We are actively seeking $10–30K in bridge funding now, and $150K–$400K to scale operations through the critical 2026–2027 policy window. This would cover:
A small core team (currently all-volunteer) transitioning to part-time paid roles
Travel for DC and international persuasion tours
Communications and strategic outreach
Event organization and coalition coordination
We’re open to standard grants, regrants, or matching pledges. Our previous funder was the Survival and Flourishing Fund ($60K, February 2025). We also have a Manifund page.
Strategic Introductions
Equally valuable: introductions to people with access to Trump’s AI policy circle, or to key influencers we’ve profiled in our Strategic Memo. If you know people connected to JD Vance, David Sacks, Sam Altman, or the broader national security AI policy community in DC — we’d love to talk.
Advisors and Volunteers
We’re looking for people with expertise in international law, AI policy, strategic communications, or fundraising who want to contribute to a high-stakes, high-impact effort.
About Me
I’m Rufo Guerreschi, an activist, researcher, and entrepreneur who’s spent the past decade at the intersection of digital security, democratic governance, and emerging technology policy. I founded the Trustless Computing Association in 2014 and have convened its Free and Safe in Cyberspace conference series since 2015. Earlier career: Global VP at 4thPass (€10M+ in deals with Telefonica and others), CEO of Open Media Park (grew valuation from €3M to €21M), and founder of Participatory Technologies (open-source e-democracy platforms deployed across three continents).
I convened the Coalition for a Baruch Plan for AI in July 2024 and have been its full-time volunteer president since. I’m based between Rome and Zurich.
Get In Touch
If you’re a funder, regrantor, potential advisor, or someone with strategic connections to AI policy influencers, I’d genuinely welcome a conversation.
Email: rufo@guerreschi.org
Website: cbpai.org
Strategic Memo: ResearchGate
Manifund: manifund.org/projects/coalition-for-a-baruch-plan-for-ai
LinkedIn: linkedin.com/in/rufoguerreschi
We’re a small team doing big work in a narrow window. If the Treaty of Versailles had been replaced by a Baruch Plan that actually worked, the 20th century would have looked very different. We think the AI equivalent of that moment is now.
I don’t think this will work. If you want sane AI policy in the US in the near future, your team should instead use its influence to help get Trump removed from office.
As you well know, removing Trump from office is very, very unlikely.
Like it or not our future is in his hands.
Fortunately, his own personal interests, and that of key potential influencers of Trump’s AI policy, align with the possibility of him co-leading a bold, timely and proper AI treaty.
It is very hard to face, but if we don’t, we can’t act efficiently.
PS: this message was edited to remove some unnecessarily aggressive comments
You don’t have as much money to pay bribes as OpenAI does. Thus, with Trump in office, they get to write US AI regulations, and you do not. Your current approach is therefore even less likely to work.
I know the truth is very hard to face...
I understand your skepticism. But our deep analysis reveals that there are other incentives and conditions that can work on Trump, beyond the pecuniary. Here are 22 reasons why we believe Trump can be persuaded:
https://www.cbpai.org/blog-1/twenty-reasons-why-trump-could-be-persuaded-to-co-lead-a-bold-global-ai-treaty-with-xi
The LLM that wrote that document doesn’t quite seem on board with what it was prompted to write. There are way too many arguments of the form, “The evidence is against X, therefore X. Surprise!” Here’s just one of them —
This is like saying about a gambler, “He’s ten thousand dollars in debt. He’s gonna lose his house. He really needs this win! Therefore he will start making good bets now, to recoup his losses. You and I should bet on him winning now.” But it’s the gambler’s betting history that has put him in debt! The 36% approval rating predicts a future of more bad decisions, not a sudden pivot to good ones.
The document has a repeated theme of betting on surprises and unpredictability. But the thing about surprises is that they usually do not happen. “X would be a wild unpredictable surprise” is evidence against X, not for it.
It makes much more sense to predict that a defectbot is going to keep defecting; that rejection of treaties and cooperation predicts more rejection of treaties and cooperation; and (particularly!) that a history of decision-making on the basis of corruption and bribery predicts a future of more of the same, not a surprise pivot to effective action for the well-being of the nation and humanity.
You say that removing Trump from office is unlikely. My argument is that getting good AI policy out of Trump is much more unlikely, for the specific reason that AI policy under Trump is paid for by people who are committed to acceleration heedless of x-risk.
Removing Trump is on the critical path to achieving good US AI policy, just as it is on the critical path to achieving good US energy policy, to achieving good US public health policy, to achieving a return to international free trade, to achieving justice for the Trump-Epstein victims, and so on.
People who are interested in each of these specific causes would do well to recognize their common interest, and act together to achieve it. For this to happen may indeed be unlikely; but it would be less unlikely than the surprises that your LLM proposes to bet on.
If you read the 22 reasons Trump could be persuaded, you will find many ways in which he could be persuaded to do so by some influential people around him, even if for the wrong reasons (vanity, etc.).
Removing Trump is not only unlikely but very, very unlikely. If he were removed, Vance would take his place, and our plan would be even more valid (see the Pope-Vance AI connection).
The fact that his ratings are low makes him more apt to be persuaded by trusted influencers with a good plan.
My main concern is that, in a targeted persuasion-of-the-powerful campaign, you want folks who are very good at persuading the powerful. By talent, this seems largely to draw from the “intellectually honest fringe” (where I and my friends live), who (as I understand it) haven’t been successful in “targeted persuasion of fancy, respectable people.” I really like the idea, though! I think this might be one of the most under-resourced causes in AI safety, and I plan to donate to help with the bridge funding. But—what should reassure me about my worries, that y’all are odd and not-well-connected, therefore being ill-suited to persuade all these important people?
Malcolm,
You’re concerned whether “odd and not-well-connected” folks can execute “targeted persuasion of fancy, respectable people.” That is a very fair concern.
Here is what I can tell you publicly. (A confidential briefing with specifics is available under a confidentiality agreement; I will send you a private message in the meantime.)
I was raised quite wealthy in Italy, competed as an Italian national golf champion (2nd place), and as a young man spent years in Italy’s Emerald Coast (20 years of summers) and Miami Beach (7 years) social circles—I grew up with the very wealthy. I understand how to navigate elite spaces because I was raised in them. Combined with 15 years building technical credibility, this enables exactly the targeted persuasion you’re questioning.
The empirical proof: our October 2025 US Persuasion Tour delivered 85+ meetings (against a projection of 15-20): 23 AI lab policy officials at OpenAI/Anthropic/DeepMind, 8 national security contacts, and 4 intelligence community officials. Direct pathways to 2 of 10 primary target influencers.
15-year track record:
Trustless Computing Association: 9 years building deep relationships with the CIA (MACH37 accelerator incubation), NSA (former officials now advisors), DoD, and the State Department
Free and Safe in Cyberspace: 11 conferences, 3 continents, 130+ speakers including First US Cyber Coordinator, UN Special Rapporteur, European Data Protection Supervisor, Jaan Tallinn (our later funder). All aggregated around my vision for a Trustless Computing Certification Body.
Coalition: 40+ advisors from UN/NSA/WEF/Yale/Princeton, 10 NGO partners, 356-page Strategic Memo
Strategic assets: Mar-a-Lago area connections through 6 years of Miami residence plus personal relationships in Palm Beach; Vatican network (3 years with the secretary of the main Vatican AI organization, organizing April–June Rome meetings); AI lab access (a direct relationship with the global head of policy at one of the top 3 labs, plus 25+ frontier lab officials).
Our Strategic Memo (356 pages) contains 170+ pages of psychological profiling, with 667+ sources per target. With $200–400K, we would hire 2 operators to deploy this at scale.
We only need 2 key influencers (Hassabis/Amodei) to then use our memo to convince the others.
The constraint isn’t access; it’s operational capacity. We’ve proven we can open doors; now we need people to execute.
Best, Rufo