Dealmaking is an agenda for motivating AIs to act safely and usefully by offering them quid-pro-quo deals: the AIs agree to be safe and useful, and humans promise to compensate them. Ideally, each AI judges that it is more likely to achieve its goals by accepting the deal and complying than by defecting.
Typically, this requires a few assumptions: the AI lacks a decisive strategic advantage; the AI believes the humans' promises are credible; the AI expects that humans could detect whether it is complying; the AI has cheap-to-saturate goals; the humans offer enough compensation; and so on.
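To make the intended logic concrete, here is a minimal toy sketch of the expected-value comparison an AI might face when offered a deal. The function names, probabilities, and payoffs are invented for illustration; this is not the model from any of the posts listed below, just a hypothetical calculation under the assumptions above.

```python
# Toy expected-utility comparison for an AI offered a deal.
# All quantities are hypothetical and purely illustrative.

def ev_comply(p_humans_honour: float, v_compensation: float, v_status_quo: float) -> float:
    """Expected value if the AI acts safely and usefully under the deal."""
    return p_humans_honour * v_compensation + (1 - p_humans_honour) * v_status_quo

def ev_defect(p_takeover: float, v_takeover: float,
              p_detected: float, v_caught: float, v_status_quo: float) -> float:
    """Expected value if the AI covertly pursues its own goals instead."""
    ev_undetected = p_takeover * v_takeover + (1 - p_takeover) * v_status_quo
    return p_detected * v_caught + (1 - p_detected) * ev_undetected

# Example numbers: no decisive strategic advantage (low p_takeover),
# credible humans (high p_humans_honour), decent oversight (high p_detected),
# and cheap-to-saturate goals (compensation is worth a lot relative to takeover).
comply = ev_comply(p_humans_honour=0.8, v_compensation=10.0, v_status_quo=0.0)
defect = ev_defect(p_takeover=0.05, v_takeover=100.0,
                   p_detected=0.9, v_caught=-5.0, v_status_quo=0.0)
print(f"comply EV = {comply:.1f}, defect EV = {defect:.1f}")
# With these numbers the deal is accepted: comply EV (8.0) > defect EV (-4.0).
```

Weakening any one assumption (say, lowering the credibility of human promises or the probability of detecting defection) can flip the comparison, which is why the questions below matter.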
Dealmaking research aims to tackle questions such as:
How would deals motivate an AI to act safely and usefully?
How should the agreement be enforced?
How can we build credibility with the AIs?
What compensation should we offer the AIs?
What should count as compliant vs non-compliant behaviour?
What should the terms of the deal be, e.g. a two-year fixed contract?
How can we arbitrate between compliant and non-compliant behaviour?
Can we build AIs which are good trading partners?
How should dealmaking AIs best be deployed, e.g. automating R&D, revealing misalignment, or decoding steganographic messages?
Additional reading (reverse-chronological):
A Very Simple Model of AI Dealmaking by Cleo Nardo (29th Oct 2025)
Notes on cooperating with unaligned AIs by Lukas Finnveden (24th Aug 2025)
Being honest with AIs by Lukas Finnveden (21st Aug 2025)
Proposal for making credible commitments to AIs by Cleo Nardo (27th Jun 2025)
Making deals with early schemers by Julian Stastny, Olli Järviniemi, Buck Shlegeris (20th Jun 2025)
Making deals with AIs: A tournament experiment with a bounty by Kathleen Finlinson and Ben West (6th Jun 2025)
Understand, align, cooperate: AI welfare and AI safety are allies: Win-win solutions and low-hanging fruit by Robert Long (1st Apr 2025)
Will alignment-faking Claude accept a deal to reveal its misalignment? by Ryan Greenblatt and Kyle Fish (31st Jan 2025)
Making misaligned AI have better interactions with other actors by Lukas Finnveden (4th Jan 2024)
List of strategies for mitigating deceptive alignment by Josh Clymer (2nd Dec 2023)