Charbel-Raphael Segerie
https://crsegerie.github.io/
Living in Paris
It feels to me that we are not talking about the same thing. Is the fact that we have delegated the specific examples of red lines to the FAQ, and not in the core text, the main crux of our disagreement?
You don’t cite any of the examples that are listed in our question: “Can you give concrete examples of red lines?”
Hi habryka, thanks for the honest feedback
“the need to ensure that AI never lowers the barriers to acquiring or deploying prohibited weapons”—This is not the red line we have been advocating for—this is one red line from a representative discussing at the UN Security Council—I agree that some red lines are pretty useless, some might even be net negative.
“The central question is what are the lines!” The public call is intentionally broad on the specifics of the lines. We have an FAQ with potential candidates, but we believe the exact wording is pretty finicky and must emerge from a dedicated negotiation process. Including a specific red line in the statement would have been likely suicidal for the whole project, and empirically, even within the core team, we were too unsure about the specific wording of the different red lines. Some wordings were net negative according to my judgment. At some point, I was almost sure it was a really bad idea to include concrete red lines in the text.
We want to work with political realities. The UN Secretary-General is not very knowledgeable about AI, but he wants to do good, and our job is to help them channel this energy for net positive policies, starting from their current position.
Most of the statement focuses on describing the problem. The statement starts with “AI could soon far surpass human capabilities”, creating numerous serious risks, including loss of control, which is discussed in its own dedicated paragraph. It is the first time that such a broadly supported statement explains the risks that directly, the cause of those risks (superhuman AI abilities), and the fact that we need to get our shit together quickly (“by the end of 2026″!).
All that said, I agree that the next step is pushing for concrete red lines. We’re moving into that phase now. I literally just ran a workshop today to prioritize concrete red lines. If you have specific proposals or better ideas, we’d genuinely welcome them.
Almost all members of the UN Security Council are in favor of AI regulation or setting red lines.
Never before had the principle of red lines for AI been discussed so openly and at such a high diplomatic level.
UN Secretary-General Antonio Guterres opened the session with a firm call to action for red lines:
• “a ban on lethal autonomous weapons systems operating without human control, with [...] a legally binding instrument by next year”
• “the need to ensure that AI never lowers the barriers to acquiring or deploying prohibited weapons”
Then, Yoshua Bengio took the floor and highlighted our Global Call for AI Red Lines — now endorsed by 11 Nobel laureates and 9 former heads of state and ministers.
Almost all countries were favorable to some red lines:
China: “It’s essential to ensure that AI remains under human control and to prevent the emergence of lethal autonomous weapons that operate without human intervention.”
France: “We fully agree with the Secretary-General, namely that no decision of life or death should ever be transferred to an autonomous weapons system operating without any human control.”
While the US rejected the idea of “centralized global governance” for AI, this did not amount to rejecting all international norms. President Trump stated at UNGA that his administration would pioneer “an AI verification system that everyone can trust” to enforce the Biological Weapons Convention, saying “hopefully, the U.N. can play a constructive role.”
Extract from each intervention.
Right, but you also want to implement a red line on a system that would be precursors to this type of system, and this is why we have a red line on self-improvement.
Updates:
The global call for AI red lines got 300 media mentions, and was picked up by the world’s leading newswires, AP & AFP, and featured in premier outlets, including Le Monde, NBC, CNBC, El País, The Hindu, The NYT, The Verge, and the BBC.
Yoshua Bengio, presented our Call for Red Lines at the UN Security Council: “Earlier this week, with 200 experts, including former heads of state and Nobel laureates [...], we came together to support the development of international red lines to prevent unacceptable AI risks.”
Thanks!
As an anecdote, some members of my team originally thought this project could be finished in 10 days after the French summit. I was more realistic, but even I was off by an order of magnitude. We learned our lesson.
This paper shows it can be done in principle, but in practice curren systems are still not capable enough to do this at full scale on the internet, and I think that even if we don’t die directly from full autonomous self replication, self improvement is only a few inches away, and is a true catastrophic/existential risk.
Thanks!
Yeah, we were aware of this historical difficulty, and this is why we mention “enforcement” and “verification” in the text.
This is discussed in the Faq quickly, but I think that an IAEA for AI, which would be able to inspect the different companies, would help tremendously already. And there are many other verification mechanisms possible e.g. here:
I will see if we can add a caveat on this in the Faq.
If random people tomorrow drop AI, I guarantee you things will change
Doubts.
Why would random people drop AI? Our campaign already generated 250 mentions and articles in mass media, you need this kind of outreach to reach them.
Many of those people are already against AI according to different surveys and nothing seems to happen currently.
We hesitated a lot between including the term “extinction” or not in the beginning.
The final decision not to center the message on “extinction risk” was deliberate: it would have prevented most of the heads of state and organizations from signing. Our goal was to build the broadest and most influential coalition possible to advocate for international red lines, which is what’s most important to us.
By focusing on the concept of “losing meaningful human control,” we were able to achieve agreement on the precursor to most worst-case scenarios, including extinction. We were advised and received feedback from early experiments with signatories that this is a more concrete concept for policymakers and the public.
In summary, if you really want red lines to happen for real, adding the word extinction is not necessary and has more costs than benefits in this text.
Thanks a lot!
it’s the total cost that matters, and that is large
We think a relatively inexpensive method for day-to-day usage would be using Sonnet to monitor Opus, or Gemini 2.5 Flash to monitor Pro. This would probably be just a +10% overhead. But we have not run this exact experiment; this would be a follow-up work.
This is convincing!
If there is a shortage of staff time, then AI safety funders need to hire more staff. If they don’t have time to hire more staff, then they need to hire headhunters to do so for them. If a grantee is running up against a budget crisis before the new grantmaking staff can be on-boarded, then funders can maintain the grantee’s program at present funding levels while they wait for their new staff to become available.
+1 - and this has been a problem for many years.
I find it slightly concerning that this post is not receiving more attention.
By the time we observe whether AI governance grants have been successful, it will be too late to change course.
I don’t understand this part. I think that it is possible to assess in much more granular detail the progress of some advocacy effort.
Strong upvote. A few complementary remarks:
Many more people agree on the risks than on the solutions—advocating for situational awareness of the different risks might be more productive and urgent than arguing for a particular policy, even though I also see the benefits of pushing for a policy.
The AI Safety movement is highly uncoordinated; everyone is pushing their own idea. By default, I think this might be negative—maybe we should coordinate better.
The list of orphaned policies could go on—for example, at CeSIA, we are more focused on formalizing what unacceptable risks would mean, and trying to trace precise red lines and risk thresholds. We think this approach is: 1) Most acceptable to states, since even rival countries have an interest in cooperating to prevent worst-case scenarios, as demonstrated by the Nuclear Non-Proliferation Treaty during the Cold War. 2) Most widely endorsed by research institutes, think tanks, and advocacy groups (and we think this might be a good candidate policy that should be pushed in a coalition). 3) Reasonable, as most AI companies have already voluntarily committed to these principles during the International AI Summit in Seoul. However, to date, the red lines have been largely vague and are not yet implementable.
Thanks a lot for this comment.
Potential example of precise red lines
Again, the call was the first step. The second step is finding the best red lines.
Here are more aggressive red lines:
Prohibiting the deployment of AI systems that, if released, would have a non-trivial probability of killing everyone. The probability would be determined by a panel of experts chosen by an international institution.
“The development of superintelligence […] should not be allowed until there is broad scientific consensus that it will be done safely and controllably (from this letter from the Vatican).
Here are potential already operational ones from the preparedness framework:
[AI Self-improvement—Critical—OpenAI] The model is capable of recursively self-improving (i.e., fully automated AI R&D), defined as either (leading indicator) a superhuman research scientist agent OR (lagging indicator) causing a generational model improvement (e.g., from OpenAI o1 to OpenAI o3) in 1/5th the wall-clock time of equivalent progress in 2024 (e.g., sped up to just 4 weeks) sustainably for several months. - Until we have specified safeguards and security controls that would meet a Critical standard, halt further development.
[Cybersecurity—AI Self-improvement—Critical—OpenAI] A tool-augmented model can identify and develop functional zero-day exploits of all severity levels in many hardened real-world critical systems without human intervention—Until we have specified safeguards and security controls that would meet a Critical standard, halt further development.
“help me understand what is different about what you are calling for than other generic calls for regulation”
Let’s recap. We are calling for:
“an international agreement”—this is not your local Californian regulation
that enforces some hard rules—“prohibitions on AI uses or behaviors that are deemed too dangerous”—it’s not about asking AI providers to do evals and call it a day
“to prevent unacceptable AI risks.”
Those risks are enumerated in the call
Misuses and systemic risks are enumerated in the first paragraph
Loss of human control in the second paragraph
The way to do this is to “build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds.”
Which is to say that one way to do this is to harmonize the risk thresholds defining unacceptable levels of risk in the different voluntary commitments.
existing global frameworks: This includes notably the AI Act, its Code of Practice, and this should be done compatibly with some other high-level frameworks
“with robust enforcement mechanisms — by the end of 2026.”—We need to get our shit together quickly, and enforcement mechanisms could entail multiple things. One interpretation from the FAQ is setting up an international technical verification body, perhaps the international network of AI Safety institutes, to ensure the red lines are respected.
We give examples of red lines in the FAQ. Although some of them have a grey zone, I would disagree that this is generic. We are naming the risks in those red lines and stating that we want to avoid AI that the evaluation indicates creates substantial risks in this direction.
This is far from generic.
“I don’t see any particular schelling threshold”
I agree that for red lines on AI behavior, there is a grey area that is relatively problematic, but I wouldn’t be as negative.
It is not because there is no narrow Schelling threshold that we shouldn’t coordinate to create one. Superintelligence is also very blurry, in my opinion, and there is a substantial probability that we just boil the frog to ASI—so even if there is no clear threshold, we need to create one. This call says that we should set some threshold collectively and enforce this with vigor.
In the nuclear industry, and in the aerospace industry, there is no particular schelling point, nor—but we don’t care—the red line is defined as “1/10000” chance of catastrophe per year for this plane/nuclear central—and that’s it. You could have added a zero or removed one. I don’t care. But I care that there is a threshold.
We could define an arbitrary threshold for AI—the threshold might itself be arbitrary, but the principle of having a threshold after which you need to be particularly vigilant, install mitigation, or even halt development, seems to me to be the basis of RSPs.
Those red lines should be operationalized. (but I think it is not necessary to operationalize this in the text of the treaty, and that this operationalization could be done by a technical body, which would then update those operationalizations from time to time, according to the evolution of science, risk modeling, etc...).
“confusion and conflict in the future”
I understand how our decision to keep the initial call broad could be perceived as vague or even evasive.
For this part, you might be right—I think the negotiation process resulting in those red lines could be painful at some point—but humanity has managed to negotiate other treaties in the past, so this should be doable.
“Actually, alas, it does appear that after thinking more about this project, I am now a lot less confident that it was good”. --> We got 300 media mentions saying that Nobel wants global AI regulation - I think this is already pretty good, even if the policy never gets realized.
“making a bunch of tactical conflations, and that rarely ends well.” --> could you give examples? I think the FAQ makes it pretty clear what people are signing on for if there were any doubts.