This is a very valuable clarification, and I agree[1]. I really appreciate your focus on policy feasibility and concrete approaches.
From my own experience: most people in the regulatory space outside AI Safety either lack sufficient background knowledge about timelines and existential risk to meaningfully engage with these concerns and commit to enforceable measures[2], or, if they have some familiarity, become more skeptical because of the lack of consensus on probabilities, timelines, and definitions.
I will be following this initiative closely and promoting it to the best of my ability.
EDIT: I’ve signed with my institutional email and title.
For transparency: I knew about the red lines project before it was published. Furthermore, Charbel / CeSIA's past work has shifted my own views on policy and international cooperation.
I expect that the popularity of IABIED and greater involvement of AI Safety figures in policy will shift the Overton window on these issues.