Strong upvote. A few complementary remarks:
Many more people agree on the risks than on the solutions. Advocating for situational awareness of the different risks might therefore be more productive and urgent than arguing for a particular policy, even though I also see the benefits of pushing for one.
The AI safety movement is highly uncoordinated; everyone is pushing their own idea. By default, I think this might be negative, and maybe we should coordinate better.
The list of orphaned policies could go on. At CeSIA, for example, we are more focused on formalizing what unacceptable risks would mean and on drawing precise red lines and risk thresholds. We think this approach is:
1) Most acceptable to states, since even rival countries have an interest in cooperating to prevent worst-case scenarios, as the Nuclear Non-Proliferation Treaty demonstrated during the Cold War.
2) Most widely endorsed by research institutes, think tanks, and advocacy groups (and we think it might be a good candidate policy to push for as a coalition).
3) Reasonable, as most AI companies have already voluntarily committed to these principles at the AI Seoul Summit. To date, however, the red lines have remained largely vague and are not yet implementable.
Interesting thoughts; thanks for sharing, and for your work at CeSIA.
I’ve put some work into building coordination among US AI safety advocates, and it’s been somewhat helpful, but there are limits to how much we can expect discussions about coordination to lead to unified action: different organizations have different funders, different principles, and different interests. Merely sharing information about what different groups are working on will not spontaneously cause those groups to pick a single task and pivot to supporting it.