[Question] What attempts have been made at global coordination around AI safety?

For instance, might there be a maintained list of attempts at global agreement, whether public or private?

One unenforceable but highly endorsed example is the Future of Life Institute's open letter on AI, which has now attracted ~8,000 signatures from AI safety researchers, AGI researchers, and other AI-adjacent technologists. It is not immediately clear what percentage of individuals concerned with AI safety this represents, but at a cursory glance it appears to be the largest consensus to date. The letter is merely an agreement to address AI safety sooner rather than later, so I am interested to hear of any agreements that address AI safety policy itself, even if the agreement is considered largely unsuccessful.

Please feel free to answer with personal views on global coordination.
