[Crosspost] Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint


Thanks to @Leon Lang from the University of Amsterdam for proposing this post and reviewing the draft. Any views expressed in this post are not necessarily his.

We (the Existential Risk Observatory) have organized a public debate (recording here, photos here) in Amsterdam, the Netherlands, with the purpose of creating more awareness of AI existential risk among policymakers and other leading voices in the societal debate. We want to share our experience in this post because we think others might be able to follow a similar approach, and because we expect this to have significant xrisk-reducing net effects.

Goals

Our high-level goal is to reduce existential risk, especially from AGI, by informing the public debate. We think more awareness will be net positive because of increased support for risk-reducing regulation and increased attention to AI safety work (more talent, more funding, more institutes working on the topic, more diverse actors, and more priority).

For this debate, we focused on:

  1. Informing leaders of the societal debate (such as journalists, opinion makers, scientists, and artists) about AI existential risk. This should help to widen the Overton window, increase the chance that this group of people will inform others, and therefore raise AI existential risk awareness in society.

  2. Informing politicians about AI existential risk, to increase the chance that risk-reducing policies will get enacted.

Strategy

We used a debate setup with AI existential risk authorities and Members of Parliament (MPs). The strategy was that the MPs would be influenced by the experts in this setting. We had already contacted MPs and MP assistants beforehand and had met with several. To be invited to meetings with MPs and/or MP assistants, we think that having published in mainstream media, having a good network, and having good policy proposals are all helpful. If budget is available, hiring a lobbyist and a PR person can both help as well (we work with a freelance senior PR person and a mid-level volunteer lobbyist).

Two Dutch MPs attended our debate. For them, advantages might include getting (media) attention and informing themselves. Stuart Russell agreed to be our keynote speaker (remotely), and several other panelists with solid AI existential risk expertise attended as well. Additionally, we found a moderator with existing AI existential risk knowledge, which was important in making sure the debate went well. Finally, we found a leading debate center in Amsterdam (Pakhuis de Zwijger) willing to host the debate. We promoted the debate on our own social media and through the venue, were mentioned in the prominent Dutch Future Affairs newsletter, and advertised the event in EA WhatsApp groups. This was sufficient to sell out the largest debate hall in Amsterdam (320 seats; tickets were free).

We organized this event in the Netherlands mainly because this is where we are most active. However, since the event was partly aimed at getting policy implemented, organizers should consider holding events such as this debate where policy is most urgently needed (Washington, DC, but perhaps also Beijing, Brussels, or London).

Program

We started with an introductory talk, after which we played some documentary fragments. After this introduction, our main guest Stuart Russell gave a talk on AI existential risk, followed by an audience Q&A. We closed with a five-person panel consisting of Queeny Rajkowski (MP for the VVD, the largest governing party, center-right, with a portfolio including Digitization and Cybersecurity), Lammert van Raan (MP for the Party for the Animals, a medium-sized left party focusing on animal rights and climate), Mark Brakel (Director of Policy at FLI), Nandi Robijns (AI MSc, Ministry of the Interior and Kingdom Relations), and Tim Bakker (AI PhD candidate at the University of Amsterdam). In the panel, our moderator asked questions that we had prepared and shared with the panel members in advance, and gave the audience the chance to ask questions as well.

Costs

Our budget was below €2,000 for the whole event, most of which went to the venue. The cost in time was roughly four weeks of full-time work for two senior team members (spread out over about three months).

Effects

Our debate generally went well, and we feel it has effectively opened the Overton window for most who attended. Both MPs took the initiative to arrange a follow-up meeting with us and the other panelists, and one of them proposed “what politics should do” as a topic. We generally have the impression that they were, at least directly after the debate, quite open to AI existential risk and to talks about policy solutions. That does not mean, however, that they are now likely to implement solutions themselves: they are extremely busy, and for them, this is just one of fifty issues. They probably do not yet have a thorough understanding of AI existential risk. What they might do, however, is make time for more meetings with experts and use our input in drafting parliamentary questions, motions, etc. on the topic, which could lead to the implementation of risk-reducing policy. In the Netherlands, parliament had already passed a motion (before our debate) asking for more AI safety and AI alignment research.

Besides policy, an important effect of such a debate could be to further open the Overton window in the media, resulting in more frequent and more accurate AI existential risk reporting. One journalist who attended the debate said that AI existential risk “is now suddenly a thing.” Another journalist interviewed Stuart Russell after our debate and will probably publish this interview in a leading Dutch newspaper soon. In the EA WhatsApp groups, the tone was very positive, with one person saying “it was surreal to see politicians take AIS seriously.” Another said the event “was a real motivation booster.”

Risks

If one attempts to organize a public debate and errors are made, the most likely outcome is simply that the impact will be smaller: journalists will not report on it, speakers will not attend, the organizer will not be able to get a leading venue, the audience will be small, policy will not get passed, etc. This is bad, but not negative relative to the counterfactual of not having a debate at all. We therefore recommend erring on the side of taking too much action rather than too little (as long as you cannot spend your effort more effectively elsewhere). General professionalism is, however, important when organizing a public event.

Multiple attendees gave us the feedback that they would have preferred more of a debate; in our case, most panelists agreed with each other. While it would have been easy to create such a debate by inviting an AI xrisk skeptic, we chose not to, since it might have reduced our chances of achieving our goals, and the opposing position can already be heard in many other debates, articles, etc. We think this was mostly the right decision in our case. A debate less focused on influencing policy might get more traction by also inviting prominent skeptics; this might, however, also increase risk.

Conclusion

We think that organizing an AI existential risk debate (recording) is a promising intervention to raise AI existential risk awareness among leaders of the societal debate. We also think this intervention is a good step toward getting existential risk-reducing policy passed. We are open to providing advice to others who want to work on this intervention.
