This is fantastic work. There’s also something about this post that feels deeply empathic and humble, in ways that are hard to articulate but seem important for (some forms of) effective policymaker engagement.
A few questions:
Are you planning to do any of this in the US?
What have your main policy proposals or “solutions” been? I’m encountering more and more policymakers who understand the problem (at least a bit) but are confused about what kinds of solutions/interventions/proposals are needed (both in the short term and the long term).
Can you say more about the kinds of questions you encounter when describing loss of control, and about what kinds of answers have been most helpful? I’m increasingly of the belief that getting people to understand “AI has big risks” matters less than getting them to understand “some of the most significant risks come from this unique thing called loss of control, which you basically don’t have to think about for other technologies, and this is one of the most critical ways in which AI differs from other major/dangerous/dual-use technologies.”
Did you notice any major differences between parties? Did you change your approach depending on whether you were talking to Conservatives or Labour? Did they have different perspectives or questions? (My own view is that people on the outside probably overestimate the extent to which there are partisan splits on these concerns; they’re so novel that I don’t think the mainstream parties have really entrenched themselves in different positions. But I’d be curious if you disagree.)
Sub-question: Was there any sort of backlash against Rishi Sunak’s focus on existential risks? Or against the UK AI Security Institute? In the US, it’s somewhat common for Republicans to assume that things Biden did were bad (and for Democrats to assume that things Trump does are bad). Have you noticed anything similar?
There’s a video version of AI2027 that is quite engaging/accessible. Over 1.5M views so far.
Seems great. My main critique is that the “good ending” seems to assume alignment is rather easy to figure out, though admittedly that might be more of a critique of AI2027 itself than of the way the video portrays it.