This is fantastic work. There’s also something about this post that feels deeply empathic and humble, in ways that are hard to articulate but seem important for (some forms of) effective policymaker engagement.
A few questions:
Are you planning to do any of this in the US?
What have your main policy proposals or “solutions” been? I think it’s becoming a lot more common for me to encounter policymakers who understand the problem (at least a bit) and are more confused about what kinds of solutions/interventions/proposals are needed (both in the short-term and the long-term).
Can you say more about what kinds of questions you encounter when describing loss of control, as well as what kinds of answers have been most helpful? I’m increasingly of the belief that getting people to understand “AI has big risks” is less important than getting people to understand “some of the most significant risks come from this unique thing called loss of control that you basically don’t really have to think about for other technologies, and this is one of the most critical ways in which AI is different than other major/dangerous/dual-use technologies.”
Did you notice any major differences between parties? Did you change your approach based on whether you were talking to Conservatives or Labour? Did they have different perspectives or questions? (My own view is that people on the outside probably overestimate the extent to which there are partisan splits on these concerns—they’re so novel that I don’t think the mainstream parties have really entrenched themselves in different positions. But would be curious if you disagree.)
Sub-question: Was there any sort of backlash against Rishi Sunak’s focus on existential risks? Or the UK AI Security Institute? In the US, it’s somewhat common for Republicans to assume that things Biden did were bad (and for Democrats to assume that things Trump does are bad). Have you noticed anything similar?
Thank you for your kind words and thoughtful questions; I really appreciate it.
US advocacy: We’ve already had some meetings with US Congressional offices. We are currently growing our team in the US and expect to ramp up our efforts in the coming months.
Policy proposals, in a nutshell: We advocate for the establishment of an independent AI regulator to oversee, regulate, and enforce safety standards for frontier AI models. We support the introduction of a licensing regime for frontier AI development, comprising: a training license for models exceeding 10^25 FLOP; a compute license, which would introduce hardware tracking and know-your-customer (KYC) requirements for cloud service providers exceeding 10^17 FLOP/s; and an application license for developers building applications that enhance the capabilities of a licensed model. The regulator would have the authority to prohibit specific high-risk AI behaviours, including: the development of superintelligent AI systems; unbounded AI systems (i.e., those for which a robust safety case cannot be made); AI systems accessing external systems or networks; and recursive self-improvement.
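To put those two thresholds in perspective, here is a rough, purely illustrative back-of-the-envelope calculation; only the 10^25 FLOP and 10^17 FLOP/s figures come from the proposal above, and the script itself is just a sketch:

```python
# Rough arithmetic relating the two licensing thresholds above.
# Only the 1e25 FLOP and 1e17 FLOP/s figures come from the proposal;
# everything else is illustrative.

TRAINING_LICENSE_FLOP = 1e25    # total training compute triggering a training license
COMPUTE_LICENSE_FLOPS = 1e17    # sustained FLOP/s triggering a compute license

# How long would a cluster running exactly at the compute-license threshold
# need to run to complete a training run at the training-license threshold?
seconds = TRAINING_LICENSE_FLOP / COMPUTE_LICENSE_FLOPS  # 1e8 seconds
days = seconds / 86_400                                  # ~1,157 days
years = days / 365.25                                    # ~3.2 years

print(f"{seconds:.0e} s ≈ {days:,.0f} days ≈ {years:.1f} years")
```

In other words, the two thresholds are pitched at roughly the same frontier scale: hardware sitting just over the compute-license line would need years of continuous use to complete a training run just over the training-license line.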
Questions on loss of control: I completely agree on the importance of emphasising loss of control to explain why AI differs from other dual-use technologies, and why regulation must address not only the use of the technology but also its development. I wouldn’t say there’s a single, recurring question that arises in the same form whenever we discuss loss of control. However, I have sometimes observed confusion stemming from the underlying belief that “Well, if the AI system behaves a certain way, it must be because an engineer programmed it to do that.” This shows the importance of explaining that AI systems are no longer traditional software coded line by line by engineers. The argument that this technology is “grown, not built”* helps lay the groundwork for understanding loss of control when it is introduced.
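As a purely illustrative sketch of that “grown, not built” contrast (the toy spam-filter example and every identifier below are hypothetical, not something we actually present in meetings):

```python
# Toy contrast between software that is "built" (an engineer writes the rule)
# and software that is "grown" (the rule emerges from data and training).
# All names and data here are hypothetical.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Built": the behaviour is exactly what a human wrote down.
def spam_filter_built(email: str) -> bool:
    banned_phrases = ["free money", "click here"]  # rule chosen by an engineer
    return any(phrase in email.lower() for phrase in banned_phrases)

# "Grown": the behaviour comes from weights learned during training;
# no engineer wrote the rule the model ends up applying.
emails = ["free money now", "meeting at 3pm", "click here to win", "quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy data)

spam_filter_grown = make_pipeline(CountVectorizer(), LogisticRegression())
spam_filter_grown.fit(emails, labels)

print(spam_filter_built("Click here for FREE money"))            # True, by the explicit rule
print(spam_filter_grown.predict(["click here for free money"]))  # very likely [1] on this toy data
```

No toy this small captures a frontier model, of course, but it makes vivid that the second filter’s behaviour was selected rather than specified, which is where the loss-of-control conversation can start.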
Differences between parties: Had I been asked to bet before the first meeting, I would certainly have expected significant differences between parties in their approaches (or at least that meetings would feel noticeably different depending on the party involved). In practice, that hasn’t generally been the case. Put simply, the main variations can arise from two factors: the individual’s party affiliation and their personal background (e.g. education, familiarity with technology, committee involvement, etc.). In my view, the latter has been the more important factor. Whether a parliamentarian has previously worked on regulation, contributed to legislation like the GDPR in the European Parliament, or has a technical background often makes a bigger difference. I believe this is very much in line with your view that we tend to overestimate the extent of partisan splits on new issues.
Labour v. Conservatives: Our view of the problem, the potential solutions, and our ask of parliamentarians remain consistent across parties. In meetings with both Labour and the Conservatives, we’ve noted their recognition of the risks posed by this technology. The Conservatives established the AI Safety Institute (renamed the AI Security Institute by the current government). Labour’s DSIT Secretary of State, Peter Kyle, acknowledged that a framework of voluntary commitments is insufficient and pledged to place the AISI on a statutory footing. The key difference in our conversations with them is the natural one: “The government/you have committed to putting these voluntary commitments on a statutory footing. We’d like to see the government/you deliver on this commitment.”
Did they have different perspectives or questions? The answer is the same as above: the main differences were driven by individual background rather than party affiliation.
Was there any sort of backlash against Rishi Sunak’s focus on existential risks? Or the UK AI Security Institute? You mention that “in the US, it’s somewhat common for Republicans to assume that things Biden did were bad (and for Democrats to assume that things Trump does are bad).” This doesn’t apply to the UK in this specific context, which surprised me. It’s rare for the opposition to acknowledge a government initiative as positive and seek to build on it. Yet that’s exactly what happened with AISI: Labour’s DSIT Secretary of State, Peter Kyle, did not scrap the institute but instead pledged to empower it by placing it on a statutory footing during the campaign for the July 2024 elections.
When it comes to extinction risk from AI, the Labour government is currently more focused on how AI can drive economic growth and improve public services. Loss of control is not at the core of their narrative at the moment. However, I believe this is different from a backlash (at least if we mean a strong negative reaction against the issue or an effort to bury it). Notably, Labour’s DSIT Secretary of State, Peter Kyle, referred to the risk of losing control of AI (particularly AGI) as “catastrophic” earlier this year. So, while there is currently more emphasis on how AI can drive growth than on mitigating risks from advanced AI, those risks are still acknowledged, and there is at least some common ground in recognising the problem.
*Typo corrected, thanks for spotting!
Can you clarify exactly what argument you used for why the extinction risk is much higher than most (all?) other things vying for their attention, such as asteroid impacts, WMDs, etc.?
I think the AI critic community has a MASSIVE PR problem.
There has been a deluge of media coming from the AI world in the last month or so. It seems the head of every major lab has been on a full-scale PR campaign, touting the benefits and massively playing down the risks of future AI development. (If I were a paranoid person, I would say that this feels like a very intentional move to lay the groundwork for some major announcement coming in the foreseeable future…)
This has got me thinking a lot about what AI critics are doing, and what they need to be doing. The gap between how the community communicates and how the average person receives and understands information is huge. The general tone I’ve noticed in the few media appearances I’ve seen recently from AI critics is simultaneously overly technical and apocalyptic, which makes it very hard for the layperson to digest and care about in any tangible way.
Taking the message directly to policymakers is all well and good, but as you mentioned, they are extremely pressed for resources, and if the constituency is not banging at their door about an issue, it’s unlikely they can justify the time/resource expenditure. Especially on a topic that they don’t fully understand.
There needs to be a sizeable public relations and education campaign aimed directly at the general public to help them understand the dangers of what is being built and what they can do about it. Because at the moment I can tell you that outside of certain circles, absolutely no one understands or cares.
The movement needs a handful of very well media-trained figureheads making the rounds of every possible media outlet, from traditional media to podcasts to TikTok. The message needs to be simplified and honed into sound bites that anyone can understand, and I think there needs to be a clear call to action that the average person can engage with.
IMO this should be a primary focus of any person or organization concerned with mitigating the risks of AI. A major amount of time and money needs to be put into this.
It doesn’t matter how right you are: if no one understands or cares about what you’re talking about, you’re not going to convince them of anything…