For context on how your proposed 10^24 FLOPs threshold compares to existing regulation:
EU AI Act: presumes a GPAI model poses systemic risk above 10^25 FLOPs of training compute
California SB 53: defines “frontier model” at more than 10^26 FLOPs (including fine-tuning/RLHF)
Your proposal is an order of magnitude or two below what current regulations consider “frontier” or “systemic risk.”
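To make the gap concrete, here’s a quick back-of-the-envelope sketch in Python. The 6ND training-compute approximation and the ~20-tokens-per-parameter budget are standard scaling heuristics I’m assuming for illustration, not numbers from your post:

```python
# Back-of-the-envelope comparison of the thresholds above. The 6*N*D
# training-compute approximation and the ~20-tokens-per-parameter budget
# are common scaling heuristics assumed here, not figures from the post.

PROPOSED  = 1e24   # proposed threshold (FLOPs)
EU_AI_ACT = 1e25   # EU AI Act systemic-risk presumption (FLOPs)
SB_53     = 1e26   # California SB 53 "frontier model" definition (FLOPs)

print(f"EU AI Act threshold is {EU_AI_ACT / PROPOSED:.0f}x the proposal")  # 10x
print(f"SB 53 threshold is {SB_53 / PROPOSED:.0f}x the proposal")          # 100x

# What model scale does 1e24 FLOPs roughly capture? Assuming
# compute ~ 6 * N * D with D = 20 * N (a Chinchilla-style budget):
#   6 * N * (20 * N) = 1e24  =>  N = sqrt(1e24 / 120) ~ 9.1e10
n_params = (PROPOSED / 120) ** 0.5
print(f"~{n_params / 1e9:.0f}B parameters at a compute-optimal token budget")
```

In other words, the proposal sits an order of magnitude below the EU line and two below SB 53, and under those (assumed) heuristics it would already capture models around the ~90B-parameter scale.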
So you may receive pushback along the lines of “Current regulations don’t even consider this frontier; this is outside the policy Overton window.”
Nevertheless, we’re also seeing some regulatory backtracking (the EU Omnibus simplification, Trump’s EO undermining SB 53), and I genuinely think this may have the opposite effect: people may conclude that “regulators don’t know what they’re doing; we need a stricter halt while they figure it out, not thresholds they’re already second-guessing.”
I think these thresholds were set by political feasibility rather than safety analysis, and I’d love to see this proposal succeed. Of course, this requires making the case that the risk is imminent enough to justify going well below where any regulator has dared to set a line.
Thanks for adding that context; I think it’s helpful!
In some sense, I hope that this post can be useful for current regulators who are thinking about where to put thresholds (in addition to future regulators who are doing so in the context of an international agreement).
“I think these thresholds were set by political feasibility rather than safety analysis”
Yep, I think this is part of where I hope this post can provide value: we focused largely on the safety analysis part.
By the way, has this been posted anywhere else apart from LW and arXiv? I’d circulate on LinkedIn too (yes, I’m serious, unfortunately) and tag key people in policy and AI Governance like Kevin Fumai or Luiza Jarowski (large following) or Kai Zenner / folks at the AI Office (I know some). Let me know if I can repost this elsewhere :).
Thanks for the nudge! Here’s a LinkedIn post that you’re welcome to share.