Thanks for adding that context, I think it’s helpful!
In some sense, I hope that this post can be useful for current regulators who are thinking about where to put thresholds (in addition to future regulators who are doing so in the context of an international agreement).
> I think these thresholds were set by political feasibility rather than safety analysis
Yep, this is part of where I hope this post can provide value: we focused largely on the safety analysis.
By the way, has this been posted anywhere else apart from LW and arXiv? I’d circulate it on LinkedIn too (yes, I’m serious, unfortunately) and tag key people in policy and AI governance like Kevin Fumai or Luiza Jarowski (large following) or Kai Zenner / folks at the AI Office (I know some). Let me know if I can repost this elsewhere :).
Thanks for the nudge! Here’s a LinkedIn post that you are welcome to share.