Yeah, the thing the ‘scaling extrapolation’ view doesn’t take into account is that as soon as AI R&D agents make radical speed-ups to algorithmic research possible, the trendlines for algorithmic progress should be projected to steepen. How much, and for how long before slow-downs are hit? That’s unclear. I think there is at least some substantial probability that no slow-downs are hit before full AGI, and a smaller but still considerable probability that the improvement cycle rushes onward at high speed past that point to ASI.
This should be assumed to potentially involve dramatic gains both in peak capabilities and in the efficiency and speed of training and inference. If so, then compute governance becomes completely irrelevant for blocking the creation of dangerously powerful AI. It can still help put limits on the amount of inference used. Why? Because no matter how efficient the AI is, more compute means more parallel copies (and the ability to run them faster, up to the limits of the system, which is probably somewhere between 100x and 1000x human thought speed).
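To make the parallel-copies point concrete, here’s a minimal back-of-the-envelope sketch. All the numbers (cluster size, per-copy inference cost, speed multiplier) are hypothetical placeholders I’ve picked for illustration, not estimates from anywhere:

```python
# Back-of-the-envelope: how many parallel AI copies a fixed compute budget supports.
# Every number below is a hypothetical placeholder, for illustration only.

cluster_flops = 1e21             # total sustained FLOP/s available for inference (hypothetical)
flops_per_copy_realtime = 1e15   # FLOP/s for one copy at roughly human thought speed (hypothetical)
speed_multiplier = 100           # run each copy faster than human speed, e.g. 100x-1000x

flops_per_copy = flops_per_copy_realtime * speed_multiplier
parallel_copies = cluster_flops / flops_per_copy

print(f"Parallel copies at {speed_multiplier}x speed: {parallel_copies:,.0f}")
# Better algorithms that halve the per-copy cost simply double the number of copies;
# the compute budget still caps the total amount of fast, parallel thinking available.
```

The point of the arithmetic is just that inference compute remains a binding cap even after large efficiency gains, which is why compute governance retains some value for limiting deployment even if it can no longer block creation.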
If we are going to head this off, we need new governance methods, and soon. Maybe really really soon, like, before the end of 2025. Hopefully we have until more like 2028, but we can’t count on that for sure.
I have very little faith in current governments to implement and enforce policies that are more complex than things on the order of compute governance and chip export controls, much less to do so within the short timeframes we are facing.
I think the conclusion this points towards is that we need new forms of governance. Not to replace existing governments, but to complement them. Voluntary mutual inspection contracts, backed by privacy-preserving technology and AI inspectors. Something of that sort.
Here’s some recent evidence of compute thresholds not being reliable: https://novasky-ai.github.io/posts/sky-t1/
Here are some self-links to my own thoughts on this (I recommend reading the posts these comments are on as well):
https://www.lesswrong.com/posts/DvHokvyr2cZiWJ55y/2-skim-the-manual-intelligent-voluntary-cooperation?commentId=BBjpfYXWywb2RKjz5
https://www.lesswrong.com/posts/FEcw6JQ8surwxvRfr/human-takeover-might-be-worse-than-ai-takeover?commentId=uSPR9svtuBaSCoJ5P
https://www.lesswrong.com/posts/tdrK7r4QA3ifbt2Ty/is-ai-alignment-enough?commentId=An6L68WETg3zCQrHT