Unfortunately I don’t think many people agree with me (outside of the LW bubble), and what I’m proposing is still somewhat outside the Overton window. The cognitive steps that are needed are as follows:
1. Being aware of AGI as a concept and a real possibility in the near future.
2. Believing that AGI poses a significant existential risk.
3. Knowing about pausing AI progress as a potential solution to AGI risk, and seeing it as promising.
4. Having a detailed plan to implement the proposed pause in practice.
A lot of people are not even at step 1 and just think that AI is ChatGPT. People like Marc Andreessen and Yann LeCun are at step 1. Many people on LW are at step 2 or 3. But we need someone (ideally in government, like a president or prime minister) at step 4. My hope is that this could happen in the next several years if necessary. Maybe AI alignment will be easy and a pause won’t be needed, but I think we should be ready for all possible scenarios.
I don’t have any good ideas right now for how an AI pause might work in practice. The main purpose of my comment was to propose step 3, conditional on the previous two steps, and perhaps to build some consensus.
Several years? I don’t think we have that long. I’m thinking mid-to-late 2026 for when we hit AGI.
I think steps 1–3 can change very quickly indeed, as with the COVID lockdowns: people went from ‘doubt’ to ‘doing’ in a short amount of time once the evidence was overwhelmingly clear.
So having step 4 in place by the time that happens seems key.
Also, having plans in place for demos compelling enough to convince people before disaster strikes seems highly useful.