I have an argument for halting AGI progress based on an analogy to the Covid-19 pandemic. Initially, the government response to the pandemic was widespread lockdowns. This was a rational response: with no testing infrastructure in place, it wasn't possible to determine whether someone had Covid-19, so the safest option was to avoid contact with other people altogether via lockdowns.
Eventually we figured out practices like testing and contact tracing, and then individuals could self-isolate if they had come into contact with someone infected. This approach is smarter and less costly than blanket lockdowns.
In my opinion, the state we are in with AGI is similar to the beginning of the Covid-19 pandemic: there is a lot of uncertainty about the risks and capabilities of AI and about which alignment techniques would be useful. So a rational response would be the equivalent of an 'AI lockdown': halting progress on AI until we understand it better and can come up with better alignment techniques.
The most obvious rebuttal to this argument is that pausing AGI progress would have a high economic opportunity cost (no AGI). But Covid lockdowns did too, and society was willing to pay a large economic price to avert Covid deaths.
The economic opportunity cost of pausing AGI progress might be larger than that of the Covid lockdowns, but the benefits would be larger too: averting existential risk from AGI is a much greater benefit than avoiding Covid deaths.
So, in summary, I think the benefits of pausing AGI progress outweigh the costs.
I think many people agree with you here. In particular, I like Max Tegmark's post Entente Delusion.
But the big question is "How?" What are the costs of your proposed mechanism for a global pause?
I think the better answers to how to implement a pause lie in designing better governance methods.
Unfortunately, I don't think many people agree with me (outside of the LW bubble), and what I'm proposing is still somewhat outside the Overton window. The cognitive steps needed are as follows:
1. Being aware of AGI as a concept and a real possibility in the near future.
2. Believing that AGI poses a significant existential risk.
3. Knowing about pausing AI progress as a potential solution to AGI risk and seeing it as a promising solution.
4. Having a detailed plan to implement the proposed pause in practice.
A lot of people are not even at step 1 and just think that AI is ChatGPT. People like Marc Andreessen and Yann LeCun are at step 1. Many people on LW are at step 2 or 3. But we need someone (ideally in government, like a president or prime minister) at step 4. My hope is that this could happen in the next several years if necessary. Maybe AI alignment will turn out to be easy and a pause won't be necessary, but I think we should be ready for all possible scenarios.
I don’t have any good ideas right now for how an AI pause might work in practice. The main purpose of my comment was to propose argument 3 conditional on the previous two arguments and maybe try to build some consensus.
Several years? I don't think we have that long; I'm thinking mid-to-late 2026 for when we hit AGI. I think steps 1, 2, and 3 can change very quickly indeed, as with the Covid lockdowns: people went from 'doubt' to 'doing' in a short amount of time once the evidence was overwhelmingly clear.
So having step 4 in place by the time that occurs seems key. Also, having plans in place for adequately convincing demos, which might persuade people before disaster strikes, seems highly useful.