For those who believe that a global shutdown of AGI R&D is next to impossible, or much harder to achieve than some alternative plan:
Something I think is very important, and which I would be very grateful for, is that you consistently signal that you would prefer a shutdown if you thought it was feasible.
Many of the people arguing against a shutdown genuinely prefer that AGI/ASI be created in the near future and want to prevent it from being shut down. If that does not describe you, please make it clear in your various communications, especially external communications, that shutting it all down would be best and that you are merely pessimistic that it can be done.
The situation is not symmetrical. A citizen or policymaker hearing “shut it down” who would otherwise want AI to proceed with caution moves in the direction of more caution. A citizen or policymaker hearing “proceed with caution” who would otherwise want AI to be shut down moves in the direction of less caution. Nonetheless, many people who advocate for a shutdown do say that they would like for superintelligent AI to eventually be created, and they simply see other plans as woefully unlikely to succeed.
As an anecdote, a few months ago I met a former MIRI researcher while handing out flyers for PauseAI. We had a great conversation, and they were very concerned about x-risk.
When I asked if they would be willing to sign PauseAI US’s petition, they declined, saying that they didn’t think a shutdown was feasible. I was very confused by this, because for those who are concerned about x-risk, I do not see a strong relationship between the feasibility of a shutdown and whether it is a good idea to advocate for one.
To expand on my point about asymmetry:
If the shutdown plan fails, then we are undertaking one of the other plans. If the other plans fail, we die (with unacceptably high probability). Accidentally getting a shutdown when you meant to proceed with caution is a win condition, at least temporarily, and it allows many other plans to be improved and enacted. Accidentally proceeding with caution when you meant to get a shutdown is walking on thin ice. The distance from shutdown to death is larger than the distance from proceeding with caution to death.
To address another viewpoint: If you are concerned about x-risk and you believe that all effort going toward advocating for a shutdown is wasted, and that the world would be better off if no one talked about a shutdown, I think you’re simply confused about normal social and political dynamics.
I fully agree and have been pleased to see this logic clarified in the recent discussions of IABIED. We must choose where to put our primary efforts, but those like me who think alignment might be achievable on the fast path should still say we’d prefer shutdown if at all possible. I will not only continue to say that, but try to share this logic broadly. Pushing hard for caution of any sort will on average improve our odds. I don’t think we can get a shutdown (see other comment) but I’ll still state clearly that we should shut it all down.
It only takes a moment to say “well of course I think we’d shut it all down if we were wise, but assuming we don’t, here are my plans and hopes....”