Why do you think they chose to lead off with these signatures and not Eliezer Yudkowsky’s? If the push for individual withdrawal from capabilities work is a success, then any time a government-implemented pause is proposed the expert consensus will be that no pause is necessary and AI does not represent an existential risk.
The benefit of withdrawal is not a pause or a stop. As long as there is no consensus on AI risk, individual withdrawal cannot lead to a stop.
I think this is treating expert consensus and credibility as more fixed / independent / inexorably derived from working for big AI labs or gaining status in academia than is actually the case.
If lots of bright individuals all decide it’s not a good idea to work on AI capabilities, that fact itself shifts what the expert consensus actually is.
Also, when someone chooses not to work on AI capabilities research, it’s not like they’re withdrawing from productive society as a whole—they’re often very bright and capable people who choose to dedicate themselves to other socially and economically valuable projects.
Sometimes these other projects are directly related to aspects of AI x-risk (governance, consensus-building, movement building, etc.); sometimes they are entirely unrelated (earning to give, founding a startup or a charitable endeavor in an unrelated field). There are lots of ways for smart, hard-working, principled individuals to do good and build legible status, credibility, and wealth for themselves outside of AI capabilities. Each person who chooses to work on AI capabilities instead is talent that doesn't go into these alternatives.