(I think that it’s common for AI safety people to talk too much about totally quashing risks rather than reducing them, in a way that leads them into unproductive lines of reasoning.)
Especially because we need to take non-AI X-risks into account. So maybe the question is “What AI policy would most reduce X-risk overall?” For people with lower P(X-risk|AGI) (if you don’t like P(doom)), longer timelines, and/or greater concern about other X-risks, the answer may be to do nothing, or even to accelerate AI (harkening back to Yudkowsky’s “Artificial Intelligence as a Positive and Negative Factor in Global Risk”).