Yet another substance-free call for a pause.
What is materially different about the situation today (versus two years ago) that means we should pause now?
What specific dangerous capabilities do you expect the next generation of models (GPT-5/Gemini/Claude 3) to possess that you hope to mitigate with a pause?
What concrete measures of safety would allow us to un-pause? If the expert consensus is that the next model is likely to be safe, is that enough?
What is your estimate of the “background risk” of a permanent pause? That is, what chance of catastrophe from other causes (nuclear war, climate change, asteroid impact, alien invasion, biological super-plague) do you estimate P(DOOM) would have to be lower than to justify pursuing AI to mitigate those other risks?