It’s not that moderates and radicals are trying to answer different questions (with the moderates’ questions being epistemically easier, the way physics is).
That seems totally wrong. Moderates are trying to answer questions like “what are some relatively cheap interventions that AI companies could implement to reduce risk assuming a low budget?” and “how can I cause AI companies to marginally increase that budget?” These questions are very different from—and much easier than—the ones the radicals are trying to answer, like “how can we radically change the governance of AI to prevent x-risk?”
Hmm, I think what I said was about half wrong and I want to retract my point.
That said, I think many of the relevant questions overlap (e.g., “how do we expect the future to generally go?”, “why/how is AI risky?”, “how fast will algorithmic progress go at various points?”), and I interpret this post as just talking about the effect on epistemics around those overlapping questions (regardless of whether you’d expect moderates to mostly be working in domains with better feedback loops).
This isn’t that relevant for your main point, but I also think the biggest question for radicals in practice is mostly: How can we generate massive public/government support for radical action on AI?