Hmm, I think what I said was about half wrong and I want to retract my point.
That said, I think many of the relevant questions overlap (like, “how do we expect the future to generally go?”, “why/how is AI risky?”, “how fast will algorithmic progress go at various points?”), and I interpret this post as just talking about the effect on epistemics around those overlapping questions (regardless of whether you’d expect moderates to mostly be working in domains with better feedback loops).
This isn’t that relevant to your main point, but I also think the biggest question for radicals in practice is mostly: how can we generate massive public/government support for radical action on AI?