I found this extremely, extremely useful! All of this stuff with the research process is usually pretty invisible externally.
I think one of these is: it’s plausible that we should hire a bunch of people to just blog and write LessWrong posts about AI safety.
I feel like it’s got to be hard to gauge impact on these, but I and others at Apollo at least find these posts incredibly valuable — both posts that pull apart conceptual distinctions and ones of the form “claim that can be understood from just the title of the post”.