Curated. I disagree with some of the stronger/broader forms of the various claims re: missing “mental elements”, but I’m not sure you intend the stronger forms of those claims, and they don’t seem load-bearing for the rest of the piece in any case. That said, this is an excellent explanation[1] of why LLM-generated text is low-value to engage with when presented as human output, especially in contexts like LessWrong. Notably, most of these reasons are robust to LLM output improving in quality/truthfulness (though I do expect some trade-offs to become much more difficult if LLM outputs start to dominate top human outputs on certain dimensions).
So much so that I’m tempted to update our policy about LLM writing on LessWrong to refer to it.