LessWrong, is this rational? I wrote a reply to Elizabeth’s open bid for answers to her research questions in good faith. She replied not with anything substantive, but with a claim that it was written with AI. I’m happy for her to disagree with my answer, but flagging a difference in style to suggest low quality is not what I thought this community was supposed to be about. Cynically, one could suggest she doesn’t want to make good on her offer … For the record, I wrote it late at night and ran the response through an LLM to improve readability for her benefit.
If you don’t know what I’m talking about, see the most disliked comment on her post :)
https://www.lesswrong.com/posts/bFvEpE4atK4pPEirx/correct-my-h5n1-research-usdreward
To me, since LessWrong has a smart community that attracts people with high standards and integrity, by default if you (a median LW commenter) write your considered opinion about something, I take that very seriously and assume that it’s much, much more likely to be useful than an LLM’s opinion.
So if you post a comment that looks like an LLM wrote it, and you don’t explain which parts were the LLM’s opinion and which parts were yours, that makes it difficult to use. And if there’s a norm of posting comments that are partly unmarked LLM opinions, then I have to take on the very large burden of evaluating every comment to try to figure out whether it’s an LLM, in order to figure out whether I should take it seriously.
Thank you for your comment. I will highlight specifically which parts are my opinion in the future.
“For the record, I wrote it late at night and ran the response through an LLM to improve readability for her benefit.”
This is IMO generally considered bad form on LW. Please clearly mark when an LLM was involved in writing a comment, unless the final content genuinely reflects your own voice and writing style and you hold it to the same standard as your own writing. It’s OK to iterate on a paragraph or two with an LLM without marking that super prominently, but if a whole comment is clearly a straightforward copy-paste from an LLM, that should get you downvoted (and banned if you do it repeatedly).
I assume you’re in agreement that the reason for this is as cata nicely stated in this thread: LW contributors are assumed to be a lot more insightful than an LLM, so we don’t want to have to guess whether the ideas came from an LLM. It’s probably worth writing a brief statement on this, unless you’ve added it to the FAQ since I last read it.
I think beyond insightfulness, there is also a “groundedness” component that is different. LLM-written text either lies about personal experience or contains no references to personal experience at all. That usually makes the writing much less concrete and worse, or actively deceptive.
Why not post the before/after and let people see if it was indeed more readable?
You probably should have said ‘yes’ when asked if it was AI-written.