I agree that it would be useful to have an official position.
There is no official position AFAIK, but individuals in management have expressed the opinion that uncredited AI writing on LW is bad because it pollutes the epistemic commons (my phrase and interpretation).
I agree with this statement.
I don’t care if an AI did the writing as long as a human is vouching for the ideas making sense.
If no human is actively vouching for the ideas and claims being plausibly correct and useful, I don’t want to see it. There are more useful ideas here than I have time to take in.
That applies even if the authorship was entirely human. Human slop pollutes the epistemic commons just as much as AI slop.
If AI is used to improve the writing, and the human is vouching for the claims and ideas, I think it can be substantially useful. Having writing help can get more things from draft to post/comment, and better writing can reduce epistemic pollution.
So I’m happy to read LLM-aided but not LLM-created content on LW.
I strongly believe that authorship should be clearly stated. It's considered an ethical violation in academia to publish others' ideas as your own, and that standard seems like it should extend to LLM-generated ideas. It is not necessary or customary in academia to disclose editing/writing assistance IF it's very clear those assistants contributed no ideas. I think that's the right standard on LW, too.