Thank you for your kind tone and for noticing the effort I’ve put into improving my English. I genuinely appreciate that. Also, since this site values very precise language, the bar for non-native speakers gets very high. Unless you speak more than one language fluently, it’s hard to understand how tough that can be. It takes far more courage and patience: we constantly have to double-check whether our logic makes sense, whether our wording is clear enough, and whether we’ve fully understood what others meant in the first place. I believe your comment points to a deeper issue that deserves serious attention. (Actually, I’m worried this sentence looks too “LLM-generated,” but I don’t know another way to explain my feelings clearly and accurately enough.)
Let me first refer to the official policy itself:
“You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.”
I completely agree with the intention behind this: to avoid AI replacing human thinking and to maintain the intellectual standard of the platform.
However, another line in the same guideline says:
“Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong’s standards. Please do not submit unedited or lightly-edited LLM content.”
Reading this part, I believe it reflects a native-speaker perspective along the lines of:
“You prompt the AI for ideas or phrasing, then rephrase and reframe everything in your own writing; then it’s your own work (to some extent).”
But for many non-native speakers like me, the process actually runs like:
“We come up with the ideas, write the draft ourselves, then use an LLM to check the grammar and phrasing to make sure the language is clear and not awkward.”
The goal is not to replace our thinking, but to make it readable in a high-standard English forum like LessWrong.
I fully support filtering out low-effort, AI-prompted fluff. But removing high-quality, idea-driven posts by non-native speakers simply because the writing “sounds like an LLM,” even though the thinking behind it is entirely original, defeats the very purpose of the rule.
Yesterday, I commented on a Quick Takes post about “why people idealize foreign cultures.” I offered a perspective grounded in psychology and my own cross-cultural experience (which means that if you really read it through, you would know it’s definitely not LLM-generated), then asked an LLM to review my grammar and phrasing, and the post was removed as “LLM-generated.”
This kind of outcome creates a painful contradiction:
A native speaker can submit a low-quality post, but it’s allowed because it sounds “human.” A non-native speaker submits a thoughtful, valuable post (I’m not talking about myself; I know there must be other, smarter non-native speakers here facing the same trouble), but because the English is too clean or “LLM-like,” it gets rejected.
I don’t believe this is the intent of the policy. But the way it’s currently applied functions as a linguistic and cultural filter, shutting out good content from voices outside the English-speaking world. And that undermines the spirit of rationalism this community is built on.
I believe that was not the intent, so I see this as a bug.
Well, that comment sounded perfectly normal (and quite smart) to me. (Not a native English speaker.)
Unless that happens repeatedly, I would suggest just ignoring it. It certainly is an unpleasant experience, especially when it happens to one of your first comments… well, I suppose the moderators will keep tweaking the algorithm, so hopefully things like this won’t happen often to new users.
Can you still link to the removed text?
Thank you for the thoughtful reply! I found the link the moderator sent me; here it is:
https://www.lesswrong.com/posts/nA58rarA7FPYR8cov/allamericanbreakfast-s-shortform?commentId=kpacwcjddWmSGEAwD