Are the LLM-writing rules here fair to non-native speakers?
For non-native English speakers who speak well, like me (scored over 90 on the TOEFL, have English-speaking friends, can explain my field clearly in English, but don’t currently live in an English-speaking environment), reading and understanding English is fine. The hard part is recognizing the difference between “LLM-style writing” and “polished human writing.”
When I give my writing to an LLM for checking and it changes some sentences, I tend to trust it. If the meaning looks accurate, I just assume: “My original writing wasn’t native enough. An LLM would never make a grammar mistake, so I must be wrong and it must be right.”
Now, just to avoid looking like I used an LLM, I’m forced to write entirely on my own, so I have to apologize in advance for the ridiculous grammar mistakes you may see in this post.
I don’t know whether the rules are justified or not, but I do think they are unfair. As much as we try to be rational, I don’t think any of us are great at disregarding the reflex to interpret broken English as a sign of less intelligent thought, and so the perceived credibility of non-native speakers is going to take a hit.
(In your particular case, I wouldn’t worry too much, because your solo writing is good. But I do sympathise if it costs you extra time and effort to polish it.)
Thank you for your kind tone and for noticing the effort I’ve put into improving my English. I genuinely appreciate that. Also, since this site values really precise language, the bar for non-native speakers gets really high. Unless you speak more than one language fluently, it’s hard to understand how tough that can be. It takes far more courage and patience: we constantly have to double-check whether our logic makes sense, whether our wording is clear enough, and whether we’ve fully understood what others meant in the first place. I believe your comment points to a deeper issue that deserves serious attention. (Actually, I’m worried this sentence looks too “LLM-generated,” but I don’t know any other way to explain my feelings clearly and accurately enough.)
Let me first refer to the official policy itself:
“You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.”
I completely agree with the intention behind this: to avoid AI replacing human thinking and to maintain the intellectual standard of the platform.
However, another line in the same guideline says:
“Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong’s standards. Please do not submit unedited or lightly-edited LLM content.”
Reading this part, I believe it reflects a native-speaker perspective, something like:
“You prompt the AI for ideas or phrasing, then rephrase and reframe everything in your own words, and then it’s your own work (to some extent).”
But for many non-native speakers like me, the process actually looks more like:
“We come up with the ideas, write the draft ourselves, then use an LLM to check the grammar and phrasing to make sure the language is clear and not awkward.”
The goal is not to replace our thinking, but to make it readable in a high-standard English forum like LessWrong.
I fully support filtering out low-effort, AI-prompted fluff. But removing high-quality, idea-driven posts by non-native speakers simply because the writing “sounds like an LLM”—even though the thinking behind it is entirely original—defeats the very purpose of the rule.
Yesterday, I commented on a Quick Takes post about “why people idealize foreign cultures.” I offered a perspective grounded in psychology and my own cross-cultural experience (which means if you really read it through, you would know it’s definitely not LLM-generated), then asked an LLM to review my grammar and phrasing, and the comment was removed as “LLM-generated.”
This kind of outcome creates a painful contradiction:
A native speaker can submit a low-quality post, and it’s allowed because it sounds “human.”
A non-native speaker submits a thoughtful, valuable post (I’m not talking about myself; I know there must be other, smarter non-native speakers here facing the same trouble), but because the English is too clean or “LLM-like,” it gets rejected.
I don’t believe this is the intent of the policy. But the way it’s currently applied functions as a linguistic and cultural filter, shutting out good content from voices outside the English-speaking world. And that undermines the spirit of rationalism this community is built on.
I believe that was not the intent, so I see this as a bug.
Well, that comment sounded perfectly normal (and quite smart) to me. (Not a native English speaker.)
Unless that happens repeatedly, I would suggest just ignoring it. It certainly is an unpleasant experience, especially when it happens to one of your first comments… well, I suppose the moderators will keep tweaking the algorithm, so hopefully things like this won’t happen often to new users.
As a data point (and also as a pro editor!), I have a significantly stronger “hmm, maybe this sucks” instinct for LLM-y prose than for slightly broken English. Maybe you can get the best of both worlds if you ask only for typo fixes. LLM “clarity” edits are almost always tonally garbage in my experience (at least pre-Opus 4; I haven’t tried that one for this purpose yet).
Your text looks fine to me. There are a few nits I could pick if I were a stern TOEFL examiner, and I would only give it 95%, but really nothing worth commenting on here. Same goes for this one. I’d say these are completely acceptable.
See, this is exactly why the bar for me to express myself is so high. It’s like thousands of TOEFL examiners reading my words, silently grading me in their heads. The tension is real, and if I make a grammar mistake or say something that gets misinterpreted or pushed back on (not because my idea was bad, but because the English didn’t land right), it feels even worse than losing points on an actual exam essay.
I’m not just speaking for myself here.
Yes, the process is exhausting for me: writing a draft, running it through an LLM for grammar and clarity checks, then going back and deliberately editing out anything that sounds “too smooth” or “too LLM-like,” sometimes even reintroducing my own non-native quirks just to avoid being flagged (which is so weird). But I’m planning to study in an English-speaking country and pursue a PhD, so I can treat this as language training anyway.
What worries me more is that there are other non-native users here who are definitely smarter and more thoughtful than me, and their valuable insights are being filtered out simply because of language.
If that’s what LLMs have brought us, then what exactly have we gained from their development here?
Can you still link to the removed text?
Thank you for the thoughtful reply! I found the link the moderator sent me; here it is:
https://www.lesswrong.com/posts/nA58rarA7FPYR8cov/allamericanbreakfast-s-shortform?commentId=kpacwcjddWmSGEAwD