I’ve noticed that some of the responses focused on my English fluency. I appreciate the feedback, and I do welcome suggestions for clearer phrasing.
But my concern here isn’t really about my own writing—it’s about something larger:
I come from a background where the limits on what you're allowed to say are often vague and implicitly policed. Not by specific rules, but by the constant fear of crossing a line you didn't know existed.
In such an environment, people tend to stay silent, because you never know when something might be misinterpreted or penalized. And I've found that a similar kind of uncertainty can arise here, around the rules on LLM-assisted writing.
I carefully read the regulations regarding LLM-generated content. Perhaps because the development of LLMs has been so rapid and recent, there are still many grey areas in these rules. It takes constant experimentation to find where the actual boundaries lie, and that's why I wanted to raise this question.
I now have to spend extra time and energy revising the “writing style” of my posts without knowing whether the changes are actually correct, and I have sometimes added deliberate “non-native mistakes” to avoid being misjudged. This already feels like a situation where you never know when you're going to cross a red line.
To help LessWrong genuinely benefit from diverse, cross-cultural, and high-quality thinking, I believe the following suggestions could help reduce the current uncertainty around LLM-related content:
1. Allow users to voluntarily disclose how they used LLMs—for instance, “grammar check only,” “minor phrasing edits,” or “co-written.”
2. Foster a community-based language support system—something like peer review—where contributors can openly assist each other in refining language without fear of stigma.
3. Use AI-detection tools as soft signals or flags for moderator review, rather than as automatic deletion triggers.
These are just starting ideas—but I hope they point toward a more transparent and inclusive approach.
I personally believe (and I assume this is a widely shared view here) that AI should empower individuals, giving a voice to those who might otherwise struggle to be heard, and helping communities grow through the inclusion of diverse perspectives. It should not become a new form of constraint.
When it comes to how society should understand and regulate LLM-generated content, many countries and regions still lack clear legal frameworks. We’re in a gray area, where the boundaries are uncertain and constantly shifting.
That’s exactly why communities like LessWrong, where technical knowledge meets thoughtful discourse, are uniquely positioned to explore the ethical boundaries of LLM use. By fostering open discussion and experimentation, we can help shape responsible norms not only for ourselves but for broader society.
Edit note: Some readers may have interpreted this post as taking a confrontational stance, but that wasn’t my intent. I was trying to highlight an uncertainty many non-native speakers may quietly face when navigating new moderation rules. I care about this community and believe honest feedback can help make the system more robust for everyone. I’m open to revising my assumptions if better alternatives are proposed.
Thank you for the thoughtful reply! I found the link the moderator sent me:
https://www.lesswrong.com/posts/nA58rarA7FPYR8cov/allamericanbreakfast-s-shortform?commentId=kpacwcjddWmSGEAwD