I read through the replies and noticed that most people are discussing the value of human thinking versus AI thinking, these big, abstract questions. But I just want to ask one simple question:
Has anyone ever thought about how non-native English speakers feel?
This community asks for high-quality, clearly written posts, but at the same time says, “don’t write like an AI.” For non-native speakers, it’s so hard to meet that standard.
I scored over 90 on the TOEFL, I can speak English fluently and even explain academic material in my field clearly. But to make sure I don’t make grammar mistakes and that I’m using the right technical terms, I have to use LLMs to help check my writing.
The ideas are 100% my own and I include personal experience. The writing is definitely high-effort and original. But I can’t always guarantee it “doesn’t look like AI’s work”.
If this policy doesn’t make space for non-native speakers, then it’s just using language as a filter to block high-quality ideas from other cultures. That goes against the principles of rationalism.
I think it might well be the case that non-native English speakers gained a benefit from LLMs that native speakers didn’t, but I don’t think the fact that the impact is uneven means it’s wrong to disallow LLM assistance.
- At worst, we’re back in the pre-LLM situation, facing the general unfairness that some people grew up as native English speakers and others didn’t.
- Practically, LLMs, whether they generated the idea or just the wording, produce writing that’s often enough a bad experience that I and others struggle to read it at all; we just bounce off, and you will likely get downvoted. By and large, “could write good prose with LLM help” is a very good filter for quality.
- Allowing LLM use for non-native English speakers but disallowing it for other usage would be wholly impractical as a policy. Where would the line be? How long would moderators have to spend on essays trying to judge? (And in any case, the resulting text might be grammatically correct but still painful to read.)
- Already, the moderation burden of vetting the massive uptick in (overwhelmingly low-quality) AI-assisted essays is too high, and we’re going to have to automate more of it.
It’s sad to me that, given where LLMs currently are, non-native speakers don’t get to use a tool that helps them communicate more easily, but I don’t think there’s an alternative here that’s at all viable as policy for LessWrong.
(Well, one alternative is that moderators don’t pre-filter, and then (1) the posts we’re currently filtering out would just get downvoted very hard, and (2) we’d lose a lot of readers.)
I’m not a native English speaker, and I don’t feel the way you do. And I’m somewhat confused about “I have to use LLMs to help check my writing.” Like… you can risk having grammar errors? You can ask an actual person for a beta read?
I actually feel much less wary about using AI in my native language than in English.
So please at least speak in your own name, and not for non-native speakers generally.