Insufficient Quality for AI Content. There’ve been a lot of new users interested in AI coming to LessWrong recently. To keep the site’s quality high and ensure that what’s posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar.
If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day and sadly most of them don’t make very clear arguments, and we don’t have time to review them all thoroughly.
We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, a good next step may be to read more existing material. The AI Intro Material wiki-tag is a good place to start, for example.
Thanks for the heads up. I’m starting to think it’s unlikely I’ll be able to interact much here; I get the need to filter junk, but the complete ban on AI use is an issue for me. I mean, it’s not like I ask it to write a reply for me, but because of how I am, it’s very difficult for me to reorder my thoughts in order to write an essay.
I was trying to reply to a post the other day as a test, and because of the policy about LLMs it obviously got rejected. But again… I used AI only as an aid to help me write an argumentative reply based on a framework I had already produced, the same way I write about complicated subjects lately: putting down my thoughts, stitching them together, testing the logic, and rethinking or changing the parts that feel wrong after a bit.
It’s been months since I started using AIs daily as a pastime, and because of that I’m very aware of the sometimes invisible traps they lay: the biases, the hallucinations, the inexplicable rigid tangents they sometimes take, etc. Still, it’s the only way I’ve found so far to put down what’s in my mind about complicated subjects, or to get a better explanation when I don’t understand something or am not sure I do. I think you can imagine it’s a bit frustrating to be denied the only tool that works for me, even if wonkily.
Hey Angelo, I figured I’d reply publicly here to help people form an understanding of our policy:
it’s very difficult for me to reorder my thoughts in order to write an essay.
LessWrong has particularly high standards for the internet (and fairly idiosyncratic ones – a lot of academic work isn’t really the right fit here, because it’s missing some background assumptions about how to reason and argue that the LW community cares about). And as papetoast mentions, we have even higher standards for AI.
I think it’s actually pretty important, in order to contribute to LessWrong, for people to have the mental scaffolding to organize their thoughts on their own, without relying on AI assistance. What we’ve reliably found is that people using AI this way end up following AI-assisted trails in a direction that just isn’t ever going to meet the LW quality standard.
It sounds like you’re already tracking a lot of the risks here. I mention it because I think being able to think independently of AI is even more important than you might realize.
Hi Raemon, I already said I understand the need to filter the junk, and I understand your policy even if I find it frustrating; I’m a guest at best and I’m not trying to do anything against the rules.
I only want to point out one thing that doesn’t sit right with me. I don’t know if I’m using the right adjectives, and maybe it wasn’t your intention at all, but I found something you said a bit condescending and slightly insulting.
It’s not that I’m not able to think independently, it’s that among other things it’s very difficult for me to organise my thoughts in order to be able to explain them to others. I’m not using AI to think for me but to “talk” for me.
It seems likely that your post will get rejected. This is a rejection template I see at https://www.lesswrong.com/moderation. Also, you may want to repost this comment in Open Thread Winter 2025/26 instead; AFAICT almost no one reads this post regularly.
Anyway, thanks for the reply.