Hey Angelo, I figured I’d reply publicly here to help people form an understanding of our policy:
LessWrong has particularly high standards for the internet (and fairly oddly specific standards – a lot of academic work isn’t really the right fit here because it’s missing some background assumptions about how to reason and argue that the LW community cares about). And as papetoast mentions, we have even higher standards for AI.
I think it’s actually pretty important, to contribute to LessWrong, for people to have the mental scaffolding to organize their thoughts on their own without relying on AI assistance. What we’ve reliably found is that people using AI this way end up following AI-assisted trails in a direction that just isn’t ever going to meet the LW quality standard.
It sounds like you’re already tracking a lot of the risks here; I mention it because I think it’s even more important than you might realize to be able to think independently of AI.
Hi Raemon, I already said I understand the need to filter the junk, and I understand your policy even if I find it frustrating; I’m a guest at best, and I’m not trying to do anything against the rules.
I only want to point out one thing that doesn’t sit right with me. I don’t know if I’m using the right adjectives, and maybe it wasn’t your intention at all, but I found something you said a bit condescending and slightly insulting.
It’s not that I’m not able to think independently; it’s that, among other things, it’s very difficult for me to organise my thoughts in order to explain them to others. I’m not using AI to think for me but to “talk” for me.
Anyway thanks for the reply.