Yes, the moderator comment part is a good question; I nearly mentioned that explicitly.
I wanted to make it clear I was clarifying an edge case, and setting some precedent. I also wanted to use a bit of “mod voice” to say that LessWrong is generally not a place where it’s OK to post heavily-LLM-produced writing. I think those are appropriate uses of the moderator comment, but I’m substantially uncertain.
Regarding policies: on LessWrong, moderation is mostly done reactively by moderators, who intervene when they think something is going wrong. Mostly, we don’t appeal to an explicit policy, but try to justify our reasoning for the decision. Policies clarified upfront are the exception rather than the rule; the LLM writing policy was largely (I’m tempted to say primarily?) written to make it easy to handle particularly egregious cases, like users basically posting the output of a ChatGPT session, which, IIRC, was happening at a noticeable frequency when the policy was written.
It takes more time and effort to moderate any given decision in a reactive way, but it saves a lot of time up front. I also think it makes it easier for people to argue with our decisions, because they can dispute them in the specific case, rather than trying to overturn a whole explicit policy. Of course, there are probably also costs borne from inconsistency.
I didn’t like the writing in Buck’s post, but I didn’t explicitly notice it was AI. I’m treating the fact that I didn’t notice it as a bellwether for its acceptability; Buck, I think, exerted a more acceptable level of control over the final prose. Another factor is the level of upvoting. Your post was substantially more upvoted (though the gap is narrowing).
If I were to rewrite the LLM policy, I think I would be more precise about what people must do during the “1 minute per 50 words”. I’m tempted to require that time to be spent copy-editing the output, not thinking upfront or guiding the LLM. I think that Buck’s post would be in violation of that rule, and I’m not confident that would be the right outcome.
I actually think that this rewrite of the policy would be beneficial. It may not be the majority opinion, but I find it better to have a well-specified reference document. It also promotes transparency in decision-making, rather than risking moderation looking very subjective or “vibes-based”.
As I mentioned in the DM: There’s probably an unfair disadvantage for policy and legal writing, which already sounds “more similar” to an LLM. Naturally, once edited using an LLM, it will likely end up sounding even more LLM-like than writing about philosophy or fiction would. Maybe that’s just a skill issue 🤣 but that’s why I vote “yes” on you adding that change. Again, I will keep this feedback in mind going forward (thank you for encouraging me to write more naturally and less over-edited. Tbh, I need to untrain myself from the “no mistakes allowed” legal mindset ☺️).
Fun ending remark: I was in a CAIDP meeting recently where we were advised to use a bunch of emojis for policy social media posts. And a bullet-pointed structure… But when I’ve done it, people said it makes it look AI-generated…
In the end, exchanges like these are helping me understand what gets through to people and what doesn’t. So, thank you!