As I said before, I think it’s a bit unfair to call it “LLM writing” when it was only LLM-edited rather than entirely generated. The reason I sought clarity on the actual policy (if there is one) is that, if “LLM writing” is going to be scrutinised and put in the spotlight with moderation comments, it’d be helpful to know what counts as LLM writing versus LLM-assisted or LLM-edited writing.
The wording you’ve used seems to accuse me of actually generating this post with AI, rather than using AI to edit my own ideas (see Buck’s recent comment about how he wrote “Christian homeschoolers in the year 3000”). Would that count as LLM writing?
Others, thank you for your feedback. I honestly care more about the definitional work than about the writing sounding or looking better. So I’ll avoid LLM editing in the future: I’d rather not risk it distracting readers from what matters.
Yes, the moderator comment is a good question; I nearly addressed it explicitly.
I wanted to make it clear I was clarifying an edge case, and setting some precedent. I also wanted to use a bit of “mod voice” to say that LessWrong is generally not a place where it’s OK to post heavily-LLM-produced writing. I think those are appropriate uses of the moderator comment, but I’m substantially uncertain.
Regarding policies: on LessWrong, moderation is mostly done reactively by moderators, who intervene when they think something is going wrong. Mostly, we don’t appeal to an explicit policy, but try to justify our reasoning for each decision. Policies clarified upfront are the exception rather than the rule; the LLM writing policy was largely (I’m tempted to say primarily?) written to make it easy to handle particularly egregious cases, like users basically posting the raw output of a ChatGPT session, which, IIRC, was happening at a noticeable frequency when the policy was written.
It takes more time and effort to moderate any given decision in a reactive way, but it saves a lot of time up front. I also think it makes it easier for people to argue with our decisions, because they can dispute them in the specific case rather than trying to overturn a whole explicit policy. Of course, there are probably also costs borne from inconsistency.
I didn’t like the writing in Buck’s post, but I didn’t explicitly notice it was AI. I’m treating the fact that I didn’t notice it as a bellwether for its acceptability; Buck, I think, exerted a more acceptable level of control over the final prose. Another factor is the level of upvoting. Your post was substantially more upvoted (though the gap is narrowing).
If I were to rewrite the LLM policy, I think I would be more precise about what people must do with the “1 minute per 50 words”. I’m tempted to ask for that time to be spent copy-editing the output, not thinking upfront or guiding the LLM. I think that Buck’s post would be in violation of that rule, and I’m not confident whether that would be the right outcome.
I actually think that this rewrite of the policy would be beneficial. It may not be the default opinion, but I find it better to have a well-specified reference document. It also promotes transparency of decision-making, rather than risking moderation looking very subjective or “vibes-based”.
As I mentioned in the DM: there’s probably an unfair disadvantage for policy or legal writing, which already sounds “more similar” to how an LLM sounds. Naturally, once edited using an LLM, it will likely sound even more LLM-like than writing about philosophy or fiction would. Maybe that’s just a skill issue 🤣 but that’s why I vote “yes” on you adding that change. Again, I will keep this feedback very present in the future (thank you for encouraging me to write more naturally and less over-edited. Tbh, I need to untrain myself from the “no mistakes allowed” legal mindset ☺️).
Fun ending remark: I was in a CAIDP meeting recently where we were advised to use a bunch of emojis for policy social media posts, along with a bullet-pointed structure… But when I’ve done that, people said it makes the posts look AI-generated…
In the end, exchanges like these are helping me understand what gets through to people and what doesn’t. So, thank you!