I also use some LLM-y phrases or punctuation sometimes! It’s a bit disturbing when it happens, but that’s life. I still remember the first time I wrote a little pastiche for a group chat and someone asked if ChatGPT wrote it … alas!
I’d like to clarify why I left my comment.
This post is pretty highly upvoted. In fact, it’s the second most upvoted post of the week it was published. That makes it both very prominent and somewhat norm-establishing for LessWrong.
That makes it particularly important, as Habryka said, to clarify an edge case of our LLM writing policy. I wouldn’t be surprised if this post gets referenced by someone whose content I reject or return to draft. I want to be able to say to that person, “Yep, your post isn’t that much less-edited than Katalina’s, but Katalina’s was explicitly on the edge, and I said so at the time”.
Separately, I wanted to publicly push back against LLM writing on LessWrong. Because this post is so upvoted, I think it risks normalising this level of LLM writing. I think it would be quite bad for LessWrong if people started posting a bunch of LLM writing to it[1]; it’s epistemically weak in some of the ways I mentioned in my previous comment (like kind of making up people’s experience)[2].
Thanks for all your work on law and AI. I know this is the second time I’ve moderated you recently, and I appreciate that you keep engaging with LessWrong! I think the legal lens is valuable and underprovided here, so thanks for that. I would like to engage with your arguments more (I think I might have some substantive disagreements with parts of this post), but this isn’t the post I’ll do it on.
P.S. “As a lawyer” was not the LLM-y part of that sentence. I can imagine lots of these are just normal writing, but the density seemed quite high for no LLM involvement.
[1] At least this month. Maybe at some point soon LLM writing will become valuable enough that we should use it a lot.
[2] I also find LLM writing quality to be weak, but I am more willing to accept bad writing than bad thinking on LessWrong.
Thank you for your feedback.
As I said before, I think it’s a bit unfair to call it “LLM writing” when it was only LLM-edited rather than entirely generated. The reason I sought clarity on the actual policy (if there is one) is that, if “LLM writing” is going to be scrutinised and put in the spotlight with moderation comments, it would be helpful to know what counts as LLM writing versus LLM-assisted or LLM-edited writing.
The wording you’ve used seems to accuse me of actually generating this post with AI, rather than using AI to edit my own ideas (see Buck’s recent comment about how he wrote “Christian homeschoolers in the year 3000”). Would that count as LLM writing?
Otherwise, thank you for your feedback. I honestly care more about the definitional work than about the writing sounding or looking better, so I’ll avoid LLM editing in the future: I’d rather not risk it distracting readers from what matters.
Yes, the moderator comment part is a good question; I nearly mentioned it explicitly.
I wanted to make it clear that I was clarifying an edge case and setting some precedent. I also wanted to use a bit of “mod voice” to say that LessWrong is generally not a place where it’s OK to post heavily-LLM-produced writing. I think those are appropriate uses of the moderator comment, but I’m substantially uncertain.
Regarding policies: on LessWrong, moderation is mostly done reactively by moderators, who intervene when they think something is going wrong. Mostly, we don’t appeal to an explicit policy, but instead try to justify our reasoning for each decision. Policies clarified upfront are the exception rather than the rule; the LLM writing policy was largely (I’m tempted to say primarily?) written to make it easy to handle particularly egregious cases, like users basically posting the raw output of a ChatGPT session, which, IIRC, was happening at a noticeable frequency when the policy was written.
It takes more time and effort to moderate any given decision in a reactive way, but it saves a lot of time up front. I also think it makes it easier for people to argue with our decisions, because they can dispute them in the specific case rather than trying to overturn a whole explicit policy. Of course, there are probably also costs borne from inconsistency.
I didn’t like the writing in Buck’s post, but I didn’t explicitly notice it was AI. I’m treating the fact that I didn’t notice as a signal of its acceptability; Buck, I think, exerted a more acceptable level of control over the final prose. Another factor is the level of upvoting: your post was substantially more upvoted (though the gap is narrowing).
If I were to rewrite the LLM policy, I think I would be more precise about what people must do with the “1 minute per 50 words”. I’m tempted to ask for that time to be spent copy-editing the output, not thinking upfront or guiding the LLM. I think that Buck’s post would violate that rule, and I’m not confident that would be the right outcome.
I actually think that this rewrite of the policy would be beneficial. It may not be the default opinion, but I find it better to have a well-specified reference document. It also promotes transparency of decision-making, rather than risking moderation looking very subjective or “vibes-based”.
As I mentioned in the DM: there’s probably an unfair disadvantage for policy or legal writing, which already sounds “more similar” to an LLM. Naturally, once edited with an LLM, it will likely sound even more LLM-like than writing about philosophy or fiction. Maybe that’s just a skill issue 🤣 but that’s why I vote “yes” on you adding that change. Again, I will keep this feedback very much in mind going forward (thank you for encouraging me to write more naturally and less over-edited; tbh, I need to untrain myself from the “no mistakes allowed” legal mindset ☺️).
Fun ending remark: I was in a CAIDP meeting recently where we were advised to use a bunch of emojis in policy social media posts. And a bullet-pointed structure… But when I’ve done that, people said it made the posts look AI-generated…
In the end, exchanges like these are helping me understand what gets through to people and what doesn’t. So, thank you!