Thank you, Kave (also Elisabeth and Ben).
I've asked for clarification on LessWrong's LLM usage policy via DM.
That said, for readers:
The TL;DR at the end was summarised using an LLM, because I wanted to provide a quick summary for people to skim.
The ideas in this post are mine. Hyperbolic phrases like "as a lawyer", or the "these are merely illustrative examples" you mentioned, are, sadly, mine. It's not the first time I've been told "this sounds like ChatGPT".
At some point, someone commented on one of my posts that I should avoid being too verbose if I want LessWrong posts to actually be read. Since I care about this issue so much, I did ask an LLM to trim and edit what I wrote to make it more digestible for a non-legal audience. Hence why it may have triggered the LLM alert (Ben).
As for my credentials, feel free to check out my LinkedIn. I've also linked my Substack to my profile.
Notice how I just said "hence"… I talk like this 🥲. That's precisely why I wanted to edit the post to make it better for LessWrong, which apparently backfired.
Please let me know if anything contravenes the site's policy. I'll also keep this feedback in mind. I've updated towards "posting as is", even if I'm nervous that people will think it sounds weird, because LLM editing clearly doesn't help.
(I also posted this on my Substack, but without the TL;DR, since that was added here for anyone who just wants to skim.)
I also use some LLM-y phrases or punctuation sometimes! It's a bit disturbing when it happens, but that's life. I still remember the first time I wrote a little pastiche for a group chat and someone asked if ChatGPT wrote it… alas!
I'd like to clarify why I left my comment.
This post is pretty highly upvoted. In fact, it's the second most upvoted post of the week it was published. That makes it both very prominent and somewhat norm-establishing for LessWrong.
That makes it particularly important to, as Habryka said, clarify an edge case of our LLM writing policy. I wouldn't be surprised if this post gets referenced by someone whose content I reject or return to draft. I want to be able to say to that person, "Yep, your post isn't that much less edited than Katalina's, but Katalina's was explicitly on the edge, and I said so at the time".
Separately, I wanted to publicly push back against LLM writing on LessWrong. Because this post is so upvoted, I think it risks normalising this level of LLM writing. I think it would be quite bad for LessWrong if people started posting a lot of LLM writing here[1]; it's epistemically weak in some of the ways I mentioned in my previous comment (like somewhat making up people's experience)[2].
Thanks for all your work on law and AI. I know this is the second time I've moderated you recently, and I appreciate that you keep engaging with LessWrong! I think the legal lens is valuable and underprovided here, so thanks for that. I would like to engage with your arguments more (I think I might have some substantive disagreements with parts of this post), but this isn't the post I'll do it on.
P.S. "As a lawyer" was not the LLM-y part of that sentence. I can imagine many of these phrases being just normal writing, but the density seemed too high for there to have been no LLM involvement.
[1] At least this month. Maybe at some point soon LLM writing will become valuable enough that we should use it a lot.
[2] I also find LLM writing quality to be weak, but I am more willing to accept bad writing than bad thinking on LessWrong.
Thank you for your feedback.
As I said before, I think it's a bit unfair to call it "LLM writing" when it was only LLM-edited rather than entirely generated. I sought clarity on the actual policy (if there is one) because, if "LLM writing" is to be scrutinised and put in the spotlight with moderation comments, it would be helpful to know what counts as LLM writing versus LLM-assisted or LLM-edited.
The wording you've used seems to accuse me of actually generating this post with AI rather than merely using it to edit my ideas (see Buck's recent comment about how he wrote "Christian homeschoolers in the year 3000"). Would that be LLM writing?
To everyone else: thank you for your feedback. I honestly care more about the definitional work than about the writing sounding or looking better. So I'll avoid LLM editing in the future: I'd rather not risk it distracting readers from what matters.
Yes, the moderator comment part is a good question; I nearly mentioned that explicitly.
I wanted to make it clear I was clarifying an edge case and setting some precedent. I also wanted to use a bit of "mod voice" to say that LessWrong is generally not a place where it's OK to post heavily-LLM-produced writing. I think those are appropriate uses of a moderator comment, but I'm substantially uncertain.
Regarding policies: on LessWrong, moderation is mostly done reactively by moderators, who intervene when they think something is going wrong. Mostly, we don't appeal to an explicit policy, but instead try to justify our reasoning for each decision. Policies clarified upfront are the exception rather than the rule; the LLM writing policy was largely (I'm tempted to say primarily?) written to make it easy to handle particularly egregious cases, like users basically posting the output of a ChatGPT session, which, IIRC, was happening at a noticeable frequency when the policy was written.
It takes more time and effort to moderate any given decision in a reactive way, but it saves a lot of time up front. I also think it makes it easier for people to argue with our decisions, because they can dispute a specific case rather than trying to overturn a whole explicit policy. Of course, there are probably also costs borne from inconsistency.
I didn't like the writing in Buck's post, but I didn't explicitly notice it was AI. I'm treating the fact that I didn't notice it as a bellwether for its acceptability; Buck, I think, exerted a more acceptable level of control over the final prose. Another factor is the level of upvoting: your post was substantially more upvoted (though the gap is narrowing).
If I were to rewrite the LLM policy, I think I would be more precise about what people must do with the "1 minute per 50 words". I'm tempted to ask for that time to be spent copy-editing the output, not thinking upfront or guiding the LLM. I think Buck's post would be in violation of that rule, and I'm not confident that would be the right outcome.
I actually think this rewrite of the policy would be beneficial. It may not be the default opinion, but I find it better to have a well-specified reference document. It also promotes transparency of decision-making, rather than risking moderation looking very subjective or "vibes-based".
As I mentioned in the DM: there's probably an unfair disadvantage in that policy or legal writing already sounds "more similar" to how an LLM sounds. Naturally, once edited using an LLM, it will likely sound even more LLM-like than writing about philosophy or fiction. Maybe that's just a skill issue 🤣 but that's why I vote "yes" on you adding that change. Again, I will keep this feedback very much in mind going forward (thank you for encouraging me to write more naturally and less over-edited; tbh, I need to untrain myself from the "no mistakes allowed" legal mindset ☺️).
Fun closing remark: I was in a CAIDP meeting recently where we were advised to use a bunch of emojis, and a bullet-pointed structure, for policy social media posts… But when I've done that, people said it made the posts look AI-generated…
In the end, exchanges like these are helping me understand what gets through to people and what doesn't. So, thank you!
Nothing you did went against site policy! Kave was just giving feedback as a moderator and clarifying an edge case.
Thank you! I just wanted to be very sure, and I appreciate the feedback too. I'll keep posting, and I'll do better next time.