That’s fair. I think I was being overconfident and frustrated, so these don’t express my real preferences.
But I did make it clear these were preferences unrelated to my call, which was “you should warn people” not “you should avoid direct LLM output entirely”. I wouldn’t want such a policy, and wouldn’t know how to enforce it anyway.
I think I’m allowed to have an unreasonable opinion like “I will read no LLM output I don’t prompt myself, please stop shoving it in my face” and not get called out on epistemic grounds, except in the context of “wait, this is self-destructive, you should stop for that reason”. (And not in the context of e.g. “you’re hurting the epistemic commons”.)
You can also ask Raemon or habryka why they, too, seem to systematically downvote content they believe to be LLM-generated. I don’t think they’re being too unreasonable either.
That said, I agree with you that there’s a strong selection effect in what writers choose to keep from the LLM, and that there’s also a danger of people writing exactly like LLMs and me calling them out on it unfairly. I tried hedging against this the first time, though maybe that was in a too-inflammatory manner. The second time, I decided to write this OP instead of addressing the local issue directly, because I don’t want to write something new each time and would rather not make “I hate LLM output on LW” part of my identity, so I’ll keep it to a minimum after this.
I found both of these posts to have some value, though in the same sense that my own LLM outputs have value: I’ll usually quickly scan what’s said rather than read it thoroughly. LessWrong has always seemed to me to be among the most information-dense places out there, and I hate to see some users go in this direction instead. If we can’t keep low-density writing out of LessWrong, I don’t know where to go after that. (And I am talking about info density, not style, though I do find the style grating sometimes as well.)
I consider a text where I have to skip entire paragraphs and ignore every fifth word as filler (e.g. “fascinating”) to be bad writing, and not inherently enjoyable beyond whatever kernel of signal there may be in all that noise. And I don’t think I’d be unfair in demanding this level of quality, because this site is a fragile garden with high standards, and maintaining high standards is the same thing as not tolerating mediocrity.
Also, everyone has access to the LLMs, and if I wanted an LLM output, I would ask it myself; I don’t consider your taste in selection to bring me that much value.
I also believe (though can’t back this up) that I spend roughly an order of magnitude more time talking to LLMs than the average person on LW, and I’m a little skeptical of the claim that maybe I’ve been reading some direct LLM output on here without knowing it. Though that day will come.
It also doesn’t take much effort to avoid pasting LLM output outright, so past a certain bar of quality I don’t think people are doing this. (Hypothetical people who spend serious effort selecting LLM outputs to put under their own name would, in the real world, just write it themselves.)
Sounds interesting. I talk to LLMs quite a bit as well, and I’m interested in any tricks you’ve picked up; I put quite a lot of effort into pushing them to be concise and grounded.
eg, I think an LLM bot designed by me would get banned only for being an LLM, despite consistently having useful things to say when it did comment. Relatedly, it probably wouldn’t comment very often, despite the AI reading a lot of posts and comments: it would mostly show up in threads where someone said something that seemed to need a specific kind of request for clarification, and I’d do the prompt design with the goal of making the AI itself evaluate its few, very short comments against a high bar of postability.
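To make that concrete, here is a minimal sketch of the gating I have in mind, written against the Anthropic Python SDK. Everything in it is a placeholder I made up for illustration: the prompts, the word limits, the 9-out-of-10 threshold, and the model name. A real bot would need far more careful prompt design and error handling.

```python
# Sketch: draft at most one short comment per post, then have the model grade
# its own draft against a high bar; only return it if the grade clears a threshold.
# Prompts, limits, threshold, and model name are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20241022"  # placeholder; any capable model

DRAFT_SYSTEM = (
    "You comment on forum posts only when a specific clarifying question is "
    "genuinely needed. If nothing clears that bar, reply with exactly NO_COMMENT. "
    "Any comment must be under 80 words, concrete, and free of filler."
)

JUDGE_SYSTEM = (
    "Rate how much this draft comment would improve the thread, from 0 to 10. "
    "Penalize vagueness, filler, and anything a careful reader already knows. "
    "Reply with a single integer and nothing else."
)

def draft_comment(post_text: str) -> str:
    msg = client.messages.create(
        model=MODEL,
        max_tokens=200,
        system=DRAFT_SYSTEM,
        messages=[{"role": "user", "content": post_text}],
    )
    return msg.content[0].text.strip()

def postability(post_text: str, draft: str) -> int:
    msg = client.messages.create(
        model=MODEL,
        max_tokens=5,
        system=JUDGE_SYSTEM,
        messages=[{"role": "user", "content": f"POST:\n{post_text}\n\nDRAFT COMMENT:\n{draft}"}],
    )
    return int(msg.content[0].text.strip())  # a real bot would handle malformed replies

def maybe_comment(post_text: str) -> str | None:
    draft = draft_comment(post_text)
    if draft == "NO_COMMENT":
        return None
    return draft if postability(post_text, draft) >= 9 else None
```

The point is that for most posts `maybe_comment` should return nothing; the bar in the second call is doing the work, not the drafting.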
I also think a very well designed summarizer prompt would be useful to build directly into the site, mostly because otherwise it’s a bunch of work to summarize each post before reading it. I’m often frustrated that there isn’t a built-in overview of a post: ideally one line on the homepage and a few lines at the top of each post. Posts where the author writes a title that accurately describes the contents, plus an overview at the top, are great but rarer than I’d prefer. The issue is that pasting a post and asking for an overview typically gets awful results. My favorite trick for asking for overviews is “Very heavily prefer direct quotes any time possible.” Also, call it compression, not summarization, for a few reasons; I’m unsure how long those concepts will stay distinct, but where they differ, what I want is usually the former.
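For what it’s worth, here is a sketch of what that might look like as a site-side helper, again against the Anthropic SDK. The quoted trick and the one-line-plus-a-few-lines shape are the parts I actually mean; the rest of the wording, the lengths, and the model name are made-up defaults I’d expect to tweak.

```python
# Sketch of the "compression, not summarization" prompt as a helper the site could call.
# The quoted trick and the one-line / few-lines shape come from the comment above;
# everything else (wording, lengths, model name) is an illustrative placeholder.
import anthropic

client = anthropic.Anthropic()

COMPRESSION_SYSTEM = (
    "Compress the following post; do not summarize it. "
    "Very heavily prefer direct quotes any time possible. "
    "Output exactly two parts: a single line suitable for a homepage listing, "
    "then 3-5 lines suitable for the top of the post. "
    "No filler, no praise, no meta-commentary."
)

def compress(post_text: str) -> str:
    msg = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; any capable model
        max_tokens=400,
        system=COMPRESSION_SYSTEM,
        messages=[{"role": "user", "content": post_text}],
    )
    return msg.content[0].text
```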
However, given the culture on the site, I currently feel like I’m going to get disapproval for even suggesting this. Eg,
if I wanted an LLM output, I would ask it myself
There are circumstances where I don’t think this is accurate, in ways beyond just “that’s a lot of asking, though!”. I would typically want to ask an LLM to help me enumerate a bunch of ways to put something, and then I’d pick the ones that seem promising; I would only paste highly densified LLM writing. I’d appreciate it if it became culturally unambiguous that the problem is shitty, default-LLM-foolishness, low-density, high-fluff writing, rather than simply “the words came from an LLM”.
I often read things, here and elsewhere, where my reaction is “you don’t dislike the way LLMs currently write enough, and I have no idea if this line came from an LLM but if it didn’t that’s actually much worse”.
I tried hedging against this the first time, though maybe that was in a too-inflammatory manner. The second time
Sorry for not replying in more detail, but in the meantime it’d be quite interesting to know whether the authors of these posts confirm that at least some parts of them are copy-pasted from LLM output. I don’t want to call them out (and I wouldn’t have much against it), but I feel like knowing it would be pretty important for this discussion. @Alexander Gietelink Oldenziel, @Nicholas Andresen you’ve written the posts linked in the quote. What do you say?
(not sure whether the authors are going to get a notification with the tag, but I guess trying doesn’t hurt)
My highlight link didn’t work, but in the second example this is the particular passage that drove me crazy:
The punchline works precisely because we recognize that slightly sheepish feeling of being reflexively nice to inanimate objects. It transforms our “irrational” politeness into accidental foresight.
The joke hints at an important truth, even if it gets the mechanism wrong: our conversations with current artificial intelligences may not be as consequence-free as they seem.
Thanks for articulating this – it’s genuinely helpful. You’ve pinpointed a section I found particularly difficult to write.
Specifically, the paragraph explaining the comic’s punchline went through maybe ten drafts. I knew why the punchline worked, but kept fumbling the articulation. I ended up in a long back-and-forth with Claude trying to refine the phrasing to be precise and concise, and that sentence is the endpoint. I can see that the process seems to have sanded off the human feel.
As for the “hints at an important truth” line… that phrasing feels generic in retrospect. I suspect you’re right: after the prior paragraph I probably just grabbed the first functional connector I could find (a direct Claude suggestion I didn’t think about too much) to move the essay forward. It does seem like the type of cliché I was trying to avoid.
Point taken that leveraging LLM assistance without falling into the uncanny valley feel is tricky, and I didn’t quite nail it there. Appreciate the pointer.
My general workflow involves writing the outline and main content myself (this essay actually took several weeks, though I’m hoping to get faster with practice!) and then using LLMs as a grammar/syntax checker, to help with sign-posting and logical flow, or to help rephrase awkward or run-on sentences. Primarily I’m trying to make my writing more information dense and clear.