LLM Style Slop is Absolutely Everywhere

Epistemic status: something of a rant. This isn’t meant to make claims about the general capabilities (or lack thereof) of LLMs (beyond their prose), but to share observations about how society seems to use them excessively without being perfectly candid about it.


Nanobanana’s take on the situation

Over the past few months, it has become a bit of a running gag that every evening, I inform my girlfriend of the multiple cases of LLM slop I encountered in the wild throughout the day[1]. And pretty much every day, I have several cases to report. These typically involve:

  • Blog posts (including at least four cases from 2026 on LessWrong[2], where I’ve even seen one author use (at least partially) AI-generated text in replies to comments)

  • YouTube videos

  • At least one keynote speech from the CEO of a tech company

  • Messages from coworkers

  • Several personal messages from a friend

  • News articles

  • Press releases

  • Tweets[3]

  • Many subreddits, including, ironically, ones about getting LLMs to sound more human-like, such as /r/humanizing/ and /r/humanizeAIwriting/

Exhibit 1: the all-time top-rated post on /r/humanize. If TwainGPT is so great, why didn’t they use it for this post? Many other posts and comments on that subreddit read as equally LLM-generated.

Exhibit 2: a post on thatprivacyguy.com. Not quite as in-your-face, but it exactly matches the “LLM article style”, from “silently” to the many short sentences to “The file sits at”. The rest of the article is no different in that regard.

Exhibit 3: https://browsergate.eu/. Well, what else would one expect from an “association of commercial LinkedIn users”?

I’m using the term “slop” pretty loosely throughout this post, primarily to mean LLM-written speech/text, most of which one might classify as “stylistic slop”. E.g., at least two of the four slop-style LessWrong posts I encountered seemed quite valuable and sincere; the authors likely just used LLMs for the writing itself while still communicating primarily their own thoughts and ideas. So, when I speak of slop, I don’t necessarily mean there’s no value behind something, but rather that some people don’t appear to care much about the words they communicate, and that LLMs’ way of speaking is showing up absolutely everywhere on the internet, including in many places where I wouldn’t have expected it.

I don’t think anyone here will be surprised that this happens a lot. But I am still occasionally shocked by how frequent it is. For blog posts and articles this seems somewhat expected, but I was more surprised to find it in personal messages, Slack messages, and online comment sections (isn’t the overhead of using an LLM for those higher than the time it saves?). And in videos, because creating a video involves many steps that an LLM won’t notably speed up, so the effort-saving aspect is much less relevant there than when the medium is text.

To name a few examples of videos that appear to involve a lot of LLM speech[4]:

I don’t want to go too deep into why I have a high credence that the scripts of these videos are at least partially LLM-generated, but not everyone here will be familiar with the unique tells of LLMs. Different people may also pick up on very different patterns. And the most infamous tell of LLM text, the em dash, is one you can’t hear or see in a video. So I’ll just give a few examples of the types of patterns that are becoming ubiquitous because today’s frontier models are completely in love with them:

  • “Not X — Y”, which most people are aware of by now, but nonetheless many creators don’t seem to mind in their scripts. E.g., “That’s about two water molecules. Not droplets — molecules” and “To do that, we don’t just bring in a laser — we guide the light to a plasmonic transducer” from the Seagate video, or “The paper didn’t call what was happening AI psychosis. They called it disempowerment.” and “Conversations with the potential for severe or moderate reality distortions got more thumbs up, not less.” in the Sky News video.

  • Click-bait-style sentences that do nothing but announce what comes next, e.g. “But here’s the most unsettling part of all this” in the Sky News video, or “But this is only the tip of the iceberg, and I think this next point shows how it’s a really tight balancing act to design around.” from Zomboman, or “But the biggest unlock is what this now means for me and my investors.” from GEN, or “Still with us? Good. ’Cause we’re just getting to the cool part.” by Seagate.

  • Many short sentences with a certain “punchline rhythm”, e.g., “Nothing in your code changed. The model did.” from the Devsplainers video, or “So they changed the prompts. They changed the industries. They gave it all new context. They even tried bribing the damn thing with rewards. Nothing worked. The bias barely moved.” from Mo Bitar.

  • There’s something very distinctive about when LLMs try to be witty, which I find a bit hard to describe, but there’s a lot of it in the Mo Bitar video, such as “[...] PUBG, the game where you parachute onto an island stark naked and beat a stranger to death with a frying pan, like it’s a Tuesday in Florida”, “And this man follows every step like it’s an IKEA manual for screwing people.”, “And Reddit explodes, because the letter reads like a ransom note that went through Grammarly.”, or “He then pitches it his actual idea, which is a little AI turret that sits on your kitchen counter and sprays your cat with water if it tries to climb up. Basically, a Roomba that bullies animals.”

Asking Opus 4.6 to wittily explain the game PUBG. Its very first suggestion includes “parachute onto an island” and a reference to frying pans. Maybe that’s just what’s most salient about this game, or maybe that’s what happens when you ask an LLM to be witty.
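Tells this regular are easy to turn into a crude flagger. Below is a toy Python sketch; the regexes, pattern names, and sample text are my own illustrative assumptions, not a validated detector, and real detection is far messier than matching two patterns:

```python
import re

# Toy regexes for two of the stylistic tells described above.
# These patterns are illustrative assumptions, not a validated
# detector; they will misfire on plenty of human prose.
SLOP_PATTERNS = {
    # "Not droplets — molecules" style contrast
    "not_x_dash_y": re.compile(r"\bnot \w+ ?[—–-]+ ?\w+", re.IGNORECASE),
    # "But here's the most unsettling part" style announcement
    "heres_the_part": re.compile(r"\bbut here's the (?:\w+ ){0,3}part\b",
                                 re.IGNORECASE),
}

def count_slop_patterns(text: str) -> dict:
    """Count non-overlapping surface matches for each toy pattern."""
    return {name: len(rx.findall(text)) for name, rx in SLOP_PATTERNS.items()}

sample = ("That's about two water molecules. Not droplets — molecules. "
          "But here's the most unsettling part of all this.")
print(count_slop_patterns(sample))
# {'not_x_dash_y': 1, 'heres_the_part': 1}
```

The point is only that these tells are regular enough that even naive string matching catches them; a human reader’s intuition does much better still.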

There are many, many more such patterns, and each of the videos listed above contains dozens of such cases. I’m not claiming they’re entirely LLM-written, and some seem less LLM-like overall than others, but I am pretty certain that a substantial amount of the words in the video scripts were originally produced by an LLM. That said, I don’t think there’s any way to prove that the examples I mentioned really are (partially) AI-generated, and I may be wrong about any individual case.[5]

What Does This Tell Us About the World?

Overall, there seem to be at least three possible explanations for the recently much higher frequency of LLM speech patterns on the internet:

  1. Slop everywhere: The world really is incredibly full of humans presenting LLM-generated text as their own.

  2. Sloppification of human brains: People have talked to LLMs so much that they inadvertently picked up their patterns of speech, and some of these people just actually sound like that now. So perhaps some of these examples above are written by actual humans who managed to nail LLM style perfectly.

  3. Nothing to see here: People have always spoken like this on the internet, and there’s actually nothing going on; I’m just imagining things and am now seeing the entire internet through my slop-shaped confirmation bias.

As you can imagine, explanation #1 seems most likely to me, even though I do acknowledge that “sloppification of human brains” is a real effect and I sometimes catch myself reaching for phrases that I probably picked up from LLMs[6]. However, I doubt that #2 can explain the recent omnipresence of LLM speech, for three reasons:

  1. In a world where the main driver of LLM speech in the wild is people accidentally picking up these speech patterns, we would expect LLM speech to occur more often among people who use LLMs a lot. But as far as I can tell, this doesn’t seem to be the case. The people I know personally who use LLMs the most still speak and write perfectly “normally” and human-like, and LLM style slop seems to be emitted about equally often by people who don’t have much experience with the technology.

  2. I don’t think I’ve ever encountered a person who speaks like this in person; it only happens in writing (or in narrating/presenting a pre-written script). If this really were an “accidental habit” effect, I’d expect at least part of it to affect live speech as well.

  3. I would expect a more gradual distribution, where you see many people writing a bit like LLMs, fewer writing quite a bit like LLMs, and fewer still who sound a lot like LLMs. But my impression is that the “how LLM-like does this person’s writing sound” distribution is much more bimodal: many published artifacts, like my examples above, look very LLM-like, most other things sound not at all LLM-like, and relatively few cases fall in between.

The “nothing to see here” explanation also seems unlikely to me overall, as the evidence that slop style really is everywhere seems pretty overwhelming. Although I could imagine that I’m sometimes reacting too strongly, and some of the cases of suspected slop I encounter really are just false positives where I’m reading too much into a few stylistic coincidences. For instance, “And this is where it gets interesting”-type phrases may never have been that unusual, and I just now started paying attention to them.[7]

All that said, even if it’s true that many people out there are casually presenting LLM speech as their own words, I make no claim about how much effort any of these creators put into their pieces overall. It certainly reduces my trust that they’re doing thorough work, but that’s merely a heuristic that may, of course, be wrong in any given case.

Why Are People Doing This?

Why would creators (and friends, and colleagues, and CEOs giving keynote speeches) rely so heavily on LLM-written text without disclosure? I haven’t asked them, so I can only speculate: possible reasons may range from laziness, to a lack of time and pressure from deadlines, to simply not seeing a problem with it and considering LLMs writing text for them a completely acceptable case of tool use. That last point is one you can certainly make, of course, even though I’d largely disagree, as I’ll explain later.

Part of the reason is very likely also that many people underestimate how recognizable LLM language really is (unless one puts real effort into prompting it away, and my impression is that even that is very hard to do). And indeed, many people who use LLMs to write text they publish at least take the one extra step of replacing em dashes with some other character to make it less obvious that the text is LLM-written. So, many people do seem to prefer to hide that fact.

Among the people I’ve spoken to (a somewhat arbitrary sample of non-rationalists), more than half seemed at least aware of the “Not X — Y” pattern. And yet, a successful tech CEO and his team, as well as the people who made that Seagate video, averaged about one instance of that widely known pattern per minute without realizing it makes their speech sound LLM-generated[8]. Which makes me think that surprisingly many people really are oblivious to the fact that LLM writing is easy to recognize, and that when you use it, (some) people will be able to tell.

Why Do I Care?

There are a variety of reasons why this entire relatively new development seems less than ideal to me.

Honesty / Truth / Authenticity

First and foremost, it just seems dishonest when people sell LLM writing as their own words. Sure, there are many degrees here. Some people may invest a lot of cognitive work ahead of time and come up with well-thought-out lists of ideas/arguments/whatever, and then merely use an LLM to connect the dots and turn their ideas into flowing prose. Perhaps they then invest even more time to meticulously check whether the LLM’s output stays truthful to their original ideas. Others may use LLMs because they have a hard time phrasing something in a diplomatic, non-offensive way when they are angry or annoyed about someone or something. Still others may not feel comfortable writing in English, or whatever language they publish their work in[9]. I can certainly sympathize with such cases. But I’d be very surprised if these were the most common ones. E.g., in the videos I linked to, none of these caveats seem to apply.

Two people I know have shared blog posts with me in the past half year that “they wrote” but that, very clearly, were written by LLMs, from the em dashes to the typical section headings to all the slop patterns I described earlier in the post. Again, it’s hard to say whether they invested any real effort into these posts, but based on how little time they seemingly spent writing or editing, that seems unlikely. I’m happy to read something a person has put actual effort into, but if it’s not worth your time to write it, then it’s not worth my time to read it.

Similarly, a friend and some work colleagues of mine have repeatedly used LLMs to write chat messages, or sometimes Google Doc comments, even in entirely informal one-on-one interactions. I see no issue with doing so when you flag it explicitly, like “Here’s Claude’s summary of my thoughts on the issue” or whatever, but often this was not the case, and then it seems pretty deceptive.

Correlated Communication

Many people are familiar with the anchoring effect: if you ask others to estimate some number, but first present them with your own guess, this tends to systematically skew their estimates towards yours. One explanation is that when people take a guess, they intuitively have some fuzzy range of plausible-seeming values in mind. When not anchored, they might do a good job of landing somewhere near the middle of that range. But when you anchor them, they may instead start out at the anchor and gradually move towards their own plausible range until they’re satisfied, which leads to systematically different responses.


Depiction of anchoring: instead of sampling without bias from your intuitively plausible range of some value, you unwittingly start from the anchor and move in one direction until the value seems plausible enough. (Image generated with ChatGPT.)
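The mechanism can be made concrete with a toy simulation (all numbers are made up for illustration): an unanchored estimator samples from inside their plausible range, while an anchored one starts at the anchor and walks toward that range, stopping as soon as the value seems plausible enough, i.e., near the range’s closest edge.

```python
import random

def unanchored_estimate(low: float, high: float, rng: random.Random) -> float:
    # Without an anchor: sample somewhere inside the plausible range,
    # landing on its middle on average.
    return rng.uniform(low, high)

def anchored_estimate(low: float, high: float, anchor: float,
                      rng: random.Random) -> float:
    # With an anchor: start at the anchor and move toward the plausible
    # range, stopping once the value seems "plausible enough", i.e. just
    # inside the near edge of the range (plus a little noise).
    if anchor < low:
        return low + rng.uniform(0, 0.1 * (high - low))
    if anchor > high:
        return high - rng.uniform(0, 0.1 * (high - low))
    return anchor  # anchor already plausible: no reason to move

rng = random.Random(0)
low, high, anchor = 100.0, 200.0, 20.0  # anchor far below the range
free = [unanchored_estimate(low, high, rng) for _ in range(10_000)]
pushed = [anchored_estimate(low, high, anchor, rng) for _ in range(10_000)]
print(sum(free) / len(free))     # ≈ 150, the middle of the range
print(sum(pushed) / len(pushed))  # ≈ 105, dragged toward the anchor
```

Both estimators only ever produce “plausible” values, yet the anchored one is systematically biased toward the anchor, which is the analogy to approving an LLM draft that is merely close enough to what you meant.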

I’d argue that a similar thing happens in writing. Say you have some idea in your head that you want to communicate. When you write on your own, you try to find the words that best match that idea; you basically aim for the “middle” of the conceptual space that you want to describe. If, instead, you let an LLM write for you, then chances are it will describe something subtly different, or focus on different aspects, or hedge in different places than you would. But it’s just close enough to what you had in mind that you give it your stamp of approval.

One issue with this approach is that it makes your message less precise. As a consumer of your writing, I likely care more about what you actually think than what’s just close enough to what you think that you’d approve it. What’s more, this can lead to a high correlation in the communication of many people, where, say, Claude’s or ChatGPT’s world model and propensities suddenly taint huge amounts of the things that are being shared on the internet. Of course, this happens already through the fact that many people talk to these LLMs and use them for research and reasoning purposes. But then also letting them choose the words that you project out into the world magnifies this effect even more.

Bad Signaling

When people do put a lot of effort into whatever they create but still let it look superficially like LLM slop, that’s also not optimal: they’re sending a broken signal, telling the world “this is slop” when in fact it isn’t! So people like me will likely not engage with their work, even though it may be valuable, because the evidence we see suggests they took shortcuts and wanted to get something out quickly, likely at the expense of accuracy and quality.

Imagine a journalist friend of yours puts a huge amount of work into some investigative piece, but then publishes it with countless typos because they didn’t bother to go that last little step of polishing it. I’d be a bit mad about them being so sloppy about one thing that then casts doubt on the entire rest of their work. Using LLM writing, to me, seems pretty similar.

Aesthetics

I can imagine that many people don’t care much about this, but freedom of style and expression seems like a nice thing to me. I like it when people have their own quirks and patterns and occasionally do interesting things with the tools their language provides. But now, it seems like the English language in particular is progressively collapsing. Slop style is taking over all kinds of published writing, and few people seem to care or notice. People write articles or create videos that hundreds of thousands of people will read, and don’t even invest an extra twenty minutes to get rid of the slop phrases or make the result sound like their own voice. And then everything, everywhere, sounds more and more the same.

A Unique Point in Time

I acknowledge that this post may have a bit of a negative vibe. But on the flip side of all of the above, there is one upside to the situation: we’re at a point in time where it’s often unusually easy to know which people you can ignore because they (very likely) take serious shortcuts in their thinking, judgment, and communication. At least if you agree with my take that selling undisclosed LLM writing as your own is a strong signal of low output quality. Three things seem to be true at the same time today:

  1. It’s very easy to intuitively detect most[10] LLM-written text, once you’ve deliberately engaged with it a bit.

  2. Yet, the vast majority of people appear to be entirely oblivious to this fact and just use LLM writing for their creations, presenting it as their own.

  3. The labs don’t seem to particularly care about fixing this. Almost all LLMs sound extremely similar, and even elaborate prompting hardly works as a mitigation. Perhaps building coding agents is just so much more profitable than making LLMs produce non-slop prose that the latter hasn’t been high on the priority list? Or perhaps, for some reason, this problem is much harder to solve than it looks. It probably does get progressively harder, given that the share of slop-like language on the internet is rapidly increasing.

What Do We Do With This?

For those of you who haven’t engaged much with what LLM speech sounds like, it may be worthwhile to do so: both to recognize when you’re exposed to slop, and to avoid producing things that sound like slop to others. When letting LLMs write for you, be aware that your text may contain many patterns that are not apparent to you but are to others, and that may lead to some unfavorable judgments.

As JustisMills has recently put it in a related post:

Long before we adapt our behaviors or formal heuristics, human beings can sniff out something sus. And to most human beings, AI prose is something sus.

If you use AI to write something, people will know. Not everyone, but the people paying attention, who aren’t newcomers or distracted or intoxicated. And most of those people will judge you.

I end up with two main takeaways about all of this.

First, as a general realization about the state of the world, the last few months taught me something akin to the Gell-Mann Amnesia effect. I now realize, much more than before, how much of the media out there, and sometimes even supposedly personal messages, is partially or mostly LLM-written. It’s probably on me that I didn’t anticipate the extent of slop in the world earlier. But the first-hand experience of just how many people take such shortcuts when they think others won’t notice has left a mark.

And second, adding to the JustisMills post linked above, I’ll end with an appeal to those who rely heavily on LLMs as writing partners. I’ve used LLMs for countless purposes, and I’m not here to argue about their general capabilities (or any lack thereof). I let them write close to 100% of my code. I use them for brainstorming, some forms of fact-checking, general feedback on my writing, and more. And in the past, I have occasionally used them to aid my writing directly. But the more I noticed their extremely dominant speech patterns, the further I kept them away from the actual writing process. And I wish others did the same. I can only speak for myself here, but I, for one, want to hear your own words, as a direct and dense representation of your thoughts, and not any LLM’s lossy, biased, and stylistically stale interpretation of them.

  1. ^

    I’m also very fun at parties, I think.

  2. ^

    Two of these were from earlier this year, though, before the new LLM policy was announced.

  3. ^

    Just as a test, I logged into X for the first time in months to look at the top of my (admittedly not very curated) feed. Ignoring the one-liners, 5 out of the first 5 longer tweets read like AI slop (one of which was all lowercase, which could indicate either that the author just learned to write that way, or that they asked their LLM to do it, which really wouldn’t surprise me), after which I stopped scrolling. Admittedly, Twitter in particular may actually incentivize people to write in that “punchline” style that LLMs love, so I assume the risk of false positives is higher here than elsewhere. Besides, even if X were full of AI slop, I’d also be the first to argue that one shouldn’t judge a tool by its average output; if I just followed the right people and blocked the countless slop producers out there, I wouldn’t have this experience. However, the point remains that slop (style) appears to be the default almost everywhere, and unless you’ve engaged with a given platform with intention and know what you’re doing, slop is likely what you’ll find.

  4. ^

    OK, this one does look like obvious slop based on title and thumbnail alone. When it was recommended to me, I only clicked it because I already suspected it would make a good case for this post. So perhaps I’ve trained YouTube a bit to show me slop, after all? But then again, I dislike all videos that contain LLM speech, so I would hope that provides sufficient counter-incentive.

  5. ^

    While there are AI detectors out there, and some seem to be quite reliable as far as fully AI-written texts and fully human-written texts go, I’m less convinced of their judgment on mixed content. And in the case of YouTube videos, we don’t even have the original transcript with all its punctuation, but can only recreate an imperfect copy.

  6. ^

    Like that very sentence. “real” is an adjective LLMs just love, and “reaching for phrases” is one of their favorite types of metaphors. Oops.

  7. ^

    If someone genuinely thinks the “nothing to see here” explanation is likely, I’d be happy to collaborate on some way to test this.

  8. ^

    Or maybe they did realize, but just didn’t care, or didn’t think that would lead to any negative reactions? Seems a bit unlikely to me, but who knows.

  9. ^

    Although I’d argue the way to go then would be to write in a language you’re comfortable in, and then translate the text. This would avoid most LLM style slop, even if you use an LLM for the translation.

  10. ^

    Naturally, I can’t be sure if it really is “most”. I can only detect what I can detect, and even for those cases, I can’t be entirely sure. But if some people put enough effort into their text creation that their LLM slop is truly not detectable as such, then at least they’ve put effort into something. And then perhaps this also applies to other parts of their process. :)