When you first run into it, it’s fun and original. After a few months, you start experiencing physical pain every time you see “You’re absolutely right! This isn’t only X — it’s Y.”
I think this is an artifact of how LLMs are constrained during fine-tuning rather than something inherent to the medium. I agree that the speech patterns are incredibly grating—I’m one of the people who found them grating even when it was only humans who talked that way. If AI companies were sufficiently motivated, I think they could quite thoroughly eradicate these verbal tics and create a much more pleasant writing tone.
I think the bigger issue with “AI slop” is that it doesn’t convey information. If I write a thousand-word essay, then the information I wanted to convey to you was best expressed over those thousand words[1]. You get information about what I want, what I believe, and why I want/believe those things, and you can use that information to better model the behavior of me and people like me. If I ask an LLM to generate a thousand-word essay supporting my one-sentence claim, then the information I’m conveying to you is “I support this one-sentence claim”, and everything else is just noise.
It’s related to what you say about compression, but I think this relates specifically to the absence of a human writer, and is thus not solvable through technical means. Even if LLMs could write in such a way that no formal mathematical measure of complexity showed their output to be simpler than a human’s, the useful information conveyed would still be less.
I run into this a lot in coding too—I’ve found that I can consistently get better results from Claude by telling it to rewrite the code more concisely before I go through it (which makes me wonder why Anthropic hasn’t already tried to engineer this into Claude Code via prompt or something).
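For what it’s worth, here’s a rough sketch of that “write, then condense” pass, scripted against the Anthropic API instead of done by hand in Claude Code. The model name and prompt wording are just placeholders for whatever you’d actually use, not anything Claude Code does internally:

```python
# Minimal sketch of a two-pass "write, then condense" workflow using the
# Anthropic Python SDK. Model name and prompts are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
MODEL = "claude-3-5-sonnet-latest"  # substitute whatever model you actually use

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Pass 1: generate a first draft of the code.
draft = ask("Write a Python function that parses an ISO 8601 date string "
            "and returns a datetime.date, with input validation.")

# Pass 2: ask for a more concise rewrite before human review.
concise = ask("Rewrite the following code more concisely without changing its "
              "behavior. Remove redundant comments, dead branches, and "
              "unnecessary abstraction:\n\n" + draft)

print(concise)  # this is the version I'd actually read through
```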
[1] Depending on my skill as a writer, it could be less, but it’s usually within an OOM.