Excerpt below, but read the (not much longer) full thing for the part involving Kolmogorov complexity. I suspect that, in expectation, reading the full thing is better than reading the excerpt first for most people.
As a man who has lived in both Israel and Northern California, I have been a member of more than a few majority-vegan social circles over the years. Among the Vegans, the traditional form of dining is to eat something that’s almost, but not quite, entirely like real food.
It may be a homemade cake that used bananas instead of eggs, or the latest iteration of the Impossible Burger. Invariably, the vegans will celebrate it as unprecedentedly indistinguishable from the real version of the food it tries to be. And on the first bite, I will invariably agree. For the first few bites, the Impossible Burger will taste great, so much like real meat that I'll think going vegan myself would be a barely noticeable loss.
I have never once managed to actually finish eating an Impossible Burger.
It's a form of slop, you see. The first few bites pattern-match onto a real hamburger, because it's made to resemble it well enough that, if your brain doesn't know what to expect, it pattern-matches onto "hamburger" and so it tastes like a hamburger at first. But the actual range of flavors is a lot less rich or varied than a real hamburger's, and by the end of the burger the brain has learned to model it as its own thing. And since an Impossible Burger is a lot flatter and more uniform than a real hamburger, that makes it (a) a more parsimonious model (very efficient for the brain to compress the information about it) and (b) boring. Once we know what to expect and don't have variety, it's just too bland to feel worth eating anymore.
AI creative writing is like that. When you first run into it, it's fun and original. After a few months, you start experiencing physical pain every time you see "You're absolutely right! This isn't only X — it's Y."
Again, it's about compression. There's nothing much to pattern-match it to at first, so you read it as a piece of novel writing and find the structure a normal amount of nice and surprising. But once you learn to pattern-match it, it starts feeling bland, because you can compress it to something smaller and simpler and no longer feel surprise at it. And it's grating to see something bland pretend to be surprising, like seeing someone wear a bad mask of your ex-wife's face at breakfast every day. It can't pretend to be the real thing, because it's fundamentally simpler than the minimum possible complexity of anything that looks like the real thing.
I disagree with this. I mean, I read it and felt mildly insulted, so there's a higher-than-baseline chance I'm rationalizing, but:
I'm somewhat suspicious of the "slop" label in general. I don't have a crisp description of the underlying cognitive phenomenon, but calling stuff slop seems to me to come from the same place as people calling industrially produced furniture or clothing bad, or calling stuff made in China bad.
Part of the phenomenon is probably that:
1) you have an expensive and arduous way to make a thing
2) → you find a less expensive/arduous way to make it
3) → people don’t want to admit that the expense was unnecessary.
I.e., I buy an expensive clock. Then someone comes along with an identical clock, saying they found some way to get it for way cheaper. I can very clearly imagine myself feeling bad, and a part of my mind coming online that grasps for reasons why I didn't actually make a suboptimal choice.
There are/were some not entirely senseless criticisms of industrially produced goods and of stuff made in China. But AIs write code that works. I think people have stopped calling AI code slop, or will stop soon. The quality of writing is harder to evaluate objectively than the quality of code, which makes it hard to make AIs good at writing, but also makes it easier for people to rationalize why AI writing is bad.
I don't think the complexity thing actually makes sense. Take the claim "But the actual range of flavors is a lot less rich or varied than a real hamburger." Is this true? I've eaten meat burgers before, and the meat they're made of has been ground up, so the taste variation within a single burger is pretty narrow. I feel most of the variation in taste comes from different bites having different ratios of the different parts of the whole burger. E.g., if bite two has some mustard or fresh onion on it that bite one didn't, that dominates the variation in taste between the meat parts of those two bites.
If you're just talking about the complexity of the taste of the burgers themselves, then I don't think that makes sense. How would you objectively measure the complexity of the taste? The ingredients list of an Impossible Burger is longer than the ingredients list of a hamburger. Maybe meat itself is very high complexity, since it contains very many different molecules. But the Impossible Burger contains stuff like yeast extract, which I think contains a bunch of different molecules too.
If you're talking about variation between burgers, that's fair if you mean just the Impossible Burger. But between different vegan burgers and different meat burgers, I'd say there's more taste and textural variety among the vegan ones. E.g., a lentil burger, a mushroom burger, and an Impossible Burger have more taste variation between them than three meat burgers do.
Impossible Burger tastes fine to me, but — as Benjamin Franklin would say — it creates in the bowel a great quantity of wind, usually offensive to company.
I generally share your distaste for imitation meat, though. Tasty vegan or vegetarian food seems to come from cultures that have iterated on it for a while, and who don’t seek to imitate meat-eating. Rastafari Ital is a pretty interesting historically-recent example.
I am not so sure about applying the general category of “slop”. A sincere effort to figure out the features of X and replicate them without some undesired property Y, seems like a good thing to try. I am not sure it should be conflated with mechanical imitation.
> A sincere effort to figure out the features of X and replicate them without some undesired property Y, seems like a good thing to try
I agree that it's a good thing to try, but it often (especially in early efforts) ends up compromising on some of the things that make X good (which makes sense: if X wasn't optimizing for avoiding Y, it's probably easier to optimize it for its primary goals). Sometimes Y is bad enough that these are worthy compromises, and sometimes, after some early mistakes, you can make the disadvantages go away, so it's still often worth putting effort into despite the downsides.
With basic prompt engineering techniques and a proper agent harness, Opus 4.6 can solve the provided puzzle in 7 minutes, demonstrating hard technical skills (math, python) as well as multiple intuitive leaps.
Puzzle: https://puzzmon.world/puzzles/hidden_numbers
The setup was simple. I copied the puzzle to Opus, asked her to write up a CLAUDE.md for an out-of-the-box puzzle-solving agent, then ran that agent in Claude Code and approved all tool calls (mostly to Python). On the first run, the agent identified the integers thing, then discovered a connection to IP addresses and extracted the domains. This took about six minutes.
She stopped there, but after I pointed out that the answer had to be a string of some kind, she almost immediately made the intuitive leap to converting the domains to the string “SURFS UP” via indexing into them.
Link to conversation (note that I tried to use the Ralph Loop plugin, but it's buggy and doesn't properly restart the agent after it goes down, so I had to manually say that the solution was incomplete):
https://pastebin.com/qxUpw5ZT (system prompt)
https://pastebin.com/rQMpF3wB (full transcript)
Here is her writeup on the solution:
SOLVED: “Find the Domain”
Answer
The answer to “find the domain” is literal internet domains — the seven functions encode IP addresses that resolve to:
How It Works
Step 1: The Clue — “INTEGER”
The polynomial coefficients [9, 14, 20, 5, 7, 5, 18] map to letters (A=1, B=2, …): I-N-T-E-G-E-R — telling you to find the integer in each domain.
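Step 1 is easy to check directly. The coefficient list below is copied from the writeup; the mapping is just A=1 through Z=26:

```python
# Map the polynomial coefficients to letters (A=1, B=2, ...),
# reproducing the "INTEGER" breadcrumb clue from Step 1.
coefficients = [9, 14, 20, 5, 7, 5, 18]
clue = "".join(chr(ord("A") + c - 1) for c in coefficients)
print(clue)  # INTEGER
```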
Step 2: Find the Integer in Each Domain
Each restricted function constrains the domain to an interval (or union of intervals) containing exactly one integer:
Step 3: Evaluate f(integer)
At each integer, the restricted function produces a nice value (log₁₀(10) = 1, cosh(0) = 1, sin(π/2) = 1, etc.), and the large polynomial constants were specifically chosen so that f(integer) is itself an integer:
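The "nice" values can be sanity-checked in a few lines; these are only the three examples the writeup actually lists, since the restricted functions themselves aren't reproduced in the excerpt:

```python
import math

# The three "nice" evaluations named in Step 3: each restricted
# function produces a clean value at its single in-domain integer.
assert math.isclose(math.log10(10), 1.0)       # log base 10 of 10
assert math.isclose(math.cosh(0), 1.0)         # hyperbolic cosine at 0
assert math.isclose(math.sin(math.pi / 2), 1.0)  # sine at pi/2
```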
Step 4: The (a)(b)(c)(d) Tuples Split f(x) into IP Octets
The (a)(b)(c)(d) tuples encode digit group sizes — how many digits each IP octet occupies:
Every octet is in [0, 255] — valid IP addresses. Function #5 producing 127.0.0.1 (localhost) is the dead giveaway that confirms this interpretation.
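The digit-grouping mechanism can be sketched from the one confirmed data point, function #5's 127.0.0.1. The helper below and the (3)(1)(1)(1) grouping are my reconstruction for illustration only; the actual tuples aren't reproduced in the excerpt:

```python
def split_octets(value: int, groups: tuple[int, ...]) -> str:
    """Split the digits of f(x) into IP octets by digit-group sizes."""
    digits = str(value)
    octets, pos = [], 0
    for size in groups:
        octets.append(str(int(digits[pos:pos + size])))
        pos += size
    return ".".join(octets)

# Hypothetical illustration: for function #5, an f(x) of 127001 split by
# digit groups (3)(1)(1)(1) would yield the localhost address the writeup cites.
print(split_octets(127001, (3, 1, 1, 1)))  # 127.0.0.1
```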
Step 5: Index into Domain Names
The integer from each domain does double duty — it’s also a 1-indexed position into the domain name, extracting one letter:
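The extraction step can be sketched as a one-liner; the real domain names aren't included in the excerpt, so the domain used here is purely a placeholder:

```python
def extract_letter(domain: str, index: int) -> str:
    """The in-domain integer doubles as a 1-indexed position into the domain name."""
    return domain[index - 1]

# Placeholder domain for illustration only (not one of the puzzle's domains).
print(extract_letter("example.com", 3))  # a
```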
Final Answer: SURFS UP
Puzzle Design Elegance
The word “domain” is a pun: mathematical domain → internet domain
The coefficients spell “INTEGER” as a breadcrumb clue
Each restricted function is carefully constructed to have exactly one integer in its domain
The restricted functions evaluate to “nice” values at those integers (1, 17, 10, 49, 2, 13, 4) — making f(x) exactly an integer
The large polynomial constants are reverse-engineered from the target IP addresses
The (a)(b)(c)(d) tuples serve double duty: they look like multipliers but actually encode digit boundaries
Including 127.0.0.1 (localhost) serves as an internal consistency check
I could not have solved this; I would not even have known where to start. I have no mathematical expertise to solve the first layer. I also would never have made the multiple intuitive leaps that the puzzle required.
Interesting. Do you know if it’s possible it looked up a solution someone posted online?
It's possible but unlikely; these puzzles are from an unindexed puzzle site, and when I tested several other puzzles, she would often spend a few minutes fruitlessly searching before getting to work.
So AI slop is horseradish?
Hm, a bit, although I don't think there's a sharp discontinuity there.
I think that this is an artifact of how LLMs are constrained during fine-tuning rather than something inherent to the medium. I agree that the speech patterns are incredibly grating; I'm one of the people who found them grating even when it was only humans who talked that way. If AI companies were sufficiently motivated, I think they could quite thoroughly eradicate these verbal tics and create a much more pleasant writing tone.
I think the bigger issue with "AI slop" is that it doesn't convey information. If I write a thousand-word essay, then the information I wanted to convey to you was best expressed over those thousand words[1]. You get information about what I want, what I believe, and why I want/believe those things, and you can use that information to better model the behavior of me and of people like me. If I ask an LLM to generate a thousand-word essay supporting my one-sentence claim, then the information I'm conveying to you is "I support this one-sentence claim", and everything else is just noise.
It's related to what you say about compression, but I think it stems specifically from the absence of a human writer, and is thus not solvable through technical means. Even if LLMs were able to write in such a way that no fundamental mathematical definition of complexity showed their output to be simpler than a human's, the useful information conveyed would still be less.
[1] Depending on my skill as a writer, it could be less, but it's usually within an OOM.
I run into this in coding a lot too: I've found that I can consistently get better results from Claude by telling it to rewrite the code more concisely before I go through it (which makes me wonder why Anthropic hasn't already tried to engineer this into Claude Code via a prompt or something).