That was coherent, and I moderately enjoyed reading it.
Science fiction editor Teresa Nielsen Hayden once wrote a blog post “Slushkiller”, which described what it was like for an editor to sort through “slush”, the unsolicited manuscripts submitted by aspiring authors. “Slush” is a lot like “slop”, except it’s human-written. And much of it is terrible, to the point that first-time readers become “slush drunk.”
TNH classified slush using a 14-point scale, starting with:
Author is functionally illiterate.
Author has submitted some variety of literature we don’t publish: poetry, religious revelation, political rant, illustrated fanfic, etc.
Author has a serious neurochemical disorder, puts all important words into capital letters, and would type out to the margins if MSWord would let him.
…and ending with:
(You have now eliminated 95-99% of the submissions.)
Someone could publish this book, but we don’t see why it should be us.
Author is talented, but has written the wrong book.
It’s a good book, but the house isn’t going to get behind it, so if you buy it, it’ll just get lost in the shuffle.
Buy this book.
I feel like this short story falls somewhere in the 7-11 range. If I were feeling very generous, I might go with a variation of:
The book has an engaging plot. Trouble is, it’s not the author’s, and everybody’s already seen that movie/read that book/collected that comic.
Except, of course, LLMs don’t fail in quite the same way as humans. They are quite competent at putting words together coherently, almost to a fault if you encourage them. But there is a deep underlying predictability to LLMs at every possible level. They almost always do the most predictable version of whatever it is they’re trying to do. Which isn’t surprising, coming from a next-token predictor. For programming (my profession), this is arguably an advantage. We often want the most predictable and boring code that fits some (possibly stringent) constraints. For writing, I think that you probably want to aspire to “delightful surprise” or “I never knew I wanted this, but it’s exactly perfect.”
But the fact that LLMs can now write very short stories that are technically solid, moderately humorous, and not-totally-pointless is a big step. Certainly, this story is better than a clear majority of what humans used to submit unsolicited to publishing houses. And it compares well to the median entry on Royal Road’s “latest updates.”
In my recent experiences with Claude Code, I would estimate that it beats 75-80% of the college interns I’ve hired in my career (in first week performance).
If you can’t even imagine AGI within our lifetimes (possibly after 0-2 more transformer-sized breakthroughs, as Eliezer put it), you’re not paying attention.
(I am pretty far from happy about this, because I believe robust alignment of even human-level intelligences is impossible in principle. But that’s another discussion.)