I infer you mean “the claim that LLM writing is a slog to get through.” Which yes: I wait for the day when most of the LLM writing I see is not a slog to get through. I hope it comes! It’s perfectly possible I see lots of LLM writing that was created by prompting wizards using state-of-the-art $200-a-month models in ways not dreamt of in my philosophy. If so, great. If you can fool me, we both win.
But I see so much LLM writing, in the wild and professionally, that just sucks. Whether it sucks because of some fundamental property of LLMs (doubtful), or because of path dependencies in the LLMs that are commercially available (getting warmer), or because of bog-standard skill issues on the part of the relevant centaur (could be), one clear fact is that the authors don’t think there’s a problem.
In situations like this, where I see a lot of people doing something that seems like a pretty big mistake, I want to just say “don’t do that.” Because if I say “don’t do that, unless you’re good at it,” well, if the people making the mistake knew they weren’t good at it, they’d already not be making the mistake! I’d rather say “don’t go cave diving” and let the tiny minority of expert, professional cave divers who know and relentlessly apply all the proper cave diving safety rules smile knowingly and ignore me. Of course, the amateurs can ignore me too. But I am here advising otherwise!
Yeah, it’s an interesting question how good human detection is. My guess is that people who are paying attention are getting better at sniffing out AI faster than AI is getting less distinctively scented, but “people who are paying attention” is a heck of a sleight of hand.
Overall, I suppose my main feeling is that I see AI-generated stuff all the time in lots of different arenas, and I see other people judging it, and it sort of feels like an Eternal September where some people are freshly excited by some AI use case and don’t realize how it comes off (or do, but it hasn’t occurred to them that it comes off badly for good reasons as well as out of straightforward prejudice). There may also be lots of people using AI so skillfully that they don’t fall into any of these traps. It’s even possible they far outnumber the people who are (I think) bumbling. Perhaps my doubt about this last proposition is a stuck prior. But if so, it is well and truly stuck.