If the goal is maximizing skill at writing, one should use LLMs a lot. What you wrote about the likely failure modes of doing so is true, but not an inevitable outcome. If language models are useful tools for writing, then avoiding them out of concern about being unable to handle them is a mistake whether or not those concerns are warranted. Why?
if you’re trying to make a splash with your writing, you need to meet a much higher bar than the average person
Having the aptitude necessary to “make a splash” is very rare. Not taking chances probably means one won’t reach the top, especially if competent LLM use raises the ceiling of human capability.
Note that by competent use I mean something like cyborgism: https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism
“cyborgs”, a specific kind of human-in-the-loop system which enhances and extends a human operator’s cognitive abilities without relying on outsourcing work to autonomous agents
My result, GPT-5 Pro
Also, I asked GPT-5 Pro to judge the text against several others (including “The Scaling Hypothesis” and “Meditations on Moloch”) and to infer things about the authors. It judged mine the best one. Lol. I wonder whether it inferred I’m the author (well, the prompter, more accurately). Nothing it said it inferred about me is correct except for age, and only just barely. Or maybe it ranked it highest because it was also the generator. But it also judged Claude 3.7’s output pretty well when I tried it.
It is about 200 KB of AI slop. 0th_md started as me just asking Claude 3.7 Sonnet to write something based on several sources.
These sources being:
These are listed in the ‘references’ section, but there were also
“Thought Network Architecture” was the result of a chat with Claude, the prompt being:
And then I kept iterating on this for way too long. Today, I spent several more hours having GPT-5 Pro iterate on it, ultimately resulting in 8th_md. I didn’t even read it.
The prompt I used that did classify it as crank:
I wonder whether adding the references back (they apparently got stripped somewhere along the way) would make it no longer classify it as such...