Fun fact: My post Christian homeschoolers in the year 3000 was substantially written by Claude Opus 4.1.
You can see the chat here. I prompted Claude with a detailed outline, a previous draft that followed a very different structure, and a copy of “The case for ensuring powerful AIs are controlled” for reference about my writing style. The outline I gave Claude is in the Outline tab, and the old draft I provided is in the Old draft tab, of this doc.
As you can see, I did a bunch of back and forth with Claude to edit it. Then I copied to a Google doc and edited substantially on my own to get to the final product.
I was shocked by how good Claude was at this.
I believe this is in compliance with the LLM writing assistance policy.
This is one of the nastiest aspects of the LLM surge: there's no way to opt out of having this prank pulled on me, over and over again.
Yeah, and personally I don’t want to read LLM writing (I didn’t read this particular post anyway).
I didn’t intend this to be a prank! I just wanted to write faster and better.
I’d be curious to hear more about your negative reaction here.
I think reading LLM content can make me worse at writing without my noticing, or harm my thinking in other ways. If people keep posting LLM content without flagging it on top, I'll probably leave LW.
I at first wondered whether this would count as an answer to nostalgebraist's "when will LLMs become human-level bloggers?", which he asked back in March, but upon rereading I'm less sure. I kind of buy DaemonicSigil's top-karma response that "writing a worthwhile blog post is not only a writing task, but also an original seeing task… So the obstacle is not necessarily reasoning… but a lack of things to say", and in this case you were clearly the one with the things to say, not Opus 4.1.
Hmm, I ended up downvoting that post, whereas I usually like yours. I think in retrospect it’s clearly because it was AI written / assisted.
I do think that the post had a worse argumentative structure than posts I normally write (hence all the confusions in the comments). But that was totally on me, not the AI. I’m interested in whether you think your problem was with the argumentative structure or the prose.
The writing style feels a bit too polished and the structure is a bit too formal. Feels like it was written as an essay that will be graded. I think some of your usual humor / style is missing too, but that might be me reading too much into it at this point.
A bunch of my recent blog posts were written with a somewhat similar process; it works surprisingly well! I've also had great results with putting a ton of my past writing into the context.
I’ve been reading a lot of web content, including this post, after asking my favorite LLM[1] to “rewrite it in Wei Dai’s style” which I find tends to make it shorter and easier for me to read, while still leaving most of the info intact (unlike if I ask for a summary). Before I comment, I’ll check the original to make sure the AI’s version didn’t miss a key point (or read the original in full if I’m sufficiently interested), and also ask the AI to double-check that my comment is sensible.
[1] Currently Gemini 2.5 Pro, because it's free through AI Studio and the rate limit is high enough that I've never hit it.
The rise of this kind of thing was one of my main predictions for late 2025:
Well, looks like you're 4/4.
It feels like you did all the hard parts of the writing, and let the AI do the "grunt work", so to speak. You provided a strong premise for the fundamental thesis, a defined writing style, and made edits for style at the end. I think the process of creating the framework out of just a simple premise would be far more impressive, and that's still where LLMs seem to struggle in writing. It's somewhat analogous to how models have improved at coding since GPT-4: you used to say "implement a class which allows users to reply; it should have X parameters and Y functions which do Z", and now you say "make a new feature that allows users to reply" and it just goes ahead and does it.
Maybe I am underestimating the difficulty of selecting exactly the right words, and I acknowledge that the writing was pretty good and devoid of so-called "slop", but I just don't think this is extremely impressive as a capability compared to other possible tests.
I agree that I had had all the ideas, but I hadn’t previously been able to get AIs to even do the “grunt work” of turning it into prose with anything like that level of quality!