I think this isn’t getting readers because the explanation of AI use is at the bottom, not the top. I think people tend to skip AI-written posts by default.
I saw the disclaimer at the top and it caused me to not engage at all, so I’m not sure what your model is here.
As an exercise inspired by your comment, I went ahead and tried engaging with it anyway. After the first 600 words it pinged my internal AI sense, at which point I did my normal thing of strong downvoting and skipping the post. I checked afterwards and Pangram reports those words as 100% AI-generated.
I don’t think the disclaimer is the issue.
It’s all in an AI block; the new rules require it. You can’t pass it off without announcing it.
This is covered in the recent “The New Editor” post. They run Pangram on every post, except for the blocks marked as AI, since those are already labeled.
I hope you’ll use that downvote in a more measured way now that people can’t pass off AI work as their own. AI-generated text isn’t the problem; AI-generated ideas are. There can be a lot of merit in work where an AI did a bunch of the writing, IMO.
My issue with AI-generated work has always been the text, not the attribution. I have read a decent amount of current-gen outputs, including works that self-describe as partly human-written, and it has left me feeling confident that such works will almost always earn my downvote on the merits. Presumably some future model release will cause me to reevaluate this approach, but for Q2 2026, I feel it is a perfectly measured policy.
People who get more out of AI writing than I do are of course free to vote differently.
The sequence clarifies the term “psychopathy,” preventing unhelpful or outright misdiagnoses
The sequence describes the internal experience of people with psychopathy and how it differs from the norm, which is important for self-insight and treatment
The sequence explains the adaptive function of these traits, which reduces stigma and makes treatment more accessible
The sequence clarifies the term “recovery” and reframes it as a menu that patients can choose from, which breaks black-and-white thinking about whether one is perfect or broken
And much more
So you can happily downvote because you don’t have anyone in your life who has any of these conditions and it isn’t useful to you personally. But when you downvote it because you don’t like something about my writing style, I wouldn’t call that downvoting on the merits so much as on linguistic-aesthetic taste.
I downvoted it because I dislike Claude’s writing quality, the post is described as 10-70% written by Claude, and inspection confirms that it does appear to be heavily written by Claude.
I think most people agree that GPT 3.0 makes for a poor coauthor because the model just isn’t smart enough to do the task well. Everyone has their own version number at which they start finding LLM-coauthored work valuable; mine happens to be higher than Claude 4.6.
Lol, then knock yourself out with this one, because that’s virtually all hand-written! (Inb4 it’s also unaesthetic, just in a different way. 🙈)