I think this isn’t getting readers because the explanation of AI use is at the bottom not the top. I think people tend to skip AI written posts by default.
Also that explanation was not satisfying. What does 10-70% written by Claude mean? You’ve got to be able to describe the process more precisely than that. And I think readers care.
I suggest you move it to the top and clarify. I think the work you’ve done makes it worth reading.
I’ve only skimmed the content so far, but this looks like a real contribution to the literature. I did a little lit review on psychopathy recently and the field was a mess, as you describe.
I saw the disclaimer at the top and it caused me to not engage at all, so I’m not sure what your model is here.
As an exercise inspired by your comment, I went ahead and tried engaging with it anyway. After the first 600 words it pinged my internal AI sense, at which point I did my normal thing of strong downvoting and skipping the post. I checked afterwards and Pangram reports those words as 100% AI-generated.
I don’t think the disclaimer is the issue.
It’s all in an AI block. The new rules require it. You can’t pass it off as your own without disclosing it.
This is covered in the recent “The new Editor” post. They run Pangram on every post except the blocks marked as AI, because those are already labeled.
I hope you’ll use that downvote in a more measured way, now that people can’t pass off AI work as their own. AI generated text isn’t the problem; AI generated ideas are. There can be a lot of merit in work where an AI did a bunch of the writing IMO.
My issue with AI-generated work has always been the text, not the attribution. I have read a decent amount of current-gen outputs, including works that self-describe as partly human-written, and it has left me feeling confident that such works will almost always earn my downvote on the merits. Presumably some future model release will cause me to reevaluate this approach, but for Q2 2026, I feel it is a perfectly measured policy.
People who get more out of AI writing than I do are of course free to vote differently.
The sequence clarifies the term “psychopathy,” preventing unhelpful or outright misdiagnoses
The sequence clarifies the internal experience and differences in the internal experience of people with psychopathy, which is important for self insight and treatment
The sequence explains the adaptive mechanisms reducing stigma, which makes treatment more accessible
The sequence clarifies the term “recovery” and reframes it as a menu that patients can choose from, which breaks black and white thinking around whether one is perfect or broken
And much more
So you can happily downvote because you don’t have anyone in your life who has any of these conditions, so it isn’t useful for you personally. But when you downvote it because you don’t like something about my writing style, I wouldn’t call that downvoting on the merits so much as on linguistic-aesthetic taste.
I downvoted it because I dislike Claude’s writing quality, the post is described as 10-70% written by Claude, and inspection reveals that it does appear to be heavily written by Claude.
I think most people agree that GPT 3.0 makes for a poor coauthor because the model just isn’t smart enough to do the task well. Everyone has their own version number at which they find LLM-coauthored works valuable, mine happens to be higher than Claude 4.6.
Lol, then knock yourself out with this one, because that’s virtually all hand-written! (Inb4 it’s also unaesthetic, just in a different way. 🙈)
Thanks! <3
Yeah, I didn’t keep track of which words were written by whom. It’s quite plausible that I’ve touched every single sentence and wrote almost half of them from scratch, but it’s also plausible that I touched maybe 80% of sentences and wrote only 20% from scratch. The Choice article is mostly hand-written because the AI didn’t have a lot of ideas, but this one is more mixed. It’s hard for me to reconstruct at this point. In the future I can try to commit all changes to Git with correct attribution and then share the whole edit history for transparency. (But really, when two people coauthor an article, it’s also often not clear who wrote which sentences, contributed which ideas, did which edits during proofreading, etc.)
I’ll move the collapsible box to the top if it lets me! (The editor gets a bit weird when I use these blocks.)
I’d reference this comment. It gives a lot more information than “10-70%”, which sounds very strange, as if you’re maybe hiding something.
Of course it’s the provenance of the claims more than the words that matters. I’m guessing you came up with the claims largely independent of Claude and I’d say that too even though it’s even harder to track that.
I don’t think you need to track every edit to explain to people roughly how the process went.
Thanks, I can expand my LLM note a bit more! I just remembered that I have a backup of my full conversation with Claude (up to the point where I took the backup, which covers almost all of it), including the first drafts.
Having thought and read about psychopathy for so long, I felt very confused about how to structure my mental model, so my input to Claude was countless fairly unorganized thoughts about models, contradictions, and the advantages and disadvantages of framings. Claude’s first big contribution was to suggest this tag structure, where tags (made up of a letter and a descriptor) get combined to form a personality profile. That was a format that hadn’t occurred to me, and I loved it for its power and flexibility. But then it was me again who fleshed out that model – introduced the layers of genetics, neurology, psychodynamics, behavior, etc.
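To make the format concrete, here is a minimal sketch of how a letter-plus-descriptor tag system could combine tags into a profile. The specific letters and descriptors below are invented placeholders for illustration, not the actual tags from the sequence.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tag:
    """One tag: a single letter plus a descriptor (placeholder values)."""
    letter: str       # e.g. "E" (hypothetical)
    descriptor: str   # e.g. "low-empathy" (hypothetical)

    def __str__(self) -> str:
        return f"{self.letter}:{self.descriptor}"

def profile(tags: list[Tag]) -> str:
    """Combine a set of tags into a compact personality-profile string."""
    return " + ".join(str(t) for t in sorted(tags, key=lambda t: t.letter))

# Example with placeholder tags:
print(profile([Tag("F", "low-fear"), Tag("E", "low-empathy")]))
# → E:low-empathy + F:low-fear
```

The appeal of such a scheme is that each tag is independently meaningful, while profiles emerge from combinations rather than from a single categorical label.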