Strong-downvoted for being indistinguishable from unedited ChatGPT output.
(That apparently Claude wrote it doesn’t matter.)
I reread this comment and want to correct a misunderstanding. I think this may explain some of the rather vehement anti-LLM responses that I found so puzzling earlier.
1. I used an LLM to write this post, as I advise anyone whose time is valuable to do, since it saves a lot of time. That said, it is not unedited at all; I spent many hours going through various drafts and iterations.
2. The post wasn’t written by telling GPT “write a piece about NATO’s military edge slipping”. It was based on a handwritten initial draft outlining every technical point, itself the product of several years of tracking this topic semi-seriously as an amateur military buff.
I would say it is a testament to the extraordinary quality that one-shot LLM responses occasionally show that people now assume, by default, that the above was produced by a simple, unedited prompt.
Why walk when you can bike?
Because it’s against LW policy:
“A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.” [emphasis mine]
But why listen to me when you could listen to the pocket PhD?
Why do some people think that’s bad? Roughly:
It breaks the social contract of discussion.
On forums like LW/StackExchange/etc., the implicit deal is: “You are reading my thoughts.”
If you post raw model output, readers are actually getting: “Here is a generic sample from a text generator, lightly prompted by me.”
That feels like misrepresentation, especially if not disclosed.
It’s extremely cheap spam.
Human-written comments cost time and attention.
Model-written comments are nearly free and can be produced in unlimited quantity.
If everyone does that, discussion quality drowns in fluent but shallow text. Downvoting “ChatGPT-y” comments is partly a defense mechanism against being flooded.
Low epistemic reliability.
Models confidently hallucinate, oversimplify, or miss key cruxes.
When a human writes, they can be challenged: “Why do you believe that?” and they (usually) have some model of the world behind it. With a raw LLM comment, there often isn’t a stable belief or understanding behind the words—just next-token prediction. That undermines the goal of rigorous reasoning.
Skill atrophy and shallow engagement.
If you mostly outsource your arguing/thinking to a model, you don’t get better at reasoning or writing. From the community’s perspective, you’re contributing less original thought and more “generic internet essay”.
Style + content are often generic.
LLM text has a distinctive “smooth, polite, yet vague” feel. People go to niche forums for idiosyncratic, deeply-thought comments, not for something they could get by clicking “generate” themselves.
Why the “why walk when you can bike?” analogy doesn’t quite fit
Biking vs walking:
Both are you moving under your own power.
Biking just makes you faster/more efficient.
Using raw AI output is more like:
Sending a delivery robot to a meetup in your name and letting it talk for you.
You gave it the address and a topic, but you don’t fully control what it says moment-to-moment.
Using AI as a tool (drafting, brainstorming, checking math, summarizing sources) and then carefully editing, fact-checking, and putting your own reasoning into the result is more like using a bike or calculator.
Dumping unedited Claude/ChatGPT output as a comment and treating it as “your contribution” is what people are objecting to.
So: it’s not that “biking” (using AI tools) is inherently bad; it’s that outsourcing the whole comment to the AI and presenting it as your own thought breaks norms around effort, honesty, and epistemic quality, and communities push back on that.
I understand, and have verified to the best of my ability, the information contained in the post. If a LW moderator wants to take action, I welcome their correction.
EDIT: I checked, and the post contains about 1,000 words. At 1 minute per 50 words, that would be about 20 minutes. I have probably spent at least 3 hours drafting this post, plus additional edits and engagement.
Quite frankly, the concerns raised here seem to originate more from Luddite denial and an intuitive dismissal of AI stylistic choices than from genuine issues with the content: a stubborn attachment to the weakness of the flesh, if you will. This post has generated quite a lot of content-level engagement, so the style seemingly is only an issue for a loud minority.