What??? How many posts do people make on this site a day that don’t get seen?
RobertM had made this table for another discussion on this topic; it looks like the actual average is maybe more like “8, as of last month”, though on a noticeable uptick.
You can see that the average used to be < 1.
I’m slightly confused about this, because the number of users we have to process each morning is consistently more like 30, and I feel like we reject more than half, and probably more than 3/4, for being LLM slop. But that might be conflating some clusters of users, plus “it’s annoying to do this task, so we often put it off a bit, and that results in them bunching up.” (Although it’s pretty common to see numbers more like 60.)
[edit: Robert reminds me this doesn’t include comments, which added another 80 last month]
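As a rough sanity check on how those numbers fit together (this is my own back-of-the-envelope arithmetic, and it assumes the table’s average counts rejected posts per day, which isn’t stated above):

```python
# Back-of-the-envelope check on the moderation numbers above.
# Assumption (mine, not from the thread): the table's average is
# rejected *posts per day*, and the queue is processed once per day.

users_per_morning = 30      # typical queue size ("pretty common to see 60")
reject_fraction = 0.75      # "probably more than 3/4"
posts_per_day = 8           # "8, as of last month"
comments_last_month = 80    # from the edit note

rejected_users_per_day = users_per_morning * reject_fraction
comments_per_day = comments_last_month / 30

print(f"rejected users/day:    ~{rejected_users_per_day:.0f}")  # ~22
print(f"rejected posts/day:    ~{posts_per_day}")               # ~8
print(f"rejected comments/day: ~{comments_per_day:.1f}")        # ~2.7

# ~22 rejected users vs. ~11 rejected posts+comments per day: a gap
# remains even with comments included, which is consistent with the
# "clusters of users" and "bunching up" explanations given above.
```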
Again, you can look at https://www.lesswrong.com/moderation#rejected-posts to see the actual content and verify the numbers/quality for yourself.
Having just done so, I now have additional appreciation for LW admins; I didn’t realize the role involved wading through so much of this sort of thing. Thank you!
From the filtered posts, it looks like something happened somewhere between Feb and April 2025. My guess would be something like Claude searching the web (which gives users a clickable link) and GPT-4o updates driving the uptick in these posts. Reducing friction for links can be a pretty big driver of clicks; IIRC aella talked about this somewhere. None of the other model updates/releases seem like good candidates to explain the change.
Things that happened according to o3:
Grok 3 released in mid-Feb
GPT-4.5 released in end-Feb (I highly doubt this was the driver, though)
Claude 3.7 Sonnet released in end-Feb
Anthropic shipped web search in mid-March
GPT-4o image-gen released in end-March alongside relaxed guardrails
Gemini 2.5 Pro experimental in end-March
o3+o4-mini in mid-April
GPT-4.1 in the API in mid-April
GPT-4o sycophancy incident in end-April
Maybeeee Claude 3.7 Sonnet also drives this, but I’m quite doubtful of that claim, given that Sonnet doesn’t seem as agreeable as GPT-4o.
I wonder if some AI scraper with 5 million IPs just scraped LessWrong and now it’s in mainstream datasets. Another hypothesis would be the learning curve of users, and LessWrong-style content getting closer to the Overton window for LLM users.