The recent Gordon Seidoh Worley/Said Achmiz blowup and the subsequent threads (1, 2) it spawned, along with my own involvement in them, got me thinking a bit about this site, on a more nostalgic/meta level.
To be clear, I continue to endorse my belief that Said is right about most of the issues he identifies, about the epistemic standards of this site being low, and about the ever-present risk that absent consistent and pointed (reasonable) criticism, comment sections and the site culture will inevitably devolve into happy death spirals over applause lights.
And yet… lukeprog hasn’t been seriously active on this site for 7 years, Wei Dai hasn’t written a post in over a year (even as he engages in productive discussions here occasionally), Turntrout mostly spends his time away from LW, Quintin Pope spends all his time away from LW, Roko comments much less than he used to more than a decade ago, Eliezer and Scott write occasional comments once every 3 months or so, Richard Ngo has slowed down his pace of posting considerably, gwern posts here very infrequently (and when he does, it’s usually just linking to other places), Duncan Sabien famously doesn’t spend time here anymore, lsusr said an official goodbye (edit: it was an April Fool’s joke) months ago...
While speculating about the private or subconscious beliefs of others is rightly frowned upon here in general, I will say I do suspect some of the moderator pushback to Said comes from the (IMO correct) observation that… LW is just missing something, something that Said contributed, at least a bit, to pushing away in the aggregate (even if any one given action of his was by itself worthwhile from a cost/benefit perspective). Something that every single one of these authors used to provide in the past, something that used to prevent “the project of thinking more clearly [from falling] by the wayside”, something which resulted in “questions left in the articles for commenters to answer”, something that’s a bit hard to fully pin down...
Back in 2009, Eliezer wrote “Rationality: Common Interest of Many Causes” and talked about “Raising the Sanity Waterline” in broader society. He then wrote HPMOR; later on, he wrote Inadequate Equilibria, both of which were flawed but worthwhile books. Scott started the SSC and talked about everything, from science and rationality to politics and economics to medicine and social dynamics. There was a certain… vibe, for lack of a better term, connected with all this. It’s what spawned the original LW wave of support for CFAR, from people who were not sufficiently plugged into the social dynamics on the ground to realize that was apparently never what CFAR was supposed to be about. It’s what got people hopeful about The Martial Art of Rationality, a sense that a rationality dojo is possible. It’s what’s embodied in one of the best and most emblematic comments ever written on this site, namely gwern’s pointed and comprehensive takedown of Eliezer’s FAQ on technological unemployment. It’s a sense of curiosity embodied in the virtue of scholarship. It’s covering a breadth of topics for the sake of discussing them and becoming stronger and more knowledgeable.
Now, it’s mostly just AI. But honestly, it’s not even the long conversations or (somewhat) shorter debates about AI that used to generate and propagate tremendous insights. It’s… different. Even when it’s not AI, the conversation feels… stilted, lacking in a way. The engagement feels low, it feels off; the comment section isn’t producing totally different yet insightful models of the problems discussed in posts and spawning long and fruitful conversations anymore. I’m not sure what to really make of it.
There are some who buck this trend. Viliam, Steve Byrnes, jefftk, Sarah Constantin… I’m sure I’m missing some names. But it’s just not what it used to be overall.
And yet… lukeprog hasn’t been seriously active on this site for 7 years, Wei Dai hasn’t written a post in over a year (even as he engages in productive discussions here occasionally), Turntrout mostly spends his time away from LW, Quintin Pope spends all his time away from LW, Roko comments much less than he used to more than a decade ago, Eliezer and Scott write occasional comments once every 3 months or so, Richard Ngo has slowed down his pace of posting considerably, gwern posts here very infrequently (and when he does, it’s usually just linking to other places), Duncan Sabien famously doesn’t spend time here anymore...
At least on my own account, I can say that Said Achmiz’s replies are not responsible for me not commenting/posting on LW2: he rarely replies to me, and we do all our arguing on IRC anyway.
He is probably indirectly responsible for me writing less here by his work on Gwern.net, but that should not be held against him.
(After all, LW2 would not look or function as it does without that work.)
From my perspective, I shifted off LW2 as a main writing outlet a long time ago for a mix of reasons about both LW2 and myself.
I don’t think there is any feature, or set of features, which could make me switch to writing primarily on LW2.
I was using LW1 for things I now use Gwern.net for—my bibliography comments or posts are now just tags (recent example, or just the bookmarks/newest-links page in general).
I am less interested in arguments or critiques when I have so much more of my own writings I would like to do, as now I suffer from an embarrassment of riches in things I’d like to write compared to 2015, and I get less out of arguing than I did when starting out.
(Being able to ban a LW1 user like Lumifer from my posts/comments would have changed this only slightly.)
And having my own website, and Said Achmiz to implement more complex features on demand, has obviously made me much more interested in writing primarily for my own site and tailoring the medium to the message.
(I can make “Bell, Crow, Moon” default to ‘dark-mode’ and randomize the illustration image, for example, or I can make “October The First Is Too Late” switch to dark-mode at a key point and hide the spoilers using ‘reader-mode’, which lets me write that page in a novel way, similar to “It Looks Like You’re Trying To Take Over The World”.)
And similar to Scott Alexander, the more committed I am to Gwern.net, the more I am incentivized to write for it, to explore designs and build the brand and consolidate everything in one place for the AIs etc.
(Even if I am not making anything like Scott’s $500k / (365/3) ≈ $4k per page I post!)
This might change a little now that we have finished developing a lightweight ‘blog’ feature for Gwern.net which makes writing effort-posts off-site much less of a waste, but nevertheless, my priority these days is building up Gwern.net—not LW2.
LW1 was, back then, much more of a general-tech-interest website: closer to Hacker News than Alignment Forum. The latest meta-analysis on the Replication Crisis in psychology? “Sure, why not.” Dubious new Russian nootropic cerebrolysin? “Yeah, we can discuss that, we have >40 years until AGI, after all… - wait, chapter 19 of Methods of Rationality just dropped!” LW2 had to narrow down in scope under the pressure of ever-shorter AI timelines. (No one would be too interested in starting CFAR today to ‘raise the sanity waterline’.) So that has had a cost in diversity. There would not be much point in submitting my, say, ~11 cat psychology pages to LW2, although LW1 probably would’ve loved them all as a ‘catquences’. I also have made a few strategic decisions, like deciding back in October 2020 to set up /r/MLScaling on Reddit to aim at a more centrist coverage of AI scaling which got down-in-the-weeds with every relevant paper or link rather than flood LW2 with that material, and leave AI safety discussions to LW2/EAF/AF.
Also, writing careers change over time. Personally, I would be suspicious of anyone who writes a lot but was writing as much on LW2 in 2025 as they were on LW1 in 2015. I would be thinking to myself, “What are you doing here still? Where are you going? Have you not grown up at all, nor chafed at your limits? Remember: if the chick is not able to break the shell of his egg, he will die without having ever been born.”
(I won’t try to analyze Eliezer’s trajectory here. I don’t understand his post-MoR trajectory from LW1 to Arbital to… Facebook… to forum glowfic… to Twitter? Nor what happened to lukeprog. I would also note that Roko not posting here is a feature, not a bug; have you read his tweets over the past decade...?)
So, I think the right question is not ‘why don’t Scott, Gwern, et al. write as much on LW2 as they did on LW1?’ It would be weird if we did!
The right question is, ‘where’s the next generation of writers on LW2?’
When I look at ACX, LW2, EAF, my Twitter/Reddit/HN, it does feel like there is a general shortage of good new writers online everywhere, not just LW2.
In terms of ‘emerging bloggers’ (including on Substack, in ‘long tweets’, etc.), it feels like it remains Millennial/GenX-dominated; I can think of few Zoomers/GenAlpha-type writers of note.
(Even relatively new writers who come to mind as being of interest, like TracingWoodgrains or Henrik Karlsson or Cremieux, tend to be older and to have simply recently ramped up writing and have been around for a while beforehand.)
There is no LW1 of today.
Maybe I’m just old and out of touch, and they’ve all moved to video? Videos are extremely popular… but so what? Lots of media are popular in terms of profit or man-hours consumed; that doesn’t mean they are important to the long-term culture or the intellectual goals we have here. If there were incisive rationalist-related videos which were setting the zeitgeist, where are they? Where are the videos introducing new catchphrases I will be using 10 years from now once they’ve become endemic? Why does, e.g., ACX seem to be so vastly more influential?
Perhaps the answer is that the pipeline of writers has been jammed.
Maybe “the culture is stuck” and people are hiding in “the dark forest” because the hypersonic winds of social media tear apart everything of immediate value, and destroy the normal progression of writers: from low-stakes safe writing (small comments, or interactions like upvotes and wiki editing) to longer comments & debates, to effort-posts, to eventually their own site/newsletter/community for riskier ‘real’ writing.
(Not that ‘video’ is the only culprit here—all walled gardens want to infantilize you.
A black hole like Discord provides no way to ‘graduate’ from Discord; it wants you to be trapped there forever, writing short comments destined to be forgotten as soon as the screen scrolls past them, emoting and upvoting, and never going anywhere, and using Discord 10 years from now just like you use it today...)
LW2 had to narrow down in scope under the pressure of ever-shorter AI timelines
I wouldn’t say the scope was narrowed; in fact, the admin team took a lot of actions to preserve it. But a lot of people have shown up for AI or are now heavily interested in AI, which simply makes that the dominant topic. Still, I like to think that people don’t think of LW as merely an “AI website”.
people don’t think of LW as merely an “AI website”.
The word “people” is doing heavy lifting here; I have found a lot of people in tech-adjacent circles online who think just that. Besides, gwern seems to be operating under similar premises, so I wouldn’t be surprised if other (less informed) people came away with a similar takeaway.
If there were incisive rationalist-related videos which were setting the zeitgeist, where are they?
The YouTube channel Rational Animations seems pretty successful in terms of sheer numbers: 385K subscribers, which is comparable to YouTubers who talk about media and technology. Their videos “The True Story of How GPT-2 Became Maximally Lewd” and “The Goddess of Everything Else” have over two million views. Qualitatively, I have seen their biggest videos mentioned a few times where a LW post wouldn’t be. However, the channel principally adapts existing rationalist and AI-safety content. (Sort the videos by popular to see.) I think they’re good at it. Through their competence, new incisive rationalist-related videos exist—as adaptations of older incisive rationalist-related writing.
I don’t know of another channel like it, even though popular YouTube channels attract imitators, and it is hard to imagine them switching to new ideas. Part of it is the resources involved in producing animation compared to writing. With animation so labor-intensive, it makes sense to try out and refine ideas in text and only then adapt them to video. Posters on video-LW with original high-effort content would come to resent how much each mistake cost them compared to a textual post or comment. AI video generation will make it easier to create videos, but precise control over content and style will still demand significantly more effort than text.
A black hole like Discord provides no way to ‘graduate’ from Discord; it wants you to be trapped there forever, writing short comments destined to be forgotten as soon as the screen scrolls past them, emoting and upvoting, and never going anywhere, and using Discord 10 years from now just like you use it today...)
I generally agree with any and all criticisms of Discord, but its search is pretty good. If you know what to search for, you can dig out that old post. Of course, leaving memorable breadcrumbs you can search for three years later is, at best, an art, and in my case seems like something that’s purely luck-of-the-draw when it comes to improbable phrases that you’ve mentioned only once or a handful of times.
On the other hand, Discord users do tend to have a lower threshold of reading stamina; “I ain’t reading all that — I’m happy for you, or sorry that happened” seems to happen more often in Discord unless you’re in a Discord guild that’s pre-selected for people who can read long things — a Gaming Lawyers guild, perhaps.
If you know what to search for, you can dig out that old post. Of course, leaving memorable breadcrumbs you can search for three years later is, at best, an art
Yes, that has been my experience too. Sure, Discord (like Twitter) gives you fairly powerful search primitives, to a greater extent than most people ever notice. You can filter by user, date-ranges, that sort of thing… It was written by nerds for nerds, originally, and it shows. However, I have still struggled to find many older Discord comments by myself or others, because it is inherent to the nature of realtime shortform media that it can be extremely difficult to remember the exact breadcrumb, if any, or to write in such a way that your hazy searches years later don’t pull up 500 other hits. (No one argues for doing all documentation as random IRC conversations “because you can just grep your IRC logs”, and I have also sometimes seriously struggled to find old IRC conversations despite remembering them fairly clearly.)
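(To make that concrete—a sketch from memory rather than from current documentation, with the usernames and keywords as hypothetical placeholders: Discord accepts filter chains along the lines of “from:someuser before:2021-01-01 in:general cerebrolysin”, and Twitter/X accepts “from:someuser since:2020-01-01 until:2021-01-01 "exact phrase"”; exact operator support may have drifted. The operators are the easy part; recalling a keyword distinctive enough to narrow 500 hits down to one is the hard part.)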
Without any kind of organization or summarization or FAQ/wiki-like accumulating document, this is inevitable. And Discord doesn’t particularly care about this because it wants you to spend all your time there, not consolidate knowledge or build on past comments or create public knowledge, so it optimizes for that, and no amount of Boolean queries can make up for a design which cares only about the most recent screen of comments.
Which is my point: everything is lost in the rain, despite your tears, and there is no path to growth or long content.
Discord (like Twitter) gives you fairly powerful search primitives, to a greater extent than most people ever notice. You can filter by user, date-ranges, that sort of thing… It was written by nerds for nerds, originally, and it shows.
It seems worth noting that 𝕏 search has been broken for quite a while, and shows no sign of improvement.
unless you’re in a Discord guild that’s pre-selected for people who can read long things — a Gaming Lawyers guild, perhaps.
I was on a few rationality-adjacent Discords and got politely corrected for typing short and frequent messages, which often amounted to rambling. Eventually I became low-status enough that people just ignored what I said or had nothing to add to my conversation.
Which is in strong contrast to other “I am your mogged sigma in ohio, wanna rizz me up with your gyatt skibidi? No? What the sigma? Sus, no cap or I will fanum tax.” types of replies I was used to.
In normal speak, it would amount to “I am your guy; do you want to date me, attractive woman? No? Are you sure? Seems suspicious; if you lie I will steal your lifeline/destroy you.” But with multiple levels of irony and a humorous tone.
I think any explanation here must be compared to the null hypothesis that most people do not sustain blogging for 10 years. My guess is that most Twitter accounts that are popular these days were not big 10 years ago, nor were most Reddit accounts, and a similar thing is true of LessWrong accounts. Longevity in blogging, like Cowen’s or Alexander’s, is not the norm; most people’s life circumstances change, and blogging stops being one of their top hobbies.
Nevertheless, there is a pretty serious problem here if you believe (as I do) that a large part of what made LW great early on in terms of rationality content was selection effects causing the best and most insightful rationality-interested writers to move here: people like Eliezer (obviously), Robin Hanson, Anna Salamon, Scott back when he was known as Yvain, gwern, (later on) Duncan Sabien etc.
Once the well of insightful new blood starts running dry (because the “lowest hanging fruit” potential contributors have already been attracted to the site) and the old guard starts retiring/moving on, keeping the lifeline going depends more on in-house training and “building” the culture and community to turn rationality learners into dojo teachers. (In my understanding, something similar caused the death of LW 1.0 as well.)
Not only is this hard in theory and mostly hasn’t panned out in practice, it also doesn’t seem to have been prioritized all that much (has LW-rationality been marketed as the common interest of many causes and types of individuals, instead of just narrowly appealing to nerdy, CS-inclined young Western STEM types? has there been an emphasis on raising the sanity waterline globally, or have projects marketed as doing that instead actually focused only on building narrow pipelines for math talent to join MIRI? was the challenge of writing a Level 2 to the Sequences, which Eliezer approved of, actually taken up, or was it all left to Eliezer himself to come back and partly write it?).
It might be nice to move all AI content to the Alignment Forum. I’m not sure the effect you’re discussing is real, but if it is, it might be because LW has become a de facto academic journal for AI safety research, so many people are posting without significant engagement with the LW canon or any interest in rationality.
The current rules around who can post on the Alignment Forum seem a bit antiquated. I’ve been working on alignment research for over 2 years and I don’t know off the top of my head how to get permission to post there. And I expect the relevant people to see stuff if it’s on LW anyway.
When I’ve brought this up, a few people asked why we don’t just put all the AI content on the Alignment Forum. This is a fairly obvious question, but:
a) It’d be a pretty big departure from what the Alignment Forum is currently used for.
b) I don’t think it really changes the fundamental issue of “AI is what lots of people are currently thinking about on LessWrong.”
The Alignment Forum’s current job is not to be a comprehensive list of all AI content; it’s meant to showcase especially good content with a high signal/noise ratio. All Alignment Forum posts are also LessWrong posts, and LessWrong is meant to be the place where most discussion on them happens. The AF versions of posts are primarily meant to be a thing you can link to professionally without having to explain the context of a lot of weird, not-obviously-related topics that show up on LessWrong.
We created the Alignment Forum ~5 years ago, and it’s plausible the world needs a new tool now. BUT, it still feels like a weird solution to try and move the AI discussion off of LessWrong. AI is one of the central topics that motivate a lot of other LessWrong interests. LessWrong is about the art of rationality, but one of the important lenses here is “how would you build a mind that was optimally rational, from scratch?”.
I think you’re comparing the goals of past lesswrong to the goals of present lesswrong. I don’t think present lesswrong really has the goal of refining the art of rationality anymore. Or at least, it has lost interest in developing the one meta-framework to rule them all, and gained much more interest in applying rationality & scholarship to interesting & niche domains, and seeing what generalizable heuristics it can learn from those. Most commonly AI, but look no further than the curated posts to find other examples. To highlight a few:
And I do think, at this stage, this is the right collective move to make. I do often roll my eyes when I see new insight-porn on the front page. It is almost always useless. I think actually going out, doing some world-modeling, and solving problems is what it looks like to refine the art after you’ve read, like, the sequences, and superforecasting, and some linear algebra.
I’m not saying there should be no meta-thinking, but once your epistemics aren’t embarrassingly incompetent, and you’ve absorbed most of the good philosophical arguments here, the way to meta-think will end up looking like doing a bunch of deep dives, forecasts, and solving problems, then coming back to the community, presenting your results, and thinking about how you could’ve done that better.
To borrow an old metaphor, you don’t get good at martial arts by sitting alone in a room thinking about how to be a good martial artist. You get good by going out and actually fighting. And similarly, you shouldn’t trust people giving you “rationality advice” unless they themselves are accomplished in a wide variety of fields (or, of course, have sufficiently good (read: mathematical) arguments on their side).
Edit: I think to a large extent what’s going on here, also, is nostalgia on your part. The past of LessWrong was different, but I don’t think it was better than what we have now. I for one wouldn’t trade one GeneSmith for 10 Duncans!
I’m not sure that all of this kind of wandering-away is something that could reasonably be prevented, though.
I think it’s a combination of:
Many of the Old Greats™ are moderately tapped out.
Many of the Old Greats™ have other, more appealing places to post, and there’s basically nothing LW could do to bait them back.
Gwern likes his own site. I don’t think his site was that attractive to him ten years ago.
Scott now has a Substack that gets him megabucks, and he’s much more of a conventional thinker now than he was ten years ago. For him, posting here basically means leaving money on the table.
Eliezer has his book that he’s finishing up. I’m not sure if he’s still posting a lot on Facebook.
I was going to say Roko has an active Twitter account, but apparently he’s nuked all his tweets. At any rate, he has a Substack (latest post: 12/12/2024), so he has an obvious other place to post things, and quite possibly a financial disincentive to POSSE them here. Plus, it’s not like his current publicly-stated political preferences are anywhere near the Bay Area Overton Window, so AFAICT he has a total lack of incentive to have people with Bay Area political preferences get better at systematized winning.
Now, you (i.e. anyone who might be reading this) might be wondering “well, where’re your current and future contributions, hotshot?”
A fair question!
I have roughly an order (or two) of magnitude fewer rationality-relevant ideas (compared to anything else) that I can post on the Internet. I have a “Rationalist-adjacent” folder in my Drafts folder, and it has 13 things in it. A few of them aren’t really all that rationalist-adjacent, a few of them I’m never going to publish because reasons, and a few of them I can’t really get the politics out of, and I don’t want to be a political poster on the Internet.
So there are a couple good ones in there, but polishing any of them to the point where they’re explained at enough length to win someone over (as opposed to the people who will see a one-line summary of my idea and say “oh yeah, he’s right; I just never thought of it that way before”) sounds like pulling teeth (or maybe polishing turds) for not much extra benefit to me, or anyone else.
Sure, I could post on my Shortform, but I figure the odds of them getting picked up by a better writer who then polishes them into something good that advances the cause of rationality are… slim to none.
A couple of related terms: skill corridors, or competency plateaus, exist when a community both fails to cultivate newbies (creating a skill floor) and suffers brain drain, as people above a certain skill ceiling tend to leave because they have better opportunities available.
I think this is mostly just the macro-trend of the internet shifting away from open forums and blogs and towards the “cozy web” of private groupchats etc., not anything specific about LessWrong. If anything, LessWrong seems to be bucking the trend here, since it remains much more active than most other sites that had their heyday in the late 00s.
I don’t have any dog in the Achmiz/Worley debate, but I’m having trouble getting in the headspace of someone who is driven away from posting here because of one specific commenter.
First of all, I don’t think anyone is ever under any obligation to reply to commenters at all—simply dropping out of a conversation thread doesn’t feel rude/confrontational in the way it would be to say IRL “I’m done talking to you now.”
Second, I would find it far more demotivating to just get zero engagement on my posts—if I didn’t think anybody was reading, it’s hard to justify the time and effort of posting. But otherwise, even if some commenters disagree with me, my post is still part of the discourse, which makes it worthwhile.
In the tradeoff between emphasis on intellectual exploration vs. emphasis on correctness and applicability LW seems to have moved closer to the latter, and I think you’re mourning the former. I do feel this has been driven largely by AI moving from a speculative idea to very much a reality.
Also, re: Said Achmiz—to quote The Big Lebowski, “You’re not wrong, Walter. You’re just an asshole.” (I agree with Said on the object level more often than not, but his tone can be more abrasive than necessary. But then again, too much agreeableness can make it hard to get at the truth. Sometimes the truth hurts.)
Oops! That’s a pretty embarrassing error. I remembered his comment complaining about contemporary LW and saying it might be more worthwhile for him to transition to video content on other platforms, and I incorrectly pattern-matched that to his post.
gwern posts here very infrequently (and when he does, it’s usually just linking to other places)
I’m very sure that gwern’s lack of posting here has nothing to do with any sort of “Said might post annoying comments under my posts” concern. (Gwern can speak for himself, of course, but I am just registering my prediction about what he would/will say.)
… does not get paid large sums of cash money for posting on LW. This seems to quite suffice to explain his preference for Substack.
lukeprog hasn’t been seriously active on this site for 7 years
As far as I can tell, he hasn’t been seriously active anywhere else, either. (Heck, he doesn’t even seem to care about maintaining access to his old writings—try clicking on some links to his site, in older LW posts; most of them are dead links, now.)
Roko comments much less than he used to more than a decade ago
… I rather think that the blame for that one can be placed squarely on a certain someone’s shoulders, and that someone sure ain’t me.
I remain highly skeptical of claims about my alleged effects (via my commenting, anyhow) on anyone or anything, but at least in principle I’m fine with accepting responsibility for the consequences of my actions (whether those consequences are good or bad is, of course, an entirely different question). But a lot of the stuff that you mention here has nothing whatsoever to do with me, or with anyone like me, or anything connected to me in any way.
P.S.: Apologies in advance for what will be a lack of prompt replies to comments; it seems that I am currently rate-limited such that I can only post one comment per day, on the whole site.
Out of curiosity, what evidence would change your mind?
This one seems pretty easy. If multiple notable past contributors speak out themselves and say that they stopped contributing to LW because of individual persistently annoying commenters, naming Said as one of them, that would be pretty clear evidence. Also socially awkward of course. But the general mindset of old-school internet forum discourse is that stuff people say publicly under their own accounts exists and claimed backchannel communications are shit someone made up to win an argument.
I disagree-voted this comment [edit: reversed now because I misread the comment I’m replying to] because the sort of pushback Said typically gives doesn’t remind me of “the good old days” (I think that’s a separate thing), but I want to flag that, as someone who’s had negative reactions to Said’s commenting style in the past, I feel like in the past two years or so I’ve noticed several times where I thought he left valuable comments or criticism that felt on point, and I have noticed a lot fewer (possibly zero) instances of “omg, this feels uncharitable and nitpicky/deliberately playing dense.” So, for my part at least, I no longer consider myself as having strong opinions on this topic.
(Note that I haven’t read the recent threads with Gordon Seidoh Worley, so this shouldn’t be interpreted as me taking a side on that.)
Your interpretation seems like the opposite of sunwillrise’s remark about Said?
… I do suspect some of the moderator pushback to Said comes from the (IMO correct) observation that… LW is just missing something, something that Said contributed, at least a bit, to pushing away in the aggregate (even if any one given action of his was by itself worthwhile from a cost/benefit perspective). Something that every single one of these authors used to provide in the past…
Said is right … about the epistemic standards of this site being low
Could you, or someone who agrees with you, be specific about this? What exactly are the higher standards of discussion that are not being met? What are the endemic epistemic errors that are being allowed to flourish unchecked?
I wouldn’t say they are being allowed to flourish unchecked; au contraire, most of them get corrected by the Said-aligned group of users on this site (but in a way high-status authors and, apparently, also moderators seem to disapprove of strongly).
To give some illustrative examples:
A failure to comprehend the basics of what truth and evidence mean and how to communicate them properly (which sits at the core of what LW has been about ever since the Sequences)
Getting upset at (and trying to argue against) the critical POC|GTFO aspect of asking for examples whenever an author asserts they have discovered a new, critical insight (and correctly concluding the author is full of shit when he/she fails to step up to the challenge)
Same as the previous item, but this time it’s meditation or Buddhism (it’s almost always meditation or Buddhism)
An overt focus on the social aspects of important matters while dismissing the relevance of anything about epistemics
… (I can go on for hours)
Overall, it’s a failure of (mostly high-status, mostly high-karma) users who ought to know a heck of a lot better to comprehend the foundational principles behind LW truth-seeking, namely the lessons of “Noticing Confusion” and “37 Ways That Words Can Be Wrong.” In particular, a deliberate such failure caused by selfish desires to manipulate discourse and accrue personal status.
From my perspective, there’s been something of a democratization of people sharing their opinions on LessWrong, where way more people feel comfortable writing and opining on the site than they did 10 years ago, including many people who are less ideologically on board with the founding writing of LessWrong, which has led to far lower standards in the bottom ~30% of cases, but has allowed a much wider set of ideas and considerations to be sorted through and to rise to the top (when weighted by karma & attention).
I do think there are a lot more bad takes on LW than before, but obviously just way more frequent good content than there was on LW 1.0. If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week, which I expect is probably way faster than the old Featured/Main updates were in like 2011-15 (i.e. most of the period after which Eliezer ceased his daily posting).
I continue to think a frame of “we need to make all the worst content better” or “we need to accurately label all the worst content as ‘bad’” is a deep black hole that will eat all of your effort and you will never succeed. This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
I think the main cause of less greatness is less great writing. Nobody on the entire damn internet has in my opinion matched Eliezer’s writing quality x frequency x insight during the publication of the sequences, and certainly not on LessWrong. That was what attracted much of the greatness. There’s been a lot of good on LessWrong that’s attracted good writers, and better than most places, but Eliezer-writing-the-sequences is not something one simply “does again”.[1]
(And this difficulty has essentially nothing to do with the sorts of comments that Said writes.)
Though I did spend all of yesterday and today working out the details of a project to cause something quite similar to happen, and got Eliezer’s feedback on it, which I continue to feel is promising. So I am trying!
I do think there are a lot more bad takes on LW than before
Seriously, people, go back to a randomly selected comment section from 10 years ago. Go back to a random discussion post from 10 years ago. These were not, in the median, better posts or comments! Indeed, they were very consistently much much worse.
I don’t think it’s even the case that more bad takes are written now that we have more content. The ratio to the most active time of LW 1.0 is only like 2x or 3x, and indeed those most active times were the ones where you had a ton of really dumb political discussions and pickup-artistry discussions, and everything was inundated with people who just showed up because of HPMoR (which, to be clear, included me, but I was still a dumb commenter with dumb takes).
This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
Fwiw, I personally choose to write criticism only in spots where it’s important yet missing (sometimes to the point where it seems everyone else is dropping the ball by allowing the authors to push a frame that’s wrong/misleading/incomplete/insufficiently argued for). Illustrative examples include Critch’s post on LLM consciousness, Bensinger’s post (and Ruby’s curation) on computationalism and identity, Abram Demski’s post on Circular Reasoning, Said’s skepticism of “statements should be at least two of true, necessary/useful, and kind,” cursory references to CEV by many top users on this site (including Habryka), Rohin Shah arguing Eliezer’s presentation of coherence arguments is fine instead of deeply misleading, etc.
One thing virtually all of these have in common is that they all come from highly reputable users on this site, they often get praise from other top users, and yet I think they’re all wrong but nobody else seems to have identified (and enunciated!) the critical issues on my mind.
(Note all the examples I chose for the grandparent comment also follow the same pattern. It’s not average Joe Schmoe failing to apply basic rules of epistemics, it’s reputable users on the level of Valentine, as an example.)
obviously just way more frequent good content than there was on LW 1.0
This is, actually, far from obvious, at least to me. LW 1.0 really went downhill in the last several years before the relaunch, so it’s not implausible that what you say is true for the period of, say, 2014–2017… but even then, I wouldn’t bet a lot of money on it.
If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week
Bit of an overestimate. There aren’t even any curated posts “2–3x per week”, never mind “insightful + thoughtful” ones…
But that’s fine, “more rare but more good” is great, 1–2 a week is generally enough, if they’re good enough, and I’ve explicitly endorsed a move in that direction in the past; so let’s see how the last 20 curated posts (i.e., one full page of them, on GreaterWrong) stack up:
interesting language thing
the “armchair speculations about medical stuff” genre is really just way too easy to write bullshit in, so who knows whether this one’s any good (it’s not like we’ve got a bunch of real experts weighing in…)
interesting “field report”
very interesting review of the state of a field
good post, good point, no complaints
superficial appearance of usefulness, actually just a pile of worthless garbage; mod who curated this one clearly didn’t read it (just like most upvoters and commenters, probably)
contentless vibes (comments also full of contentless vibes)
technical (?) AI stuff
technical AI stuff; not my domain of expertise, I’ll just assume that this one is very good, why not
interesting examination of a concept, with useful examples
basically insight porn
one of the worst pieces of pernicious bullshit I’ve ever read on this website (par for the course for this author, though)
more AI stuff, mostly no comment on this one, but discussion in comments seems good (as in, I see important points being discussed sanely)
the subject matter is interesting and good to know, but the treatment here is amateurish; this would be fine if we had more people interested in this sort of thing who could correct misconceptions in the comments, but alas… still, probably good on net
technical AI stuff
glorified “shower thought” (also par for the course for this author); at least it started some not-completely-worthless discussion in the comments
technical AI stuff
seems useful for people who care more about the subject matter than I do, which is fine
technical AI stuff
also shower thoughts / insight porn, but this one is mildly interesting, I guess
(These are deliberately shuffled from their displayed reverse-chronological order, since my point here is the aggregate trends, not criticism of any particular post.)
Not a great record. The technical AI stuff is all fine, I don’t really have any complaints about such posts even if most of them sail over my head. The good:crap ratio in the rest of it is deeply unimpressive. And this is just the curated posts!
I continue to think a frame of “we need to make all the worst content better” or “we need to accurately label all the worst content as ‘bad’” is a deep black hole that will eat all of your effort and you will never succeed.
This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
Yes, well, here’s the thing about that…
First, you do not know in advance which writing is the top 20% and which is the bottom 20%. That’s a big part of what discussions in the comments are for. And yes, that includes comments like “examples?” or “what do you mean by [some word]?”, or “that part makes no sense”. That sort of thing makes good writing better (thereby revealing its goodness, which may’ve been somewhat obscured to begin with), while showing bad writing for what it is.
Second… the grandparent comment links to several posts on which I left critical comments. Now, were these posts in the top 20%, or in the bottom 20%?
If they were in the top 20%, then my critiques of these posts satisfy your expressed desire for critique of the top 20% of writing.
But if they were in the bottom 20% of writing, then their authors can hardly be claimed to be the sort of “good writers” of “good content” whom we wish to retain on Less Wrong…
Nobody on the entire damn internet has in my opinion matched Eliezer’s writing quality x frequency x insight during the publication of the sequences, and certainly not on LessWrong.
Including Eliezer himself.
I think, at this point, that the “attract more writers, and then somehow this results in LW producing stuff as great as [some old stuff from back in the day]” approach is a failed project. You can’t get quality out of quantity like this.
I… don’t get your overall judgement. Didn’t you just say that within the last few weeks the curated feed included:
interesting – 3
AI stuff – 4
very interesting – 1
mildly interesting – 1
who knows – 1
good post no complaints – 1
good – 1
good on net – 1
assume is very good – 1
seems useful though not for me – 1
contentless vibes – 1
insight porn – 1
glorified shower thought – 1
worthless garbage – 1
worst pieces of pernicious bullshit – 1
If we count the AI stuff you didn’t comment on as good, which I think it generally is and which makes sense by your lights for the judgement of LessWrong, then that’s like 5⁄15 being bad by your lights, and like 8⁄15 actively good by your lights.
That… seems like a pretty solid hit rate? In your own words, if you are bothered by the bad ones, why not just move on and ignore them? You don’t have to engage with them, and this hit rate by your own judgement seems hardly indicative of something terrible going on.
IDK, maybe you meant to convey some different vibe with your list of judgements, but I was very confused by the contrast of your list seeming pretty positive, and then somehow, because you don’t like 1⁄3 of the posts, you end up at the conclusion of “The good:crap ratio in the rest of it is deeply unimpressive”.
Well, first of all, you’ve miscounted somehow… I don’t want to get too far into the weeds about each individual example, but here’s how I’d characterize my list:
shouldn’t be on LW at all (but since that’s not really how LW is run now, let’s call this one “shouldn’t be anywhere but the author’s personal blog section”): 5
fine for LW, but definitely not “curated”-quality (and says something very sad about LW if it is included in “curated”): 6
worthy of “curated”: 3
~technical AI stuff: 6
Remember, this is just the “curated” posts. If I were listing from the “All Posts” feed, or probably even from the “Frontpage Posts” feed, then of course you would be right to say “don’t like? don’t read!”. But my point isn’t “sometimes people post bad or mediocre posts on Less Wrong dot com—the horror!”. Recall that I wrote this in response to Ben’s claim about how much good stuff there is:
I do think there are a lot more bad takes on LW than before, but obviously just way more frequent good content than there was on LW 1.0. If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week, which I expect is probably way faster than the old Featured/Main updates were in like 2011-15 (i.e. most of the period after which Eliezer ceased his daily posting).
And I am saying: no, actually, this is false. If you just read the curated posts, you will not, in fact, find “post after post of insightful + thoughtful content 2-3x per week”. Not even close.
This is important, because the “but look how much good stuff there is!” argument gets brought out whenever we have this “look how much bad stuff there is!” argument. In other words, the claim that gets made is “yes we have lower standards than you might like, but that’s the price of attracting all of this good stuff that we’ve got”. If it is not in fact true that there is a lot of good stuff, then that reply loses all of its force.
The big question for me is where I should post (LW, or somewhere else) if I want Said-style feedback on something I think is rationality-related. Sure, I could just e-mail him directly, but I’d rather have that kind of feedback from more than one person.
The answer is not obviously “Less Wrong”, which is alarming.
The answer is not obviously “Less Wrong”, which is alarming.
Why alarming? I don’t think LessWrong is the hub for any one sort of feedback, but on balance it seems like a good source of feedback. Certainly Said & his approach isn’t the best possible response in every circumstance; I’m sure even he would agree with that, even if he thinks there should be more of it.
Because it used to be the obvious place to post something rationality-related where one could get good critical feedback, up to and including “you’re totally wrong, here’s why” or “have you considered…?” (where considering the thing totally invalidates or falsifies the idea I was trying to put forward).
I haven’t looked much into AI doom, but I still find some posters here useful. Just to note: a lot of posts critical of AI doom do get a lot of upvotes if they appeal to the attitudes of LessWrong users.
The recent Gordon Seidoh Worley/Said Achmiz blowup and the subsequent threads (1, 2) it spawned, along my own involvement in them, got me thinking a bit about this site, on a more nostalgic/meta level.
To be clear, I continue to endorse my belief that Said is right about most of the issues he identifies, about the epistemic standards of this site being low, and about the ever-present risk that absent consistent and pointed (reasonable) criticism, comment sections and the site culture will inevitably devolve into happy death spirals over applause lights.
And yet… lukeprog hasn’t been seriously active on this site for 7 years, Wei Dai hasn’t written a post in over a year (even as he engages in productive discussions here occasionally), Turntrout mostly spends his time away from LW, Quintin Pope spends all his time away from LW, Roko comments much less than he used to more than a decade ago, Eliezer and Scott write occasional comments once every 3 months or so, Richard Ngo has slowed down his pace of posting considerably, gwern posts here very infrequently (and when he does, it’s usually just linking to other places), Duncan Sabien famously doesn’t spend time here anymore, lsusr said an official goodbye (edit: it was an April Fool’s joke) months ago...
While speculating about the private or subconscious beliefs of others is rightly frowned upon here in general, I will say I do suspect some of the moderator pushback to Said comes from the (IMO correct) observation that… LW is just missing something, something that Said contributed, at least a bit, to pushing away in the aggregate (even if any one given action of his was by itself worthwhile from a cost/benefit perspective). Something that every single one of these authors used to provide in the past, something that used to prevent “the project of thinking more clearly [from falling] by the wayside”, something which resulted in “questions left in the articles for commenters to answer”, something that’s a bit hard to fully pin down...
Back in 2009, Eliezer wrote “Rationality: Common Interest of Many Causes” and talked about “Raising the Sanity Waterline” in broader society. He then wrote HPMOR; later on, he wrote Inadequate Equilibria, both of which were flawed but worthwhile books. Scott started the SSC and talked about everything, from science and rationality to politics and economics to medicine and social dynamics. There was a certain… vibe, for lack of a better term, connected with all this. It’s what spawned the original LW wave of support for CFAR, from people who were not sufficiently plugged into the social dynamics on the ground to realize that was apparently never what CFAR was supposed to be about. It’s what got people hopeful about The Martial Art of Rationality, a sense that a rationality dojo is possible. It’s what’s embodied in one of the best and most emblematic comments ever written on this site, namely gwern’s pointed and comprehensive takedown of Eliezer’s FAQ on technological unemployment. It’s a sense of curiosity embodied in the virtue of scholarship. It’s covering a breadth of topics for the sake of discussing them and becoming stronger and more knowledgeable.
Now, it’s mostly just AI. But honestly, it’s not even the long conversations or (somewhat) shorter debates about AI that used to generate and propagate tremendous insights. It’s… different. Even when it’s not AI, the conversation feels… stilted, lacking in a way. The engagement feels low, it feels off; the comment section isn’t producing totally different yet insightful models of the problems discussed in posts and spawning long and fruitful conversations anymore. I’m not sure what to really make of it.
There are some who buck this trend. Viliam, Steve Byrnes, jefftk, Sarah Constantin… I’m sure I’m missing some names. But it’s just not what it used to be overall.
At least on my own account, I can say that Said Achmiz’s replies are not responsible for me not commenting/posting on LW2: he rarely replies to me, and we do all our arguing on IRC anyway. He is probably indirectly responsible for me writing less here by his work on Gwern.net, but that should not be held against him. (After all, LW2 would not look or function as it does without that work.)
From my perspective, I shifted off LW2 as a main writing outlet a long time ago for a mix of reasons about both LW2 and myself. I don’t think there is any feature, or set of features, which could make me switch to writing primarily on LW2.
I was using LW1 for things I now use Gwernnet for—my bibliography comments or posts are now just tags (recent example, or just the bookmarks/newest-links page in general). I am less interested in arguments or critiques when I have so much more of my own writings I would like to do, as now I suffer from an embarrassment of riches in things I’d like to write compared to 2015, and I get less out of arguing than I did when starting out. (Being able to ban a LW1 user like Lumifer from my posts/comments would have changed this only slightly.) And having my own website, and Said Achmiz to implement more complex features on demand, has obviously made me much more interested in writing primarily for my own site and tailoring the medium to the message. (I can make “Bell, Crow, Moon” default to ‘dark-mode’ and randomize the illustration image, for example, or I can make “October The First Is Too Late” switch to dark-mode at a key point and hide the spoilers using ‘reader-mode’, which lets me write that page in a novel way, similar to “It Looks Like You’re Trying To Take Over The World”.)
And similar to Scott Alexander, the more committed I am to Gwernnet, the more I am incentivized to write for it, to explore designs and build the brand and consolidate everything in one place for the AIs etc. (Even if I am not making anything like Scott’s $500k ⧸ (365⁄3) ≈ $5k per page I post!) This might change a little now that we have finished developing a lightweight ‘blog’ feature for Gwernnet which makes writing effort-posts off-site much less of a waste, but nevertheless, my priority these days is building up Gwernnet—not LW2.
LW1 was, back then, much more of a general-tech-interest website: closer to Hacker News than Alignment Forum. The latest meta-analysis on the Replication Crisis in psychology? “Sure, why not.” Dubious new Russian nootropic cerebrolysin? “Yeah, we can discuss that, we have >40 years until AGI, after all… - wait, chapter 19 of Methods of Rationality just dropped!” LW2 had to narrow down in scope under the pressure of ever-shorter AI timelines. (No one would be too interested in starting CFAR today to ‘raise the sanity waterline’.) So that has had a cost in diversity. There would not be much point in submitting my, say, ~11 cat psychology pages to LW2, although LW1 probably would’ve loved them all as a ‘catquences’. I also have made a few strategic decisions, like deciding back in October 2020 to set up /r/MLScaling on Reddit to aim at a more centrist coverage of AI scaling which got down-in-the-weeds with every relevant paper or link rather than flood LW2 with that material, and leave AI safety discussions to LW2/EAF/AF.
Also, writing careers change over time. Personally, I would be suspicious of anyone who write a lot but was writing as much on LW2 in 2025 as they were on LW1 in 2015. I would be thinking to myself, “what are you doing here still? Where are you going? Have you not grown up at all, nor chafe at your limits? Remember: if the chick is not able to break the shell of his egg, he will die without having ever been born.”
(I won’t try to analyze Eliezer’s trajectory here. I don’t understand his post-MoR trajectory from LW1 to Arbital to… Facebook… to forum glowfic… to Twitter? Nor what happened to lukeprog. I would also note that Roko not posting here is a feature, not a bug; have you read his tweets over the past decade...?)
So, I think the right question is not ‘why don’t Scott, Gwern et al write as much on LW2 as they did on LW1?’. It would be weird if we did!
The right question is, ‘where’s the next generation of writers on LW2?’
When I look at ACX, LW2, EAF, my Twitter/Reddit/HN, it does feel like there is a general shortage of good new writers online everywhere, not just LW2. In terms of ‘emerging bloggers’ (including in this Substack, ‘long tweets’ etc), it feels like it remains Millennial/GenX-dominated; I can think of few Zoomers/GenAlpha-type writers of note. (Even relatively new writers who come to mind as being of interest, like TracingWoodgrains or Henrik Karlsson or Cremieux, tend to be older and to have simply recently ramped up writing and have been around for a while beforehand.) There is no LW1 of today.
Maybe I’m just old and out of touch, and they’ve all moved to video? Videos are extremely popular… but so what? Lots of media are popular, in terms of profit or man-hours consumed, that doesn’t mean they are important to the long-term culture or the intellectual goals we have here. If there were incisive rationalist-related videos which were setting the zeitgeist, where are they? Where are the videos introducing new catchphrases I will be using 10 years from now once they’ve become endemic? Why does, eg, ACX seem to be so much more vastly influential?
Perhaps the answer is that the pipeline of writers has been jammed. Maybe “the culture is stuck” and people are hiding in “the dark forest” because the hypersonic winds of social media tear apart everything of immediate value, and destroys the normal progression of writers from low stake safe writings like small comments or interactions like upvotes or editing wikis to longer comments & debates to effort-posts to eventually their own site/newsletter/community for riskier ‘real’ writing. (Not that ‘video’ is the only culprit here—all walled gardens want to infantilize you. A black hole like Discord provides no way to ‘graduate’ from Discord; it wants you to be trapped there forever, writing short comments destined to be forgotten as soon as the screen scrolls past them, emoting and upvoting, and never going anywhere, and using Discord 10 years from now just like you use it today...)
I wouldn’t say the scope was narrowed, in fact the admin team took a lot of actions to preserve the scope, but a lot of people have shown up for AI or are now heavily interested in AI, simply making that the dominant topic. But, I like to think that people don’t think of LW as merely an “AI website”.
The word “people” is doing heavy lifting here, I have found a lot of people online who think just that in tech adjacent circles. Besides gwern seems to be operating under similar premises, so I won’t be surprised if other (less informed) people also had similar takeaway as his.
The YouTube channel Rational Animations seems pretty successful in terms of sheer numbers: 385K subscribers, which is comparable to YouTubers who talk about media and technology. Their videos “The True Story of How GPT-2 Became Maximally Lewd” and “The Goddess of Everything Else” have over two million views. Qualitatively, I have seen their biggest videos mentioned a few times where a LW post wouldn’t be. However, the channel principally adapts existing rationalist and AI-safety content. (Sort the videos by popular to see.) I think they’re good at it. Through their competence, new incisive rationalist-related videos exist—as adaptations of older incisive rationalist-related writing.
I don’t know of another channel like it, even though popular YouTube channels attract imitators, and it is hard to imagine them switching to new ideas. Part of it is the resources involved in producing animation compared to writing. With animation so labor-intensive, it makes sense to try out and refine ideas in text and only then adapt them to video. Posters on video-LW with original high-effort content would come to resent how much each mistake cost them compared to a textual post or comment. AI video generation will make it easier to create videos, but precise control over content and style will still demand significantly more effort than text.
I generally agree with any and all criticisms of Discord, but its search is pretty good. If you know what to search for, you can dig out that old post. Of course, leaving memorable breadcrumbs you can search for three years later is, at best, an art, and in my case seems like something that’s purely luck-of-the-draw when it comes to improbable phrases that you’ve mentioned only once or a handful of times.
On the other hand, Discord users do tend to have less reading stamina; “I ain’t reading all that — I’m happy for you, or sorry that happened” seems to happen more often on Discord unless you’re in a Discord guild that’s pre-selected for people who can read long things — a Gaming Lawyers guild, perhaps.
Yes, that has been my experience too. Sure, Discord (like Twitter) gives you fairly powerful search primitives, to a greater extent than most people ever notice. You can filter by user, date-ranges, that sort of thing… It was written by nerds for nerds, originally, and it shows. However, I have still struggled to find many older Discord comments by myself or others, because it is inherent to the nature of realtime shortform media that it can be extremely difficult to remember the exact breadcrumb, if any, or to write in such a way that your hazy searches years later don’t pull up 500 other hits. (No one argues for doing all documentation as random IRC conversations “because you can just grep your IRC logs”, and I have also sometimes seriously struggled to find old IRC conversations despite remembering them fairly clearly.)
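(To make that concrete, writing purely from memory, so treat the exact operator names as approximate rather than authoritative: Discord’s search bar accepts combinable filters along the lines of from:someuser before:2022-01-01 has:link in:general “exact phrase”, where “someuser” and “general” are hypothetical stand-ins for a real username and channel. Powerful primitives; but if you can’t recall one improbable phrase, even a query like that happily returns hundreds of hits.)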
Without any kind of organization or summarization or FAQ/wiki-like accumulating document, this is inevitable. And Discord doesn’t particularly care about this because it wants you to spend all your time there, not consolidate knowledge or build on past comments or create public knowledge, so it optimizes for that, and no amount of Boolean queries can make up for a design which cares only about the most recent screen of comments.
Which is my point: everything is lost in the rain, despite your tears, and there is no path to growth or long content.
It seems worth noting that 𝕏 search has been broken for quite a while, and shows no sign of improvement.
I was on a few rationality-adjacent Discords and got politely corrected for typing short and frequent messages, which often amounted to rambling. Eventually I became low-status enough that people just ignored what I said or had nothing to add to my conversation.
Which is in strong contrast to the “I am your mogged sigma in ohio, wanna rizz me up with your gyatt skibidi? No? What the sigma? Sus, no cap or I will fanum tax.” type of replies I was used to elsewhere.
(Tangent: I had no idea what that sentence meant; I’ve attached Sonnet 4’s attempted translation [screenshot omitted] in case anyone else was as confused.)
With “gyatt” I meant the latter, newer meaning (Wikipedia).
In normal speech, it would amount to: “I am your guy, do you want to date me, attractive woman? No? Are you sure? Seems suspicious; if you lie I will steal your lifeline/destroy you.” But with multiple levels of irony and a humorous tone.
Also check this video.
I think any explanation here must be compared to the null hypothesis that most people do not sustain blogging for 10 years. My guess is that most Twitter accounts that are popular these days were not big 10 years ago, nor most Reddit accounts, and a similar thing is true of LessWrong accounts. Longevity in blogging like Cowen’s or Alexander’s is not the norm; most people’s life circumstances change, and blogging stops being one of their top hobbies.
This certainly sounds correct.
Nevertheless, there is a pretty serious problem here if you believe (as I do) that a large part of what made LW great early on in terms of rationality content was selection effects causing the best and most insightful rationality-interested writers to move here: people like Eliezer (obviously), Robin Hanson, Anna Salamon, Scott back when he was known as Yvain, gwern, (later on) Duncan Sabien etc.
Once the well of insightful new blood starts running dry (because the “lowest hanging fruit” potential contributors have already been attracted to the site) and the old guard starts retiring/moving on, keeping the lifeline going depends more on in-house training and “building” the culture and community to turn rationality learners into dojo teachers. (In my understanding, something similar caused the death of LW 1.0 as well.)
Not only is this hard in theory and mostly hasn’t panned out in practice, it also doesn’t seem to have been prioritized all that much (has LW-rationality been marketed as the common interest of many causes and types of individuals, instead of just narrowly appealing to nerdy, CS-inclined young Western STEM types? has there been an emphasis on raising the sanity waterline globally, or have projects marketed as doing that instead focused only on building narrow pipelines for math talent to join MIRI? was the challenge of writing a Level 2 to the Sequences, which Eliezer endorsed, ever actually taken up, or was it all left to Eliezer himself to come back and partly write it?).
It might be nice to move all AI content to the Alignment Forum. I’m not sure the effect you’re discussing is real, but if it is, it might be because LW has become a de facto academic journal for AI safety research, so many people are posting without significant engagement with the LW canon or any interest in rationality.
The current rules around who can post on the Alignment Forum seem a bit antiquated. I’ve been working on alignment research for over 2 years and I don’t know off the top of my head how to get permission to post there. And I expect the relevant people to see stuff if it’s on LW anyway.
https://www.lesswrong.com/posts/P32AuYu9MqM2ejKKY/so-geez-there-s-a-lot-of-ai-content-these-days
I think you’re comparing the goals of past lesswrong to the goals of present lesswrong. I don’t think present lesswrong really has the goal of refining the art of rationality anymore. Or at least, it has lost interest in developing the one meta-framework to rule them all, and gained much more interest in applying rationality & scholarship to interesting & niche domains, and seeing what generalizable heuristics it can learn from those. Most commonly AI, but look no further than the curated posts to find other examples. To highlight a few:
How to Make Superbabies
AI 2027: What Superintelligence Looks Like
Explaining British Naval Dominance During the Age of Sail
Will Jesus Christ return in an election year?
Broad-Spectrum Cancer Treatments
And I do think, at this stage, this is the right collective move to make. I do often roll my eyes when I see new insight-porn on the front page. It is almost always useless. I think actually going out, doing some world-modeling, and solving problems is what it looks like to refine the art after you’ve read, like, the sequences, and superforecasting, and some linear algebra.
I’m not saying there should be no meta-thinking, but once your epistemics aren’t embarrassingly incompetent, and you’ve absorbed most of the good philosophical arguments here, the way to meta-think will end up looking like doing a bunch of deep dives, forecasts, and solving problems, then coming back to the community, presenting your results, and thinking about how you could’ve done that better.
To borrow an old metaphor, you don’t get good at martial arts by sitting alone in a room thinking about how to be a good martial artist. You get good by going out and actually fighting. And similarly, you shouldn’t trust people giving you “rationality advice” unless they themselves are accomplished in a wide variety of fields (or, of course, have sufficiently good (read: mathematical) arguments on their side).
Edit: I think what’s going on here, to a large extent, is also nostalgia on your part. The past of LessWrong was different, but I don’t think it was better than what we have now. I for one wouldn’t trade one GeneSmith for 10 Duncans!
I’m not sure that all of this kind of wandering-away is something that could reasonably be prevented, though.
I think it’s a combination of:
Many of the Old Greats™ are moderately tapped out.
Many of the Old Greats™ have other, more appealing places to post, and there’s basically nothing LW could do to bait them back.
Gwern likes his own site. I don’t think his site was that attractive to him ten years ago.
Scott now has a Substack that gets him megabucks, and he’s much more of a conventional thinker now than he was ten years ago. For him, posting here basically means leaving money on the table.
Eliezer has his book that he’s finishing up. I’m not sure if he’s still posting a lot on Facebook.
I was going to say Roko has an active Twitter account, but apparently he’s nuked all his tweets. At any rate, he has a Substack (latest post: 12/12/2024), so he has an obvious other place to post things, and quite possibly a financial disincentive to POSSE them here. Plus, it’s not like his current publicly-stated political preferences are anywhere near the Bay Area Overton Window, so AFAICT he has a total lack of incentive to have people with Bay Area political preferences get better at systematized winning.
Now, you (i.e. anyone who might be reading this) might be wondering “well, where’re your current and future contributions, hotshot?”
A fair question!
I have roughly an order (or two) of magnitude fewer rationality-relevant ideas (compared to anything else) that I can post on the Internet. I have a “Rationalist-adjacent” folder in my Drafts folder, and it has 13 things in it. A few of them aren’t really all that rationalist-adjacent, a few of them I’m never going to publish, for reasons, and a few of them I can’t really get the politics out of, and I don’t want to be a political poster on the Internet.
So there are a couple good ones in there, but polishing any of them to the point where they’re explained at enough length to win someone over (as opposed to the people who will see a one-line summary of my idea and say “oh yeah, he’s right; I just never thought of it that way before”) sounds like pulling teeth (or maybe polishing turds) for not much extra benefit to me, or anyone else.
Sure, I could post them on my Shortform, but I figure the odds of them getting picked up by a better writer who then polishes them into something good that advances the cause of rationality are… slim to none.
A couple of related terms: a skill corridor, or competency plateau, exists when a community both fails to cultivate newbies (creating a skill floor) and suffers brain drain, as people above a certain skill ceiling tend to leave for the better opportunities available to them.
I think this is mostly just the macro-trend of the internet shifting away from open forums and blogs and towards the “cozy web” of private groupchats etc., not anything specific about LessWrong. If anything, LessWrong seems to be bucking the trend here, since it remains much more active than most other sites that had their heyday in the late 00s.
I don’t have any dog in the Achmiz/Worley debate, but I’m having trouble getting in the headspace of someone who is driven away from posting here because of one specific commenter.
First of all, I don’t think anyone is ever under any obligation to reply to commenters at all—simply dropping out of a conversation thread doesn’t feel rude/confrontational in the way it would be to say IRL “I’m done talking to you now.”
Second, I would find it far more demotivating to just get zero engagement on my posts—if I didn’t think anybody was reading, it’s hard to justify the time and effort of posting. But otherwise, even if some commenters disagree with me, my post is still part of the discourse, which makes it worthwhile.
In the tradeoff between emphasis on intellectual exploration vs. emphasis on correctness and applicability LW seems to have moved closer to the latter, and I think you’re mourning the former. I do feel this has been driven largely by AI moving from a speculative idea to very much a reality.
Also re: Said Achmiz—to quote The Big Lebowski, “You’re not wrong Walter. You’re just an asshole.”
(I agree with Said on the object level more often than not but his tone can be more abrasive than necessary. But then again too much agreeableness can make it hard to get at the truth. Sometimes the truth hurts.)
That one is an April Fools post. Judging by lsusr’s user page, they’ve continued participating since then.
Oops! That’s a pretty embarrassing error. I remembered his comment complaining about contemporary LW and saying it might be more worthwhile for him to transition to video content on other platforms, and I incorrectly pattern-matched that to his post.
Nice catch!
I’m very sure that gwern’s lack of posting here has nothing to do with any sort of “Said might post annoying comments under my posts” concern. (Gwern can speak for himself, of course, but I am just registering my prediction about what he would/will say.)
… does not get paid large sums of cash money for posting on LW. This seems to quite suffice to explain his preference for Substack.
As far as I can tell, he hasn’t been seriously active anywhere else, either. (Heck, he doesn’t even seem to care about maintaining access to his old writings—try clicking on some links to his site, in older LW posts; most of them are dead links, now.)
… I rather think that the blame for that one can be placed squarely on a certain someone’s shoulders, and that someone sure ain’t me.
I remain highly skeptical of claims about my alleged effects (via my commenting, anyhow) on anyone or anything, but at least in principle I’m fine with accepting responsibility for the consequences of my actions (whether those consequences are good or bad is, of course, an entirely different question). But a lot of the stuff that you mention here has nothing whatsoever to do with me, or with anyone like me, or anything connected to me in any way.
P.S.: Apologies in advance for what will be a lack of prompt replies to comments; it seems that I am currently rate-limited such that I can only post one comment per day, on the whole site.
Out of curiosity, what evidence would change your mind? I naively expected habryka’s comment would but you don’t seem to agree.
(Feel free not to reply – the rate-limit on you is pretty severe, and my query is mere idle curiosity.)
This one seems pretty easy. If multiple notable past contributors speak out themselves and say that they stopped contributing to LW because of individual persistently annoying commenters, naming Said as one of them, that would be pretty clear evidence. Also socially awkward of course. But the general mindset of old-school internet forum discourse is that stuff people say publicly under their own accounts exists and claimed backchannel communications are shit someone made up to win an argument.
I disagree-voted this comment [edit: reversed now because I misread the comment I’m replying to] because the sort of pushback Said typically gives doesn’t remind me of “the good old days” (I think that’s a separate thing). But I want to flag that, as someone who’s had negative reactions to Said’s commenting style in the past, over the past two years or so I’ve noticed several times where I thought he left valuable comments or criticism that felt on point, and I’ve noticed a lot fewer (possibly zero) instances of “omg this feels uncharitable and nitpicky/deliberately playing dense.” So, for my part at least, I no longer consider myself as having strong opinions on this topic.
(Note that I haven’t read the recent threads with Gordon Seidoh Worley, so this shouldn’t be interpreted as me taking a side on that.)
Your interpretation seems like the opposite of sunwillrise’s remark about Said?
Oh, thanks! Yeah, I should reverse my vote, then. I got confused by the sentence structure (and commenting before my morning coffee).
Could you, or someone who agrees with you, be specific about this? What exactly are the higher standards of discussion that are not being met? What are the endemic epistemic errors that are being allowed to flourish unchecked?
I wouldn’t say they are being allowed to flourish unchecked; au contraire, most of them get corrected by the Said-aligned group of users on this site (but in a way high-status authors and, apparently, also moderators seem to disapprove of strongly).
To give some illustrative examples:
1. A failure to comprehend the basics of what truth and evidence mean and how to communicate them properly (which sits at the core of what LW has been about ever since the Sequences)
2. Vague and mystifying “woo”-inspired nonsense getting reified without a moment’s thought, combined with a combative attitude at the suggestion that truth ought to be simple
3. The overuse of applause lights and semantic stopsigns to create happy death spirals over words and concepts that pay no rent
4. The overuse of LW jargon and emotional pulling on heartstrings to introduce epistemic superweapons while masking the lack of content and the failure to distinguish critical aspects of the topic being discussed
5. Getting upset at (and trying to argue against) the critical POC|GTFO aspect of asking for examples whenever an author asserts they have discovered a new, critical insight (and correctly concluding the author is full of shit when he/she fails to step up to the challenge)
6. Vague and nonsensical appeals to the wisdom of religion while ignoring the most important aspects that have been discussed ad nauseam on this site for one and a half decades
7. Same as 6, but this time it’s meditation or Buddhism (it’s almost always meditation or Buddhism)
8. An overt focus on the social aspects of important matters while dismissing the relevance of anything about epistemics
… (I can go on for hours)
Overall, it’s a failure by (mostly high-status, mostly high-karma) users who ought to know a heck of a lot better to comprehend the foundational principles behind LW truth-seeking, namely the lessons of “Noticing Confusion” and “37 Ways That Words Can Be Wrong.” In particular, it’s often a deliberate failure, caused by selfish desires to manipulate discourse and accrue personal status.
From my perspective, there’s been something of a democratization of people sharing their opinions on LessWrong, where way more people feel comfortable writing and opining on the site than they did 10 years ago, including many people who are less ideologically on board with the founding writing of LessWrong. This has led to far lower standards in the bottom ~30% of cases, but has allowed a much wider set of ideas and considerations to be sorted through and to rise to the top (when weighted by karma & attention).
I do think there are a lot more bad takes on LW than before, but obviously just way more frequent good content than there was on LW 1.0. If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week, which I expect is probably way faster than the old Featured/Main updates were in like 2011-15 (i.e. most of the period after which Eliezer ceased his daily posting).
I continue to think a frame of “we need to make all the worst content better” or “we need to accurately label all the worst content as ‘bad’” is a deep black hole that will eat all of your effort and you will never succeed. This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
I think the main cause of less greatness is less great writing. Nobody on the entire damn internet has in my opinion matched Eliezer’s writing quality x frequency x insight during the publication of the sequences, and certainly not on LessWrong. That was what attracted much of the greatness. There’s been a lot of good on LessWrong that’s attracted good writers, and better than most places, but Eliezer-writing-the-sequences is not something one simply “does again”.[1]
(And this difficulty has essentially nothing to do with the sorts of comments that Said writes.)
Though I did spend all of yesterday and today working out the details of a project to cause something quite similar to happen, and got Eliezer’s feedback on it, which I continue to feel is promising. So I am trying!
Seriously, people, go back to a randomly selected comment section from 10 years ago. Go back to a random discussion post from 10 years ago. These were not, in the median, better posts or comments! Indeed, they were very consistently much much worse.
I don’t think it’s even the case that more bad takes are written now that we have more content. The ratio to the most active time of LW 1.0 is only like 2x or 3x, and indeed those most active times were the ones where you had a ton of really dumb political discussions, and pickup artistry discussions and everything was inundated with people who just showed up because of HPMoR, which to be clear included me, but I was still a dumb commenter with dumb takes.
Fwiw, I personally choose to write criticism only in spots where it’s important yet missing (sometimes to the point where it seems everyone else is dropping the ball by allowing the authors to push a frame that’s wrong/misleading/incomplete/insufficiently argued for). Illustrative examples include Critch’s post on LLM consciousness, Bensinger’s post (and Ruby’s curation) on computationalism and identity, Abram Demski’s post on Circular Reasoning, Said’s skepticism of “statements should be at least two of true, necessary/useful, and kind,” cursory references to CEV by many top users on this site (including Habryka), Rohin Shah arguing Eliezer’s presentation of coherence arguments is fine instead of deeply misleading, etc.
One thing virtually all of these have in common is that they all come from highly reputable users on this site, they often get praise from other top users, and yet I think they’re all wrong but nobody else seems to have identified (and enunciated!) the critical issues on my mind.
(Note all the examples I chose for the grandparent comment also follow the same pattern. It’s not average Joe Schmoe failing to apply basic rules of epistemics, it’s reputable users on the level of Valentine, as an example.)
This is, actually, far from obvious, at least to me. LW 1.0 really went downhill in the last several years before the relaunch, so it’s not implausible that what you say is true for the period of, say, 2014–2017… but even then, I wouldn’t bet a lot of money on it.
Bit of an overestimate. There aren’t even any curated posts “2–3x per week”, never mind “insightful + thoughtful” ones…
But that’s fine, “more rare but more good” is great, 1–2 a week is generally enough, if they’re good enough, and I’ve explicitly endorsed a move in that direction in the past; so let’s see how the last 20 curated posts (i.e., one full page of them, on GreaterWrong) stack up:
interesting language thing
the “armchair speculations about medical stuff” genre is really just way too easy to write bullshit in, so who knows whether this one’s any good (it’s not like we’ve got a bunch of real experts weighing in…)
interesting “field report”
very interesting review of the state of a field
good post, good point, no complaints
superficial appearance of usefulness, actually just a pile of worthless garbage; mod who curated this one clearly didn’t read it (just like most upvoters and commenters, probably)
contentless vibes (comments also full of contentless vibes)
technical(?) AI stuff
technical AI stuff; not my domain of expertise, I’ll just assume that this one is very good, why not
interesting examination of a concept, with useful examples
basically insight porn
one of the worst pieces of pernicious bullshit I’ve ever read on this website (par for the course for this author, though)
more AI stuff, mostly no comment on this one, but discussion in comments seems good (as in, I see important points being discussed sanely)
the subject matter is interesting and good to know, but the treatment here is amateurish; this would be fine if we had more people interested in this sort of thing who could correct misconceptions in the comments, but alas… still, probably good on net
technical AI stuff
glorified “shower thought” (also par for the course for this author); at least it started some not-completely-worthless discussion in the comments
technical AI stuff
seems useful for people who care more about the subject matter than I do, which is fine
technical AI stuff
also shower thoughts / insight porn, but this one is mildly interesting, I guess
(These are deliberately shuffled from their displayed reverse-chronological order, since my point here is the aggregate trends, not criticism of any particular post.)
Not a great record. The technical AI stuff is all fine, I don’t really have any complaints about such posts even if most of them sail over my head. The good:crap ratio in the rest of it is deeply unimpressive. And this is just the curated posts!
Why? Seems fairly easy, actually. (The “label” one, not the “make it better” one; as you know, I favor selective methods over corrective ones.)
Yes, well, here’s the thing about that…
First, you do not know in advance which writing is the top 20% and which is the bottom 20%. That’s a big part of what discussions in the comments are for. And yes, that includes comments like “examples?” or “what do you mean by [some word]?”, or “that part makes no sense”. That sort of thing makes good writing better (thereby revealing its goodness, which may’ve been somewhat obscured to begin with), while showing bad writing for what it is.
Second… the grandparent comment links to several posts on which I left critical comments. Now, were these posts in the top 20%, or in the bottom 20%?
If they were in the top 20%, then my critiques of these posts satisfy your expressed desire for critique of the top 20% of writing.
But if they were in the bottom 20% of writing, then their authors can hardly be claimed to be the sort of “good writers” of “good content” whom we wish to retain on Less Wrong…
Including Eliezer himself.
I think, at this point, that the “attract more writers, and then somehow this results in LW producing stuff as great as [some old stuff from back in the day]” plan is a failed project. You can’t get quality out of quantity like this.
I… don’t get your overall judgement. Didn’t you just say that within the last few weeks the curated feed included:
interesting – 3
AI stuff – 4
very interesting – 1
mildly interesting – 1
who knows – 1
good post no complaints – 1
good – 1
good on net – 1
assume is very good – 1
seems useful though not for me – 1
contentless vibes – 1
insight porn – 1
glorified shower thought – 1
worthless garbage – 1
worst pieces of pernicious bullshit – 1
If we count the AI stuff you didn’t comment on as good, which I think it generally is and which makes sense by your lights when judging LessWrong, then that’s like 5⁄15 being bad by your lights, and like 8⁄15 actively good by your lights.
That… seems like a pretty solid hit rate? In your own words, if you are bothered by the bad ones, why not just move on and ignore them? You don’t have to engage with them, and this hit rate by your own judgement seems hardly indicative of something terrible going on.
IDK, maybe you meant to convey some different vibe with your list of judgements, but I was very confused by the contrast of your list seeming pretty positive, and then somehow, because you don’t like 1⁄3 of the posts, you end up at the conclusion of “The good:crap ratio in the rest of it is deeply unimpressive”.
Well, first of all, you’ve miscounted somehow… I don’t want to get too far into the weeds about each individual example, but here’s how I’d characterize my list:
shouldn’t be on LW at all (but since that’s not really how LW is run now, let’s call this one “shouldn’t be anywhere but the author’s personal blog section”): 5
fine for LW, but definitely not “curated”-quality (and says something very sad about LW if it is included in “curated”): 6
worthy of “curated”: 3
~technical AI stuff: 6
Remember, this is just the “curated” posts. If I were listing from the “All Posts” feed, or probably even from the “Frontpage Posts” feed, then of course you would be right to say “don’t like? don’t read!”. But my point isn’t “sometimes people post bad or mediocre posts on Less Wrong dot com—the horror!”. Recall that I wrote this in response to Ben’s claim about how much good stuff there is:
And I am saying: no, actually, this is false. If you just read the curated posts, you will not, in fact, find “post after post of insightful + thoughtful content 2-3x per week”. Not even close.
This is important, because the “but look how much good stuff there is!” argument gets brought out whenever we have this “look how much bad stuff there is!” argument. In other words, the claim that gets made is “yes we have lower standards than you might like, but that’s the price of attracting all of this good stuff that we’ve got”. If it is not in fact true that there is a lot of good stuff, then that reply loses all of its force.
The big question for me is where I should post (LW, or somewhere else) if I want Said-style feedback on something I think is rationality-related. Sure, I could just e-mail him directly, but I’d rather have that kind of feedback from more than one person.
The answer is not obviously “Less Wrong”, which is alarming.
I am imagining an occasional Hard Mode Thread on Less Wrong, with specific rules for conversation.
Or maybe there is a proper prompt you could give an AI.
I think if you ask for it, you will get plenty of it, I am sure. I also expect it from more people than just Said.
Why alarming? I don’t think LessWrong is the hub for any one sort of feedback, but on balance it seems like a good source of feedback. Certainly Said & his approach isn’t the best possible response in every circumstance, I’m sure even he would agree with that, even if he thinks there should be more of it.
Because it used to be the obvious place to post something rationality-related where one could get good critical feedback, up to and including “you’re totally wrong, here’s why” or “have you considered…?” (where considering the thing totally invalidates or falsifies the idea I was trying to put forward).
LessWrong was at its best from 2008 to 2012; now it’s just groupthink for AI doomers.
I haven’t looked much into AI doom, and I still find some posters here useful. Just to note, a lot of posts criticizing AI doom do get a lot of upvotes if they appeal to the attitudes of LessWrong users.