From my perspective, there’s been something of a democratization of people sharing their opinions on LessWrong: way more people feel comfortable writing and opining on the site than they did 10 years ago, including many people who are less ideologically on board with the founding writing of LessWrong. This has led to far lower standards in the bottom ~30% of cases, but has allowed a much wider set of ideas and considerations to be sorted through and to rise to the top (when weighted by karma & attention).
I do think there are a lot more bad takes on LW than before, but obviously just way more frequent good content than there was on LW 1.0. If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week, which I expect is probably way faster than the old Featured/Main updates were in like 2011-15 (i.e. most of the period after which Eliezer ceased his daily posting).
I continue to think a frame of “we need to make all the worst content better” or “we need to accurately label all the worst content as ‘bad’” is a deep black hole that will eat all of your effort and you will never succeed. This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
I think the main cause of less greatness is less great writing. Nobody on the entire damn internet has, in my opinion, matched Eliezer’s writing quality × frequency × insight during the publication of the sequences, and certainly not on LessWrong. That was what attracted much of the greatness. There’s been a lot of good on LessWrong that’s attracted good writers, more so than most places, but Eliezer-writing-the-sequences is not something one simply “does again”.[1]
(And this difficulty has essentially nothing to do with the sorts of comments that Said writes.)
Though I did spend all of yesterday and today working out the details of a project to cause something quite similar to happen, and got Eliezer’s feedback on it, which I continue to feel is promising. So I am trying!
I do think there are a lot more bad takes on LW than before
Seriously, people, go back to a randomly selected comment section from 10 years ago. Go back to a random discussion post from 10 years ago. These were not, in the median, better posts or comments! Indeed, they were very consistently much much worse.
I don’t think it’s even the case that more bad takes are written now that we have more content. The ratio to the most active time of LW 1.0 is only like 2x or 3x, and indeed those most active times were the ones where you had a ton of really dumb political discussions and pickup-artistry discussions, and everything was inundated with people who just showed up because of HPMoR (which, to be clear, included me, but I was still a dumb commenter with dumb takes).
This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
Fwiw, I personally choose to write criticism only in spots where it’s important yet missing (sometimes to the point where it seems everyone else is dropping the ball by allowing the authors to push a frame that’s wrong/misleading/incomplete/insufficiently argued for). Illustrative examples include Critch’s post on LLM consciousness, Bensinger’s post (and Ruby’s curation) on computationalism and identity, Abram Demski’s post on Circular Reasoning, Said’s skepticism of “statements should be at least two of true, necessary/useful, and kind,” cursory references to CEV by many top users on this site (including Habryka), Rohin Shah arguing Eliezer’s presentation of coherence arguments is fine instead of deeply misleading, etc.
One thing virtually all of these have in common is that they all come from highly reputable users on this site, they often get praise from other top users, and yet I think they’re all wrong but nobody else seems to have identified (and enunciated!) the critical issues on my mind.
(Note all the examples I chose for the grandparent comment also follow the same pattern. It’s not average Joe Schmoe failing to apply basic rules of epistemics, it’s reputable users on the level of Valentine, as an example.)
obviously just way more frequent good content than there was on LW 1.0
This is, actually, far from obvious, at least to me. LW 1.0 really went downhill in the last several years before the relaunch, so it’s not implausible that what you say is true for the period of, say, 2014–2017… but even then, I wouldn’t bet a lot of money on it.
If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week
Bit of an overestimate. There aren’t even any curated posts “2–3x per week”, never mind “insightful + thoughtful” ones…
But that’s fine: “more rare but more good” is great, and 1–2 a week is generally enough if they’re good enough; I’ve explicitly endorsed a move in that direction in the past. So let’s see how the last 20 curated posts (i.e., one full page of them, on GreaterWrong) stack up:
interesting language thing
the “armchair speculations about medical stuff” genre is really just way too easy to write bullshit in, so who knows whether this one’s any good (it’s not like we’ve got a bunch of real experts weighing in…)
interesting “field report”
very interesting review of the state of a field
good post, good point, no complaints
superficial appearance of usefulness, actually just a pile of worthless garbage; mod who curated this one clearly didn’t read it (just like most upvoters and commenters, probably)
contentless vibes (comments also full of contentless vibes)
technical (?) AI stuff
technical AI stuff; not my domain of expertise, I’ll just assume that this one is very good, why not
interesting examination of a concept, with useful examples
basically insight porn
one of the worst pieces of pernicious bullshit I’ve ever read on this website (par for the course for this author, though)
more AI stuff, mostly no comment on this one, but discussion in comments seems good (as in, I see important points being discussed sanely)
the subject matter is interesting and good to know, but the treatment here is amateurish; this would be fine if we had more people interested in this sort of thing who could correct misconceptions in the comments, but alas… still, probably good on net
technical AI stuff
glorified “shower thought” (also par for the course for this author); at least it started some not-completely-worthless discussion in the comments
technical AI stuff
seems useful for people who care more about the subject matter than I do, which is fine
technical AI stuff
also shower thoughts / insight porn, but this one is mildly interesting, I guess
(These are deliberately shuffled from their displayed reverse-chronological order, since my point here is the aggregate trends, not criticism of any particular post.)
Not a great record. The technical AI stuff is all fine, I don’t really have any complaints about such posts even if most of them sail over my head. The good:crap ratio in the rest of it is deeply unimpressive. And this is just the curated posts!
I continue to think a frame of “we need to make all the worst content better” or “we need to accurately label all the worst content as ‘bad’” is a deep black hole that will eat all of your effort and you will never succeed.
Why? Seems fairly easy, actually. (The “label” one, not the “make it better” one; as you know, I favor selective methods over corrective ones.)
This isn’t to say I wouldn’t like to see more critique, but I want to encourage more critique of the top ~20% of writing, not the bottom ~20% of writing.
Yes, well, here’s the thing about that…
First, you do not know in advance which writing is the top 20% and which is the bottom 20%. That’s a big part of what discussions in the comments are for. And yes, that includes comments like “examples?” or “what do you mean by [some word]?”, or “that part makes no sense”. That sort of thing makes good writing better (thereby revealing its goodness, which may’ve been somewhat obscured to begin with), while showing bad writing for what it is.
Second… the grandparent comment links to several posts on which I left critical comments. Now, were these posts in the top 20%, or in the bottom 20%?
If they were in the top 20%, then my critiques of these posts satisfy your expressed desire for critique of the top 20% of writing.
But if they were in the bottom 20% of writing, then their authors can hardly be claimed to be the sort of “good writers” of “good content” whom we wish to retain on Less Wrong…
Nobody on the entire damn internet has in my opinion matched Eliezer’s writing quality x frequency x insight during the publication of the sequences, and certainly not on LessWrong.
Including Eliezer himself.
I think, at this point, that the “attract more writers, and then somehow this results in LW producing stuff as great as [some old stuff from back in the day]” plan is a failed project. You can’t get quality out of quantity like this.
I… don’t get your overall judgement. Didn’t you just say that within the last few weeks the curated feed included:
interesting – 3
AI stuff – 4
very interesting – 1
mildly interesting – 1
who knows – 1
good post no complaints – 1
good – 1
good on net – 1
assume is very good – 1
seems useful though not for me – 1
contentless vibes – 1
insight porn – 1
glorified shower thought – 1
worthless garbage – 1
worst pieces of pernicious bullshit – 1
If we count the AI stuff you didn’t comment on as good, which I think it generally is and which makes sense by your lights as a judgement of LessWrong, then that’s like 5/15 being bad by your lights, and like 8/15 actively good by your lights.
That… seems like a pretty solid hit rate? In your own words, if you are bothered by the bad ones, why not just move on and ignore them? You don’t have to engage with them, and this hit rate by your own judgement seems hardly indicative of something terrible going on.
IDK, maybe you meant to convey some different vibe with your list of judgements, but I was very confused by the contrast between your list seeming pretty positive and you then somehow, because you don’t like 1/3 of posts, ending up at the conclusion that “The good:crap ratio in the rest of it is deeply unimpressive”.
Well, first of all, you’ve miscounted somehow… I don’t want to get too far into the weeds about each individual example, but here’s how I’d characterize my list:
shouldn’t be on LW at all (but since that’s not really how LW is run now, let’s call this one “shouldn’t be anywhere but the author’s personal blog section”): 5
fine for LW, but definitely not “curated”-quality (and says something very sad about LW if it is included in “curated”): 6
worthy of “curated”: 3
~technical AI stuff: 6
Remember, this is just the “curated” posts. If I were listing from the “All Posts” feed, or probably even from the “Frontpage Posts” feed, then of course you would be right to say “don’t like? don’t read!”. But my point isn’t “sometimes people post bad or mediocre posts on Less Wrong dot com—the horror!”. Recall that I wrote this in response to Ben’s claim about how much good stuff there is:
I do think there are a lot more bad takes on LW than before, but obviously just way more frequent good content than there was on LW 1.0. If you just read the curated posts, you’ll just find post after post of insightful + thoughtful content 2-3x per week, which I expect is probably way faster than the old Featured/Main updates were in like 2011-15 (i.e. most of the period after which Eliezer ceased his daily posting).
And I am saying: no, actually, this is false. If you just read the curated posts, you will not, in fact, find “post after post of insightful + thoughtful content 2-3x per week”. Not even close.
This is important, because the “but look how much good stuff there is!” argument gets brought out whenever we have this “look how much bad stuff there is!” argument. In other words, the claim that gets made is “yes we have lower standards than you might like, but that’s the price of attracting all of this good stuff that we’ve got”. If it is not in fact true that there is a lot of good stuff, then that reply loses all of its force.