Six years ago we introduced Shortform (later renamed Quick Takes) to LessWrong. Here’s a meme-format thing we made at the time. How do people reckon it’s gone? h/t @Raemon
I think that the Shortform feature has gone just fine, but that meme shows a complete misunderstanding of the problem.
LessWrong indeed isn’t epistemically rigorous enough to consistently generate important and correct rationality insights.[1] And I observe that it has mostly stopped trying.[2] But the reason it isn’t rigorous enough isn’t shortform. It’s two other, quite different things.
One is that users don’t consistently follow proper norms and standards of discourse that reliably result in genuine truthseeking. I have summarized the recurrent flaws here. These are far more basic mistakes than merely “shortform-induced laziness generating less thought-out takes,” so introducing shortform had little to do with it.
The other is far more fundamental, and basically impossible to resolve in the site’s current form. It’s the result of the structural feedback and reward loops embedded in the very notion of an online forum and “community,” and the net effect of introducing shortforms into this is negligible at best. The most compact descriptions I can point to of what I mean are Richard Ngo’s comments here and here, along with drossbucket’s comment here. Long story short, as Richard put it:
I wanted to register that I don’t like “babble and prune” as a model of intellectual development. I think intellectual development actually looks more like:
1. Babble
2. Prune
3. Extensive scholarship
4. More pruning
5. Distilling scholarship to form common knowledge
And that my main criticism is the lack of 3 and 5, not the lack of 2 or 4.
After spending a lot of time observing this site and how people interact with it, I have concluded that users generally find it just too effortful, time-consuming, and unfun to do steps 3 and 5 above consistently.[3] Even though those are precisely the most important steps that weed out the vast majority of incoherent, poorly thought-out, or just plain wrong ideas and allow the actually important insights to flourish.
If you want LW to stop “teetering on the edge of epistemic rigor,” as the meme says, you don’t need to get users to take shortforms and pad them out with a few paragraphs to make them into proper posts. You need something totally different. You basically need @Steven Byrnes-type analyses, like this one.[4] And you need them everywhere.
That perhaps shouldn’t be too surprising, since it seems too effortful for them to even write out or link to a single example illustrating what they mean when they use certain words. And that’s mere child’s play compared to actually reading papers (or books, or even blog posts), analyzing their ideas rigorously, and writing them up to clarify and distill them.
And I have observed before that not only are there way, way too few of those kinds of posts around, but also when they appear, they don’t get nearly the engagement and praise they deserve. No wonder they’re in such short supply!
Footnotes:
[2] Despite occasional half-hearted assertions to the contrary from the mods. See this and this for further discussion.
[3] Except for a few that have tried picking up the slack.
[4] I’m not holding my breath.
What is the point of scholarship, steps 3 and 5? If the point of 3 is to find the flaws in a seemingly attractive theory, that’s very commendable, but it’s not necessary to do the work yourself… you can go to a forum populated by experts, and ask for comments.
There’s also an issue where ideas that don’t appear defensible at first glance can get pruned prematurely. (Just because you can’t see how it could work...) Indeterminism, libertarian free will and moral realism are prominent examples here.
If the point of 3 is to find the flaws in a seemingly attractive theory, that’s very commendable, but it’s not necessary to do the work yourself… you can go to a forum populated by experts, and ask for comments.
And what forum is that? The point of Richard Ngo’s comment is that LW doesn’t do steps 3 and 5 either,[1] not frequently and reliably enough to be sufficient. And sadly, I’m not aware of any other forums that do this either. Particularly because the kinds of questions LW is interested in (rationality, AI, general sociological speculation) are not ones where established experts can directly point to existing literature that answers those questions clearly and unequivocally. Take something like Raemon’s attempts to do feedbackloop-first rationality as a representative example.
Doing steps 3 and 5 reliably seems to basically require a commitment similar to that of a full-time job.[2] And it requires a tremendous amount of already-existing expertise, and a fair bit of research taste, and a commitment to norms and principles of epistemic rationality, etc. All without pay and without the allure of increased reputation and publications on your CV that working in regular academia gives you.
[1] As a necessary, but surely not sufficient, condition.
If it’s philosophy, you can go to philosophyforums.com or philosophy.stackexchange.com or r/askphilosophy. If it’s physics, you can go to physicsforums.com...etc.
Particularly because the kinds of questions LW is interested in (rationality, AI, general sociological speculation) are not ones where established experts can directly point to existing literature that answers those questions clearly and unequivocally.
I was taking it for granted that you are unlikely to get definitive answers, and the exercise was more about avoiding known errors—“reinventing the wheel and making it square”.
Doing steps 3 and 5 reliably seems to basically require a commitment similar to that of a full-time job.[2]
What is the point of steps 3 and 5?
And it requires a tremendous amount of already-existing expertise, and a fair bit of research taste, and a commitment to norms and principles of epistemic rationality, etc.
Even if you are soliciting expert advice, not delving into primary sources?
Not even in comments
It’s possible for experts to turn up and offer unsolicited critique as well... but the recipient needs to listen to benefit.
It’s everything I ever wanted when I repeatedly argued it was the most important thing
I agree.