Yep, I agree with this. I guess our main disagreement is whether the “low epistemic standards” framing is a useful way to shape that energy. I think it is, because it’ll push people towards realising how little evidence they actually have for many plausible-seeming hypotheses on this website.
A housemate of mine said to me they think LW has a lot of breadth, but could benefit from more depth.
I think in general when we do intellectual work we have excellent epistemic standards, capable of listening to all sorts of evidence that other communities and fields would throw out, and listening to subtler evidence than most scientists (“faster than science”), but that our level of coordination and depth is often low. “LessWrongers should collaborate more and go into more depth in fleshing out their ideas” sounds more true to me than “LessWrongers have very low epistemic standards”.
“In general when we do intellectual work we have excellent epistemic standards, capable of listening to all sorts of evidence that other communities and fields would throw out, and listening to subtler evidence than most scientists (‘faster than science’)”
“Being more open-minded about what evidence to listen to” seems like a way in which we have lower epistemic standards than scientists, and also one that’s beneficial. It doesn’t rebut my claim that there are some ways in which we have lower epistemic standards than many academic communities, and that this is harmful.
In particular, the relevant question for me is: why doesn’t LW have more depth? Sure, more depth requires more work, but on the timeframe of several years, and hundreds or thousands of contributors, it seems viable. And I’m proposing, as a hypothesis, that LW doesn’t have enough depth because people don’t care enough about depth—they’re willing to accept ideas even before they’ve been explored in depth. If this explanation is correct, then it seems accurate to call it a problem with our epistemic standards—specifically, the standard of requiring (and rewarding) deep investigation and scholarship.
“LW doesn’t have enough depth because people don’t care enough about depth—they’re willing to accept ideas even before they’ve been explored in depth. If this explanation is correct, then it seems accurate to call it a problem with our epistemic standards—specifically, the standard of requiring (and rewarding) deep investigation and scholarship.”
Your solution to the “willingness to accept ideas even before they’ve been explored in depth” problem is to explore ideas in more depth. But another solution is to accept fewer ideas, or hold them much more provisionally.
I’m a proponent of the second approach because:

I suspect even academia doesn’t hold ideas as provisionally as it should. See Hamming on expertise: https://forum.effectivealtruism.org/posts/mG6mckPHAisEbtKv5/should-you-familiarize-yourself-with-the-literature-before?commentId=SaXXQXLfQBwJc9ZaK

I suspect trying to browbeat people into exploring ideas in more depth works against the grain of an online forum as an institution. Browbeating works in academia because your career is at stake, but in an online forum it just hurts intrinsic motivation and cuts down on forum use (the forum runs on what Clay Shirky called “cognitive surplus”, essentially a term for people’s spare time and motivation). I’d say one big problem with LW 1.0 that LW 2.0 had to solve before flourishing was that people felt too browbeaten to post much of anything.
If we accept fewer ideas / hold them much more provisionally, but provide a clear path to having an idea be widely held as true, that creates an incentive for people to try & jump through hoops—and this incentive is a positive one, not a punishment-driven browbeating incentive.
Maybe part of the issue is that on LW, peer review generally happens in the comments after you publish, not before. So there’s no publication carrot to offer in exchange for overcoming the objections of peer reviewers.
“If we accept fewer ideas / hold them much more provisionally, but provide a clear path to having an idea be widely held as true, that creates an incentive for people to try & jump through hoops—and this incentive is a positive one, not a punishment-driven browbeating incentive.”
Hmm, it sounds like we agree on the solution but are emphasising different parts of it. For me, the question is: who’s this “we” that should accept fewer ideas? It’s the set of people who agree with my argument that you shouldn’t believe things which haven’t been fleshed out very much. But the easiest way to add people to that set is just to make the argument, which is what I’ve done. Specifically, note that I’m not criticising anyone for producing posts that are short and speculative: I’m criticising the people who update too much on those posts.
Fair enough. I’m reminded of a time when someone summarized one of my posts as a definitive argument against some idea X, and I thought to myself, “even I don’t think my post definitively settles this issue”, haha.
Yeah, this is roughly how I think about it.

I do think right now LessWrong should lean more in the direction Richard is suggesting – I think it was essential to establish better Babble procedures, but now we’re doing well enough on that front that I think setting clearer expectations for how the eventual pruning works is reasonable.
I wanted to register that I don’t like “babble and prune” as a model of intellectual development. I think intellectual development actually looks more like:
1. Babble
2. Prune
3. Extensive scholarship
4. More pruning
5. Distilling scholarship to form common knowledge
And that my main criticism is the lack of 3 and 5, not the lack of 2 or 4.
I also note that: a) these steps get monotonically harder, so that focusing on the first two misses *almost all* the work; b) maybe I’m being too harsh on the babble and prune framework because it’s so thematically appropriate for me to dunk on it here; I’m not sure if your use of the terminology actually reveals a substantive disagreement.
I basically agree with your 5-step model (I at least agree it’s a more accurate description than Babble and Prune, which I just meant as rough shorthand). I’d add things like “original research/empiricism” or “more rigorous theorizing” to the “Extensive Scholarship” step.
I see the LW Review as basically the first of (what I agree should essentially be at least) a 5-step process. It’s adding a stronger Step 2, and a bit of Step 5 (at least some people chose to rewrite their posts to be clearer and respond to criticism).
...
Currently, we do get non-zero Extensive Scholarship and Original Empiricism. (Kaj’s Multi-Agent Models of Mind seems like it includes real scholarship. Scott Alexander / Eli Tyre and Bucky’s exploration into Birth Order Effects seemed like real empiricism). Not nearly as much as I’d like.
But John’s comment elsethread seems significant:

“If the cost of evaluating a hypothesis is high, and hypotheses are cheap to generate, I would like to generate a great deal before selecting one to evaluate.”

This reminded me of a couple posts in the 2018 Review, Local Validity as Key to Sanity and Civilization, and Is Clickbait Destroying Our General Intelligence?. Both of those seemed like “sure, interesting hypothesis. Is it real tho?”

During the Review I created a followup “How would we check if Mathematicians are Generally More Law Abiding?” question, trying to move the question from Stage 2 to 3. I didn’t get much serious response, probably because, well, it was a much harder question.
But, honestly… I’m not sure it’s actually a question that was worth asking. I’d like to know if Eliezer’s hypothesis about mathematicians is true, but I’m not sure it ranks near the top of questions I’d want people to put serious effort into answering.
I do want LessWrong to be able to follow up Good Hypotheses with Actual Research, but it’s not obvious which questions are worth answering. OpenPhil et al. are paying for some types of answers, I think usually by hiring researchers full time. It’s not quite clear what the right role is for LW to play in the ecosystem.
All else equal, the harder something is, the less we should do it.
My quick take is that writing lit reviews/textbooks is a comparative disadvantage of LW relative to the mainstream academic establishment.
In terms of producing reliable knowledge… if people actually care about whether something is true, they can always offer a cash prize for the best counterargument (which could of course constitute citation of academic research). The fact that people aren’t doing this suggests to me that for most claims on LW, there isn’t any (reasonably rich) person who cares deeply re: whether the claim is true. I’m a little wary of putting a lot of effort into supply if there is an absence of demand.
(I guess the counterargument is that accurate knowledge is a public good, so an individual’s willingness to pay doesn’t get you a complete picture of the value accurate knowledge brings. Maybe what we need is a way to crowdfund bounties for the best argument related to something.)
(I agree that LW authors would ideally engage more with each other and academic literature on the margin.)
I’ve been thinking about the idea of “social rationality” lately, and this is related. We do so much here in the way of training individual rationality—the inputs, functions, and outputs of a single human mind. But if truth is a product, then getting human minds well-coordinated to produce it might be much more important than training them to be individually stronger. Just as assembly line production is much more effective in producing almost anything than teaching each worker to be faster in assembling a complete product by themselves.
My guess is that this could be effective not only in producing useful products, but also in overcoming biases. Imagine you took 5 separate LWers and asked them to create a unified consensus response to a given article. My guess is that they’d learn more through that collective effort, and produce a more useful response, than if they spent the same amount of time individually evaluating the article and posting their separate replies.
Of course, one of the reasons we don’t do that so much is that coordination is an up-front investment and is unfamiliar. Figuring out social technology to make it easier to participate in might be a great project for LW.
There’s been a fair amount of discussion of that sort of thing here: https://www.lesswrong.com/tag/group-rationality

There are also groups outside LW thinking about social technology, such as RadicalxChange.
“Imagine you took 5 separate LWers and asked them to create a unified consensus response to a given article. My guess is that they’d learn more through that collective effort, and produce a more useful response, than if they spent the same amount of time individually evaluating the article and posting their separate replies.”
I’m not sure. If you put those 5 LWers together, I think there’s a good chance that the highest-status person speaks first, the others anchor on what they say, and it effectively ends up being like a group project for school with the highest-status person in charge. Some related links.
That’s definitely a concern too! I imagine such groups forming among people who already share a basic common view and collaborate to investigate it more deeply. That way, any status-anchoring effects are mitigated.
Alternatively, it could be an adversarial collaboration. For me personally, some of the SSC essays in this format have led me to change my mind in a lasting way.
“they’re willing to accept ideas even before they’ve been explored in depth”
People also reject ideas before they’ve been explored in depth. I’ve tried to discuss similar issues with LW before, but the basic response was roughly “we like chaos where no one pays attention to whether an argument has ever been answered by anyone; we all just do our own thing with no attempt at comprehensiveness or organizing who does what; having organized leadership of any sort, or anyone who is responsible for anything, would be irrational” (plus some suggestions that I’m low social status and that therefore I personally deserve to be ignored. There were also suggestions – phrased rather differently but amounting to this – that LW will listen more if published ideas are rewritten, not to improve on any flaws, but so that the new versions can be published at LW before anywhere else, because the LW community’s attention allocation is highly biased towards that).
I feel somewhat inclined to wrap up this thread at some point, even while there’s more to say. We can continue if you like and have something specific or strong you’d like to ask, but otherwise I’ll pause here.
In order to gain the motivation to do better, you have to realise that what you are doing isn’t adequate, and that realisation is unlikely to happen if you are mostly communicating with other people who think everything is OK.
LessWrong is competing against philosophy as well as science, and philosophy has broader criteria of evidence still. In fact, LessWrongers are often frustrated that mainstream philosophy takes topics such as dualism or theism seriously... even though there’s an abundance of Bayesian evidence for them.