My own thoughts on LessWrong culture, specifically focused on things I personally don’t like about it (while acknowledging it does many things well). I say this as someone who cares a lot about epistemic rationality in my own thinking, and who aspires to be more rational and calibrated in a number of ways.
Broadly, I tend not to like many of the posts here that are not about AI. The main exceptions are posts that focus on objective reality, with specific, tightly focused arguments (eg).
I think many of the posts here tend to be overtheorized, with not enough effort spent on studying facts and categorizing empirical regularities about the world (in science, the difference between a “Theory” and a “Law”).
My premium of life post is an example of the type of post I wish other people would write more of.
Many of the commenters also seem to have background theories about the world that strike me as implausibly neat (eg a lot of folk evolutionary psychology, or a common belief that regulation drives everything).
Good epistemics is built on a scaffolding of facts, and I do not believe that many people on LessWrong spend enough effort checking whether their load-bearing facts are true.
Many of the posts here have a high verbal tilt; I think verbal explanations are good for explaining concepts you already understand very well to normal people, but verbal reasoning is not a reliable guide to discovering truth. Correspondingly, the posts tend to be lighter on statistics, data, and simple mathematical reasoning.
The community overall seems more tolerant of post-rationality and “woo” than I would’ve expected the standard-bearers of rationality to be.
The comments I like the most tend to be ones that are a) tightly focused, b) built around easy-to-understand factual or logical claims, c) aimed at important load-bearing elements of the original argument and/or points that are easy to address, and d) crafted so as not to elicit strong emotional reactions from either your debate partner or any onlookers.
Here are some comments of mine I like in this vein.
I think the EA Forum is epistemically better than LessWrong in some key ways, especially outside of highly politicized topics. Notably, there is a higher appreciation of facts and factual corrections.
Relatedly, this is why I don’t agree with broad generalizations about LW in general having “better epistemics.”
Finally, I dislike the arrogant, brash, confident tone of many posts on LessWrong. Plausibly, a lot of this is inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. This is not the experience of a typical poster on LessWrong, and I think it’s maladaptive for people to adopt Eliezer’s style and epistemic confidence in their own writing and thinking.
I realize that this quick take is quite hypocritical in that it displays the same flaws I criticized, as do some of my recent posts. I’m also drafting a post arguing that hypocrisy is not a major anti-desideratum, so at least I’m not meta-hypocritical about the whole thing.
My problem with the EA Forum (or really EA-style reasoning as I’ve seen it) is the overuse of overcomplicated modeling tools that are claimed to be based on hard data and statistics, when the amount and quality of that data is far too small and weak to “buy” such a complicated tool. So in some sense, perhaps, they move too far in the opposite direction. But I think the way EAs think about these things would be wrong, even directionally, for LessWrong to adopt (though not for people in general). (A toy numerical sketch of this point is at the end of this comment.)
I think this leads EAs (and likewise forecasters) to have a pretty big streetlight bias in their thinking; in particular, EAs seem like they should focus more on bottleneck-style reasoning (eg focusing on understanding & influencing a small number of key factors).
See here for how I think this concretely cashes out into different recommendations we’d give to LessWrong.
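As a rough illustration of the “too little data to buy a complicated tool” point, here is a minimal toy sketch, with made-up numbers of my own rather than anything drawn from an actual EA model: when you only have a handful of noisy observations, a flexible many-parameter model typically fits the sample better yet predicts held-out data worse than a simple one.

```python
# Toy sketch: a few noisy data points can't "buy" a many-parameter model.
# The true relationship is linear; we compare a straight-line fit against a
# degree-6 polynomial fit, both trained on just 8 observations.
import numpy as np

rng = np.random.default_rng(0)

def one_run():
    # 8 noisy training observations of a genuinely linear relationship
    x_train = rng.uniform(0, 10, size=8)
    y_train = 2.0 * x_train + rng.normal(0, 4.0, size=8)
    # A larger held-out set to measure how well each fit generalizes
    x_test = rng.uniform(0, 10, size=200)
    y_test = 2.0 * x_test + rng.normal(0, 4.0, size=200)

    errors = {}
    for degree in (1, 6):  # simple model vs. over-complicated model
        coeffs = np.polyfit(x_train, y_train, degree)
        errors[degree] = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return errors

runs = [one_run() for _ in range(500)]
for degree in (1, 6):
    median_mse = np.median([r[degree] for r in runs])
    print(f"degree-{degree} fit: median out-of-sample MSE {median_mse:.1f}")
# The degree-6 model's held-out error is typically much worse, even though
# it matches the 8 training points more closely than the straight line does.
```

The real models I have in mind are of course far richer than a polynomial fit; the sketch is only meant to illustrate the general statistical worry that extra parameters and model structure have to be paid for with data.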
@Mo Putera has asked for concrete examples of the “over-use of over-complicated modeling tools which are claimed to be based on hard data and statistics, but the amount and quality of that data is far too small & weak to ‘buy’ such a complicated tool.”
I think AI 2027 is a good example of this sort of thing. Similarly, the notorious Rethink Priorities welfare range estimates for animal welfare and, though I haven’t thought about it deeply enough to be confident, GiveWell’s famous giant spreadsheets (see the links in the last section) are the sort of thing I am very nervous about. I’ll also point to Ajeya Cotra’s bioanchors report.
Interestingly, I think this mirrors the debate over “hard” vs. “soft” obscurantism in the social sciences: hard obscurantism (as is common in old-school economics) relies on an over-focus on mathematical modeling and complicated equations built on scant data and debatable theory, while soft obscurantism (as is common in most of the social sciences and humanities, outside of econ and maybe modern psychology) relies on complicated verbal debates and dense jargon. I think my complaints about LW (outside of AI) mirror those about soft obscurantism, and your complaints about EA Forum-style math modeling mirror those about hard obscurantism.
To be clear I don’t think our critiques are at odds with each other.
In economics, the main solution over the last few decades appears mostly to have been to limit scope and turn to greater empiricism (“better data beats better theory”), though that of course has its own downsides (streetlight bias, less investigation of the most important issues, replication crises). I think my suggestion to focus more on data is helpful in that regard.
Could you say more about what you’re referring to? One of my criticisms of the community is how it’s often intolerant of things that pattern-match to “woo”, so I’m curious whether these overlap.
I came to ask something similar.
Could you (@Linch) provide an example or two of woo? Could you score the following examples:
How much woo/10 is the Beneath Psychology sequence? Or ‘Religion for Rationalists’? Or Symmetry Theory of Valence?
The Beneath Psychology sequence is too long for me to rate. Sorry!
Religion for Rationalists—very low, maybe 1/10? It just doesn’t seem like the type of thing that has an easy truth-value to it, which is frustrating. I definitely buy the theoretical argument that religion is instrumentally rational for many people[1], what’s lacking is empirics and/or models. But nothing in the post itself is woo-y.
Symmetry Theory of Valence -- 5-6/10? I dunno; I’ve looked into it a bit over the years, but it’s far from any of the things I’ve personally studied deeply. They trigger a bunch of red flags; however, I’d be surprised but not shocked if it turns out I’m completely wrong here. I know Scott (whose epistemics I broadly trust) and somebody else I know endorse them.
But tbc I’m not the arbiter of what is and is not woo lol.
And I’m open to the instrumental rationality being large enough that it even increases epistemic rationality. Analogy: if you’re a scientist who’s asked to believe a false thing to retain your funding, it might well be worth it even from a purely truth-seeking perspective, though of course it’s a dangerous path.
Totally. Asked only to get a better model of what you were pointing at.
And now my understanding is that we’re mostly aligned and this isn’t a deep disagreement about what’s valuable, just a labeling and/or style/standard of effort issue.
E.g. Symmetry Theory of Valence seems like the most cruxy example because it combines an above-average standard of effort and clarity of reasoning (“I believe X, because Y, which could be tested through Z”) with a whole bunch of things that I’d agree count as red flags by the duck-test standard.
I think we both agree insofar as we would give similar diagnoses, but we maybe disagree insofar as we would give different recommendations about what to change.
I would recommend LessWrongers read more history, do more formal math and physics, and make more mathematical arguments[1].
I would expect you to recommend that LessWrongers spend more time looking at statistics (in particular, Our World in Data), spend more time forecasting, and make more mathematical models.
Is this accurate?
This is not an exhaustive list. I also think LessWrongers should read more textbooks about pretty much everything.
I don’t feel strongly about what the specific solutions are. I think it’s easier to diagnose a problem than to propose a fix.
In particular, I worry about biases in proposing solutions that favor my background and things I’m good at.
I think the way modern physics is taught probably gives people an overly clean/neat picture of how most of the world works and how to figure out problems in the world, but this might be ameliorated by studying the history of physics and how people came to certain conclusions. Though again, this could easily be because I didn’t invest the points to learn much physics myself, so there might be major holes in my knowledge and my own epistemics.
I think looking at relevant statistics (including Our World in Data) is often good, though it depends on the specific questions you’re interested in investigating. Questions you should often ask yourself about any interesting discovery or theory you want to propose are something like “How can I cheaply gather more data?” and “Is the data already out there?” Some questions you might be interested in are OWID-shaped, but most probably will not be.
I found forecasting edifying for my own education and for improving my own epistemics, but I don’t know what percentage of LessWrongers currently forecast, and I don’t have a good sense of whether a lack of forecasting is what’s limiting LessWrongers. Forecasting, reading textbooks, reading papers, and reading high-quality blog posts all seem like plausible contenders for good uses of time.
Yeah, and I think if done well it’s well-received here, e.g. AdamShimi’s My Number 1 Epistemology Book Recommendation: Inventing Temperature or Ben Pace’s 12 interesting things I learned studying the discovery of nature’s laws. (It seems hard to do well, though; I’m certainly dissatisfied with my own writeup attempts.)
I would use a forum that awarded more upvotes to people with better scores on Manifold.
What does this mean?