I disagree with a lot of John’s sociological theories, but this is one I independently have fairly high credence in. I think it elegantly explains poor decisions by seemingly smart people like Putin, SBF, etc., as well as why dictators often perform poorly (outside of a few exceptions like LKY).
The other complaint I had about that segment is that I do not believe the microeconomics-informed reading of criminal punishment (as exemplified by Gary Becker’s work) has held up well.
I think it’s often given as an example of where microeconomics-informed reasoning has led policymakers astray: criminals are often bad at expected-value calculations, even intuitively, and certainty of punishment matters far more for deterrence than the expected cost of punishment. I don’t have a direct source for this, but I think it’s a common position among economists.
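To make the contrast concrete, here’s a minimal sketch in Python (the weighting function and all numbers are my own illustrative assumptions, not anything from Becker or the deterrence literature): two policies with identical expected punishment cost look identical to a risk-neutral Becker-style offender, but an offender who underweights small probabilities of getting caught is deterred far more by the high-certainty policy.

```python
# Toy illustration (mine, not Becker's): two punishment policies with the same
# expected cost p * severity can differ sharply in deterrence once the offender
# underweights small probabilities of being caught.

def becker_expected_cost(p_caught: float, severity: float) -> float:
    """Risk-neutral Becker model: deterrence depends only on p * severity."""
    return p_caught * severity

def perceived_cost(p_caught: float, severity: float, gamma: float = 1.5) -> float:
    """Offender who underweights low probabilities, via the crude (assumed)
    weighting w(p) = p**gamma with gamma > 1."""
    return (p_caught ** gamma) * severity

# Policy A: rare but harsh punishment. Policy B: likely but mild punishment.
policies = {"A (harsh, uncertain)": (0.05, 20.0),  # 5% chance of 20 units
            "B (mild, likely)": (0.50, 2.0)}       # 50% chance of 2 units

for name, (p, s) in policies.items():
    print(f"{name}: Becker cost = {becker_expected_cost(p, s):.2f}, "
          f"perceived cost = {perceived_cost(p, s):.2f}")

# Both policies have Becker cost 1.00, so the textbook model predicts equal
# deterrence; the probability-underweighting offender perceives B as ~3x as
# costly as A (0.71 vs 0.22), matching the "certainty beats severity" finding.
```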
The Beneath Psychology sequence is too long. Sorry!
Religion for Rationalists—very low, maybe 1/10? It just doesn’t seem like the type of thing that has an easy truth-value to it, which is frustrating. I definitely buy the theoretical argument that religion is instrumentally rational for many people[1]; what’s lacking is empirics and/or models. But nothing in the post itself is woo-y.
Symmetry Theory of Valence -- 5-6/10? I dunno, I’ve looked into it a bit over the years, but it’s far from any of the things I’ve personally deeply studied. They trigger a bunch of red flags; however, I’d be surprised but not shocked if it turns out I’m completely wrong here. I know Scott (whose epistemics I broadly trust) and somebody else I know endorse them.
But tbc I’m not the arbiter of what is and is not woo lol.
[1] And I’m open to the instrumental rationality being large enough that it even increases epistemic rationality. Analogy: if you’re a scientist who’s asked to believe a false thing to retain your funding, it might well be worth it even from a purely truth-seeking perspective, though of course it’s a dangerous path.
I don’t feel strongly about what the specific solutions are. I think it’s easier to diagnose a problem than to propose a fix.
In particular, I worry about biases in proposing solutions that favor my background and things I’m good at.
I think the way modern physics is taught probably gives people an overly clean/neat understanding of how most of the world works, and of how to figure out problems in the world, but this might be ameliorated by studying the history of physics and how people came to certain conclusions. Then again, this could easily be because I didn’t invest the points to learn physics that much myself, so there might be major holes in my own knowledge and epistemics.
I think looking at relevant statistics (including Our World In Data) is often good, though it depends on the specific questions you’re interested in investigating. Questions you should often ask yourself for any interesting discovery or theory you want to propose are something like “how can I cheaply gather more data?” and “is the data already out there?” Some questions you might be interested in are OWID-shaped, though most probably will not be.
I found forecasting edifying for my own education and for improving my own epistemics, but I don’t know what percentage of LessWrongers currently forecast, and I don’t have a good sense of whether its absence is limiting them. Forecasting, reading textbooks, reading papers, and reading high-quality blog posts all seem like plausible contenders for good uses of time.
My problem with the EA Forum (or really EA-style reasoning as I’ve seen it) is the overuse of overcomplicated modeling tools that are claimed to be based on hard data and statistics, when the amount and quality of that data are far too small and weak to “buy” such a complicated tool. So in some sense, perhaps, they move too far in the opposite direction.
Interestingly, I think this mirrors the debates over “hard” vs. “soft” obscurantism in the social sciences: hard obscurantism (as is common in old-school economics) relies on an over-focus on mathematical modeling and complicated equations built on scant data and debatable theory, while soft obscurantism (as is common in most of the social sciences and humanities, outside of econ and maybe modern psychology) relies on complicated verbal debates and dense jargon. I think my complaints about LW (outside of AI) mirror those about soft obscurantism, and your complaints about EA Forum-style math modeling mirror those about hard obscurantism.
To be clear I don’t think our critiques are at odds with each other.
In economics, the main solution over the last few decades appears mostly to have been to limit scope and turn to greater empiricism (“better data beats better theory”), though that of course has its own downsides (streetlight bias, less investigation into the more important issues, replication crises). I think my suggestion to focus more on data is helpful in that regard.
My own thoughts on LessWrong culture, specifically focused on things I personally don’t like about it (while acknowledging it does many things well). I say this as someone who cares a lot about epistemic rationality in my own thinking and aspires to be more rational and calibrated in a number of ways.
Broadly, I tend not to like many of the posts here that are not about AI. The main exceptions are posts that are focused on objective reality, with specific, tightly focused arguments (eg).
I think many of the posts here tend to be overtheorized, with not enough effort spent on studying facts and categorizing empirical regularities about the world (in science, the difference between a “Theory” and a “Law”).
My premium of life post is an example of the type of post I wish other people wrote more of.
Many of the commentators also seem to have background theories about the world that to me seem implausibly neat (eg a lot of folk evolutionary psychology, or a common belief that regulation drives everything).
Good epistemics is built on a scaffolding of facts, and I do not believe that many people on LessWrong spend enough effort checking whether their load-bearing facts are true.
Many of the posts here have a high verbal tilt; I think verbal explanations are good for explaining concepts you already understand very well to normal people, but verbal reasoning is not a reliable guide to discovering truth. Correspondingly, the posts tend to be light on statistics, data, and simple mathematical reasoning.
The community overall seems more tolerant of post-rationality and “woo” than I would’ve expected the standard-bearers of rationality to be.
The comments I like the most tend to be ones that are a) tightly focused, b) built on easy-to-understand factual or logical claims, c) focused on important load-bearing elements of the original argument and/or easy to address, and d) crafted so as not to elicit strong emotional reactions from either your debate partner or any onlookers.
I think the EA Forum is epistemically better than LessWrong in some key ways, especially outside of highly politicized topics. Notably, there is a higher appreciation of facts and factual corrections.
Relatedly, this is why I don’t agree with broad generalizations that LW in general has “better epistemics.”
Finally, I dislike the arrogant, brash, confident tone of many posts on LessWrong. Plausibly, a lot of this is inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. That is not the experience of the typical poster on LessWrong, and I think it’s maladaptive for people to adopt Eliezer’s style and epistemic confidence in their own writing and thinking.
I realize that this quick take is quite hypocritical in that it displays the same flaws I criticize, as do some of my recent posts. I’m also drafting a post arguing against hypocrisy being a major anti-desideratum, so at least I’m not meta-hypocritical about the whole thing.
Dario Amodei (Anthropic cofounder and CEO), Shane Legg (co-founder and Chief AGI Scientist of Google DeepMind), and others have numbers that are not plausibly construed as “very low.”
Not sure why this article was downvoted so much! I think it’s better researched, more careful, and the arguments are overall substantially better than the “Case For Trump” published here last year. But that one had net 12 karma and (as of my comment) this post is sitting at −21.
Wrote a review of Ted Chiang focusing on what I think makes him unique:
he imagines entirely different principles of science for his science fiction and carefully treats them step by step
technology enhances his characters’ lives and their humanity, rather than serving as a torment nexus.
he treats philosophical problems as lived experiences rather than intellectual exercises
In (attempted) blinded trials, my review is consistently ranked #1 by our AI overlords, so check out the one book review that all the LLMs are raving about!!!
It’s only ~2x the length of the “EA Case for Trump” argument that was passed around last year, which I at least had very little trouble responding to point-by-point.
Wow I’m a moron.
This just seems so obviously disanalogous given the context of the rest of her monologue and the play!
I don’t know how to make progress on this dialectic. I think you’re obviously wrong, and either “Shakespeare wanted to sacrifice reasonableness for a better sound in that line” or “Juliet was supposed to be going a little bit crazy in that scene” is a more reasonable hypothesis than a galaxy-brained take that the whole soliloquy actually makes sense on a literal level. You think I’m obviously wrong. I don’t know if there’s enough textual evidence to differentiate our hypotheses, given that you somehow update in the opposite direction on what I think of as overwhelming evidence for my position, and think I’m being unreasonable for not seeing your side. The question also doesn’t matter. The only real piece of empirical evidence I can imagine updating people here is historical evidence, actors’ instructions, etc., which I doubt we have access to. I’d love for there to be a good answer to this conundrum, but I don’t see it. So I think I’m tapping out.
My impression is that genetic variability in intelligence is much closer to that in strength than to that in height!
Why do you think intelligence is much more rigid? I don’t think this is true, especially at the lower end.
I also think people’s popular conceptions of strength training are swamped by “beginner gains,” which I expect would also apply to intelligence if we didn’t have a public schooling system.
O Romeo, Romeo, wherefore art thou Romeo?
Deny thy father and refuse thy name.
Or if thou wilt not, be but sworn my love
And I’ll no longer be a Capulet.
’Tis but thy name that is my enemy:
Thou art thyself, though not a Montague.
What’s Montague? It is nor hand nor foot
Nor arm nor face nor any other part
Belonging to a man. O be some other name.
What’s in a name? That which we call a rose
By any other name would smell as sweet;
So Romeo would, were he not Romeo call’d,
Retain that dear perfection which he owes
Without that title. Romeo, doff thy name,
And for that name, which is no part of thee,
Take all myself.
This is a common response, but I think it’s implausible on a direct reading of the text.
This is a common response, but I don’t think it makes sense if you read the rest of the soliloquy, much of which is specifically a meditation on the nature of names (“that which we call a rose / By any other name would smell as sweet”).
Shakespeare’s “O Romeo, Romeo, wherefore art thou Romeo?” doesn’t actually make any sense.
(One quick point of confusion: “Wherefore” in Shakespeare’s time means “Why”, not “Where?” In modern terms, it might be translated as “Romeo, Romeo, why you gotta be Romeo, yo?”)
But think a bit more about the context of her lament: Juliet’s upset that her crush, Romeo, comes from an enemy family, the Montagues. But why would she be upset that he’s named Romeo? Juliet’s problem with the “Romeo Montague” name isn’t (or shouldn’t be) the “Romeo” part; it’s clearly the “Montague”!
I’ve pointed this out multiple times before, and as far as I know nobody has proffered a convincing explanation.
If you agree with my analysis, there are several interesting points:
There is a common misconception about the meaning of “wherefore”
Given that, there’s also common knowledge that this is a common misconception
After the misconception is fixed and people know that Shakespeare meant “why,” the line STILL doesn’t make sense
Very few people appear to notice that it doesn’t make sense.
I believe points 2-4 are not unrelated to each other! I think a lot of people subconsciously go through this process:
Dumb people think that line in Shakespeare’s dialogue doesn’t make sense
I do not wish to be dumb
Therefore I believe that line in Shakespeare’s dialogue makes sense.
I think the generalization of this phenomenon is a gaping hole in intellectual spaces. At least, if you care about truth-seeking! Just because a position is commonly held by stupid people doesn’t mean it’s false! Further, just because stupid people believe Y for bad reason X doesn’t mean you ought to turn off your brain instead of independently evaluating whether Y is true!
Put another way, people should have much higher affordance to “dare to be stupid.”
Yeah that’s a reasonable perspective. I think my issue is just that many/enough people mimic the wrong things/people, and there isn’t enough self-correction, so it’s hard to go up higher in tiers as a result.
https://linch.substack.com/p/the-puzzle-of-war
I wrote about Fearon (1995)’s puzzle: reasonable countries, under most realistic circumstances, always have better options than to go to war. Yet wars still happen. Why?
I discuss four different explanations, including two of Fearon’s (private information with incentives to mislead, and commitment problems) and two others (irrational decisionmakers, and decisionmakers who are game-theoretically rational but have unreasonable and/or destructive preferences).
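For concreteness, here’s a minimal sketch of Fearon’s baseline bargaining logic in Python (the specific values of p, c1, and c2 are my own illustrative assumptions, not from the paper): because war destroys value, there is always a nonempty range of peaceful splits that both rational states prefer to fighting.

```python
# Toy version of Fearon (1995)'s bargaining setup (illustrative numbers are mine).
# Two states dispute a pie of value 1. State 1 wins a war with probability p;
# fighting costs the sides c1 and c2. Expected war payoffs:
#   state 1: p - c1        state 2: (1 - p) - c2
# Any peaceful share x for state 1 with p - c1 <= x <= p + c2 beats war for
# BOTH sides, and that interval is nonempty whenever c1 + c2 > 0.

def bargaining_range(p: float, c1: float, c2: float) -> tuple[float, float]:
    """Return the interval of shares for state 1 that both states prefer to war."""
    return (p - c1, p + c2)

p, c1, c2 = 0.6, 0.1, 0.15  # state 1 wins 60% of the time; war is costly for both
low, high = bargaining_range(p, c1, c2)
print(f"Any settlement giving state 1 between {low:.2f} and {high:.2f} of the pie "
      f"dominates war for both sides.")  # here: 0.50 to 0.75

# The puzzle: this range is nonempty for ANY p as long as war has positive costs,
# so fully informed, rational, risk-neutral states should never fight. Fearon's
# explanations (private information plus incentives to misrepresent, commitment
# problems) are about why real states fail to find or stick to such bargains.
```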