I’m doing Inkhaven! If you’re interested in reading my daily content starting November 1st, consider subscribing to inchpin.substack.com!
Linch
Do you want to come up with some other “obvious exceptions” to your “Nobody says X” claim?
Tbh, I find this comment kinda bizarre.
Popular belief analogizes internet arguments to pig-wrestling: “Never wrestle with a pig because you both get dirty and the pig likes it.” But does the pig, in fact, like it? I set out to investigate.
Nobody writes a story whose moral is that you should be selfish and ignore the greater good,
This seems obviously false. Ayn Rand comes to mind as the most iconic example, but e.g. Camus’ The Stranger also had this as a major theme, as do various self-help books. It is also the implicit moral of Judith Jarvis Thomson’s violinist thought experiment. My impression from reading summaries is that it’s also a common theme in early 20th century Japanese novels (though I don’t like them, so I’ve never read one myself).
[Linkpost] A Field Guide to Writing Styles
I agree that if you model people as lying along some Pareto frontier from perfectly selfish to perfectly (direct) utilitarian, then at no point on that frontier does offsetting ever make sense. However, I think most people have, and endorse having, other moral goals.
For example, a lot of the intuition for offsetting may come from believing you want to be the type of person who internalizes the (large, predictably negative) externalities of your actions, so offsetting comes from your consumption budget rather than your altruism budget.
Though again, I agree that perfect utilitarians, or people aspiring to be perfect utilitarians, should not offset. And this generalizes also to people whose idealized behavior is best described as a linear combination of perfectly utilitarian and perfectly selfish.
I think this post underestimates the value of practicing and thinking in classic style, even if you ultimately choose to discard it, or not to write serious posts in that style. Because writing in classic style is so unnatural to most LessWrong dwellers, forcing yourself to write that way, unironically, inhabiting the style on its own terms, and especially doing it in a way that doesn’t leave you unsatisfied in the end, is a great way to grow and improve as a writer and to understand the strengths and weaknesses of your own style of writing.
I think most people shouldn’t write in classic style, for various reasons. But I have a different take here. I think writing in classic style is just very hard for most people, for a number of subtle reasons. A central tenet of classic style is presentation: the writing should look smooth and effortless. But this effortlessness is almost always a mirage, like an Instagram model who spends three hours in front of a mirror to apply the “just woke up”, au naturel, “no makeup makeup” look. Of all the (mostly) internet writers I read, only two jump out to me as writing in mostly classic style: Paul Graham and Ted Chiang. I don’t think it’s a coincidence that they are both very unprolific, and that both talk about how hard it is to write well and how many edits they go through.
Below is a short coda I wrote in classic style, for a recent article of mine.
Intellectual jokes, at their core, are jokes that teach you new ideas, or help you reconceive existing ideas in a new way.
My favorite forms of intellectual jokes/humor work on multiple levels: they’re accessible to those who just get the surface joke, but reward deeper knowledge with additional layers of meaning. In some of the best examples, the connection to insight is itself subtle, and not highlighted by a direct reference to the relevant academic fields.
There are two ways attempts at intellectual humor can fail: they can fail to be intellectual, or they can fail to be funny. Among frequently cited attempts at “intellectual” humor that fail to be intellectual, there are again two common forms: 1) they are about intellectuals as people, rather than about ideas, or 2) they’re about jargon, not ideas.
In both cases, the joke isn’t intellectual humor so much as “smart people jokes”: the humor rests on stereotypes, in-group solidarity, and the feeling of smartness that you get when you get a joke, but the joke does not actually teach you about new ideas, or help you reconceive of existing ideas in a new way.
Two examples come to mind:
Q: How do you tell if a mathematician is extroverted?
A: When he’s talking to you, he stares at your shoes!
And
Q: What’s purple and commutes?
A: An Abelian grape.
If you were in my undergrad abstract algebra classes, the above jokes were the shit. For 20-year-old math majors, they were hilarious. Nonetheless, they are not, by any reasonable definition of the term, intellectual.
Of course, a more common failure mode is that the jokes simply fail to be funny. I will not offer a treatise on what makes a joke funny. All unfunny jokes are alike in their unfunniness, but each funny joke is funny in its own way.
I’m currently drafting a post on different mature writing styles, by first inhabiting the respective styles and then evaluating the pros and cons, especially in the context of internet writing. It’s a pretty hard post to write, and I suspect it’d be a lot less popular in the end than the Chiang review or many LessWrong posts, but I hope it’d be more helpful.
I’m glad you enjoyed it!
It Never Worked Before: Nine Intellectual Jokes
The main reason I disagree with both this comment and the OP is that you both have the underlying assumption that we are in a nadir (local nadir?) of connectedness-with-reality, whereas from my read of history I see no evidence of this, and indeed plenty of evidence against.
People used to be confused about all sorts of things, including, but not limited to, the supernatural, the causes of disease, causality itself, the capabilities of women, whether children can have conscious experiences, and so forth.
I think we’ve gotten more reasonable about almost everything, with a few minor exceptions that people seem to like highlighting (I assume in part because they’re so rare).
The past is a foreign place, and mostly not a pleasant one.
In both programming and mathematics, there’s a sense that only 3 numbers need no justification (0, 1, infinity). Everything else is messier.
Unfortunately something similar is true for arguments as well. This creates a problem.
Much of the time, you want to argue that people underrate X (or overrate X). Or that people should be more Y (or less Y).
For example, people might underrate human rationality. Or overrate credentials. Or underrate near-term AI risks. Or overrate vegan food. Or underrate the case for moral realism. Or overrate Palestine’s claims. Or underrate Kendrick Lamar. (These are all real discussions I’ve had).
Much of the time, if a writer thinks their readers are underrating X, they’ll make an argument in favor of X. (Sounds obvious, I know).
But X and Y are usually not precise things that you can measure, never mind assign a specific value to.
So if a writer argues for X, usually they don’t have a good sense of what value the reader assigns X (in part because of a lack of good statistics, and in part because a specific reader is a specific person with their own idiosyncratic views). Nor does a writer have a precise sense of what the optimal value of X ought to be, just that it’s higher (or lower) than what others think.
This creates major problems for both communication and clarity of thought!
One solution of course is to be an extremist. But this is a bad solution unless you actually think maximal (or minimal) X is good.
Sometimes either the structure of reality or the structure of our disagreements creates natural midpoints around which we can explicate our disagreements. For example, in my debate with BB, a natural midpoint is (we believe[1]) whether bees have net positive or net negative welfare: “0” is a natural midpoint. In my second post on the “rising premium of life”, I can naturally contrast my preferred hypothesis (the premium of life is rising) against the null hypothesis that the premium of life is mostly unchanged, or against the alternate hypothesis that it’s falling.
But reality often doesn’t give us such shortcuts! What are natural midpoints to argue for in terms of appropriate levels of credentialism? Or appropriate faith in human rationality? Or how much we should like Kendrick Lamar?
I don’t want to give people the illusion of an answer here; I’m just presenting the problem as-is.
[1] This is disputed, see here.
Sounds right to me too but it’s an empirical experiment that I’d be keen on people trying!
https://linch.substack.com/p/the-puzzle-of-war
I wrote about Fearon (1995)’s puzzle: reasonable countries, under most realistic circumstances, always have better options than to go to war. Yet wars still happen. Why?
I discuss 4 different explanations, including 2 of Fearon’s (private information with incentives to mislead, and commitment problems) and 2 others (irrational decisionmakers, and decisionmakers that are game-theoretically rational but have unreasonable and/or destructive preferences).
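For readers who want the core of Fearon’s argument, here is a minimal sketch in Python. It’s my own illustration, not from the post; the function name and numbers are made up. The point is just that when war destroys value, there is always a range of peaceful splits both sides prefer to fighting, and the 4 explanations can be read as reasons the two sides fail to find or stick to a point in that range.

```python
# A toy illustration of Fearon's bargaining range (my own sketch, not from the post).
# Country A wins a war with probability p and pays cost c_a; B pays cost c_b.
# A's expected war payoff is p - c_a, and B's is (1 - p) - c_b, so any peaceful
# split x (A's share of the pie) with p - c_a <= x <= p + c_b leaves both sides
# at least as well off as fighting. Because c_a + c_b > 0, this range is never empty.

def bargaining_range(p: float, c_a: float, c_b: float) -> tuple[float, float]:
    """Return the interval of peaceful splits (A's share) both sides weakly prefer to war."""
    low = p - c_a     # the smallest share A would accept instead of fighting
    high = p + c_b    # the largest share B would concede instead of fighting
    return low, high

if __name__ == "__main__":
    low, high = bargaining_range(p=0.6, c_a=0.1, c_b=0.15)  # made-up numbers
    print(f"Any split giving A between {low:.2f} and {high:.2f} beats war for both sides.")
```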
I disagree with a lot of John’s sociological theories, but this is one I independently have fairly high credence in. I think it elegantly explains poor decisions by seemingly smart people like Putin, SBF, etc, as well as why dictators often perform poorly (outside of a few exceptions like LKY).
The other complaint I had about that segment is that I do not believe the microeconomics-informed reading of criminal punishment (as exemplified by Gary Becker’s work) has held up well.
I think it’s often given as an example of where microeconomics-informed reasoning has led policymakers astray: criminals are often bad at expected value calculations, even intuitively, and certainty of punishment matters far more than the expected cost of punishment. I don’t have a direct source for this, but I think it’s a common position among economists.
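To make the contrast concrete, here is a small sketch of the textbook Becker-style arithmetic being criticized; the numbers are invented for illustration.

```python
# Toy Becker-style deterrence arithmetic (my own illustration; numbers are made up).
# In the textbook model, the expected cost of crime is the probability of being
# caught times the severity of punishment, so a rare-but-harsh regime and a
# likely-but-mild regime with the same product should deter equally.
# The critique above is that, empirically, certainty of punishment matters far
# more than this multiplication implies.

def expected_punishment(p_caught: float, severity_years: float) -> float:
    """Expected cost of punishment under the simple expected-value model."""
    return p_caught * severity_years

print(expected_punishment(p_caught=0.05, severity_years=20.0))  # rare but harsh  -> 1.0
print(expected_punishment(p_caught=0.50, severity_years=2.0))   # likely but mild -> 1.0
```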
The Beneath Psychology sequence is too long. Sorry!
Religion for Rationalists -- very low, maybe 1/10? It just doesn’t seem like the type of thing that has an easy truth-value to it, which is frustrating. I definitely buy the theoretical argument that religion is instrumentally rational for many people[1]; what’s lacking is empirics and/or models. But nothing in the post itself is woo-y.
Symmetry Theory of Valence -- 5-6/10? I dunno, I’ve looked into it a bit over the years, but it’s far from any of the things I’ve personally deeply studied. They trigger a bunch of red flags; however, I’d be surprised but not shocked if it turns out I’m completely wrong here. I know Scott (whose epistemics I broadly trust) and somebody else I know endorse them.
But tbc I’m not the arbiter of what is and is not woo lol.
[1] And I’m open to the instrumental rationality being large enough that it even increases epistemic rationality. Analogy: if you’re a scientist who’s asked to believe a false thing to retain your funding, it might well be worth it even from a purely truth-seeking perspective, though of course it’s a dangerous path.
I don’t feel strongly about what the specific solutions are. I think it’s easier to diagnose a problem than to propose a fix.
In particular, I worry about biases in proposing solutions that favor my background and things I’m good at.
I think the way modern physics is taught probably gives people an overly clean/neat understanding of how most of the world works, and of how to figure out problems in the world, but this might be ameliorated by studying the history of physics and how people came to certain conclusions. Though again, this could easily be because I didn’t invest the points to learn physics that much myself, so there might be major holes in my knowledge and my own epistemics.
I think looking at relevant statistics (including Our World In Data) is often good, though it depends on the specific questions you’re interested in investigating. The questions you should often ask yourself for any interesting discovery or theory you want to propose are something like “how can I cheaply gather more data?” and “is the data already out there?” Some questions you might be interested in are OWID-shaped, and most probably will not be.
I found forecasting edifying for my own education and improving my own epistemics, but I don’t know what percentage of LessWrongers currently forecast, and I don’t have a good sense of whether it’s limiting LessWrongers. Forecasting/reading textbooks/reading papers/reading high-quality blogposts all seem like plausible contenders for good uses of time.
My problems with the EA forum (or really EA-style reasoning as I’ve seen it) are the over-use of over-complicated modeling tools which are claimed to be based on hard data and statistics, but the amount and quality of that data is far too small & weak to “buy” such a complicated tool. So in some sense, perhaps, they move too far in the opposite direction.
Interestingly, I think this mirrors the debates over “hard” vs “soft” obscurantism in the social sciences: hard obscurantism (as is common in old-school economics) relies on an over-focus on mathematical modeling and complicated equations based on scant data and debatable theory, while soft obscurantism (as is common in most of the social sciences and humanities, outside of econ and maybe modern psychology) relies on complicated verbal debates and dense jargon. I think my complaints about LW (outside of AI) mirror those about soft obscurantism, and your complaints about EA Forum-style math modeling mirror those about hard obscurantism.
To be clear I don’t think our critiques are at odds with each other.
In economics, the main solution over the last few decades appears mostly to have been to limit scope and turn to greater empiricism (“better data beats better theory”), though that of course has its own downsides (streetlight bias, less investigation into the more important issues, replication crises). I think my suggestion to focus more on data is helpful in that regard.
My own thoughts on LessWrong culture, specifically focused on things I personally don’t like about it (while acknowledging it does many things well). I say this as someone who cares a lot about epistemic rationality in my own thinking, and who aspires to be more rational and calibrated in a number of ways.
Broadly, I tend not to like many of the posts here that are not about AI. The main exceptions are posts that are focused on objective reality, with specific, tightly focused arguments (eg).
I think many of the posts here tend to be overtheorized, with not enough effort spent on studying facts and categorizing empirical regularities about the world (in science, the difference between a “Theory” and a “Law”).
My premium of life post is an example of the type of post I wish other people wrote more of.
Many of the commentators also seem to have background theories about the world that to me seem implausibly neat (eg a lot of folk evolutionary psychology, or a common belief that regulation drives everything).
Good epistemics is built on a scaffolding of facts, and I do not believe that many people on LessWrong spend enough effort checking whether their load-bearing facts are true.
Many of the posts here have a high verbal tilt; I think verbal explanations are good for explaining concepts you already understand very well to normal people, but verbal reasoning is not a reliable guide to discovering truth. Correspondingly, the posts tend to be light on statistics, data, and simple mathematical reasoning.
The community overall seems more tolerant of post-rationality and “woo” than I would’ve expected the standard-bearers of rationality to be.
The comments I like the most tend to be ones that are a) tightly focused, b) composed of easy-to-understand factual or logical claims, c) focused on important load-bearing elements of the original arguments and/or easy to address, and d) crafted in a way that doesn’t elicit strong emotional reactions from either your debate partner or any onlookers.
I think the EA Forum is epistemically better than LessWrong in some key ways, especially outside of highly politicized topics. Notably, there is a higher appreciation of facts and factual corrections.
Relatedly, this is why I don’t agree with broad generalizations that LW in general has “better epistemics.”
Finally, I dislike the arrogant, brash, confident tone of many posts on LessWrong. Plausibly, a lot of this is inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. This is not the experience of a typical poster on LessWrong, and I think it’s maladaptive for people to adopt Eliezer’s style and epistemic confidence in their own writing and thinking.
I realize that this quick take is quite hypocritical in that it displays the same flaws I criticized, as do some of my recent posts. I’m also drafting a post arguing against hypocrisy being a major anti-desideratum, so at least I’m not meta-hypocritical about the whole thing.
I’ll just hop on the bandwagon and say that I’ll be posting my thoughts over at inchpin.substack.com!