7 Vicious Vices of Rationalists
Vices aren’t behaviors that one should never do. Rather, vices are behaviors that are fine and pleasurable to do in moderation, but tempting to do in excess. The classical vices are actually good in part. A moderate amount of gluttony is just eating food, which is important. A moderate amount of envy is just “wanting things”, which is a motivator of much of our economy.
What are some things that rationalists are wont to do, and often to good effect, but that can grow pathological?
1. Contrarianism
There are a whole host of unaligned forces producing the arguments and positions you hear. People often hold beliefs out of convenience, defend positions that they are aligned with politically, or just don’t give much thought to what they’re saying one way or another.
A good way to find out whether people have any good reasons for their positions is to take a contrarian stance, and to seek the best arguments for unpopular positions. This also helps you to explore arguments around positions that others aren’t investigating.
However, this can be taken to the extreme.
While it is hard to know for sure what is going on inside others’ heads, I know I have taken positions simply by aiming to disagree, rather than by thinking for myself, and I strongly suspect others of blocking conversations and decisions not out of genuine disagreement, but from reliably executing a contrarian heuristic.
I have seen people take what I consider to be ludicrous positions in order to avoid losing face, or perhaps to assert themselves in a conversation as having their own unique position, or out of a fear of orthodoxy or group-think (in response to a group appearing to make an assumption and move on).
Contrarianism is a healthy habit, but it mustn’t replace thinking, or prevent one from being able to come to agreement.
2. Pedantry
It’s great to notice when sentences are not literally true. When a speaker lets an assumption fly, especially an assumption that someone else in the room cares about, that person can take a little effort to correct it.
“You might not think that this minor rephrase or restatement matters, but to be pedantic it is at least technically inaccurate, so please let’s correct it.”
Hewing toward your sentences being technically true, and your arguments being locally valid, makes it far harder for anyone—your allies, your enemies, your fellow countrymen—to fool themselves or others by saying things that sound true but aren’t.
Yet we cannot always achieve maximal precision.
When people say a lot of sentences, there will commonly be irrelevant and unimportant ways to nitpick, ways that things could be more precise. There is almost always some way a sentence could be misleading if it were read in a specific way by someone with certain background assumptions.
And too much of that is a massive amount of friction for no gain. You can drain someone’s energy with arbitrary pedantry, and not cut much closer to reality (especially not to the useful parts of reality).
Of course, it’s hard to always know where the usefulness lies, so we must give space for people to be pedantic. But we should also notice patterns over time of who is improving discourse through a desire for precision, and who is just producing a whole lot of friction.
(A rationalist said to me that this vice should really be known as nitpicking, not as pedantry, but I ignored their comment.)
3. Elaboration
Communication often takes effort, and it’s great that rationalists are often willing to put in the work. But sometimes I don’t need a 2,000-word comment or a 5-hour conversation; it’s just a waste of my time.
4. Social Obliviousness
Humans have a whole host of complicated social calculations going on inside them, whether explicitly in their conscious minds or implicitly in their lower-level feelings and emotions, tracking status and relationships and attitudes.
Speech can affect these different games all at once, and doing so diplomatically and without upsetting any of the games can be an effortful dance.
In order to focus on the truth of a matter, of what the evidence and arguments imply, it can be helpful to set aside those concerns for a while and just focus on the explicit topic of discussion.
I’m not saying it’s good to be unable to play these social games, the games of being politic and polite. I am saying it is good to be able not to play them.
However! Obliviousness can cause problems. In the naive case, failing to notice that you are causing someone major discomfort, or mounting a major attack on their status, can lead to social backfiring that you didn’t intend and could have avoided.
And worse, if you commit to a blind strategy, you can miss people optimizing adversarially against you. Perhaps people coordinate to make a certain position socially outcast while you’re just focusing on what’s true, and this ends with you either socially outcast yourself or unable to hold the position that the evidence and arguments led you to.
Putting on the blinders a little, and being a bit naive, can help, but if done too much or in the wrong circumstance, it can leave you vulnerable to large social fallout.
5. Assuming Good Faith
(Also known as ‘the principle of charity’ or ‘being a quokka’)
Going around hoping that people are trying to have actual debate and dialogue with you, and that they care about the truth, helps you get into cooperate-cooperate equilibria with others doing this, too.
Furthermore, by the magic of humans trying to fit in and playing the right role in a scene, if you expect good behavior from people and act in a way where that’s the only good way for them to join the social interaction, they will be encouraged to try it out. It’s an invitation to play a different game.
So, on the margin, going around behaving as though other people are acting in good faith will lead other people to join you in this.
However, too much of this and you will let sociopaths trick you. If someone’s job depends on them believing something, and you assume good faith as the explanation for why they’re not changing their mind, then you have just added an anti-epistemic anchor into your social space and blinded yourself to it. This will not go well for your epistemics. You want to be offering an olive branch of charity, not a fig leaf.
This is a special case of obliviousness that happens so commonly as to be called out as its own vice.
6. Undercutting Social Momentum
People often invest energy in bad ideas. Even worse, sometimes people invest energy in bad ideas together. Perhaps it’s a company idea that has no chance of working; or perhaps your friends are going out to an escape room together that you know will disappoint them.
It’s a natural and good response to lower the energy for bad ideas, to provide resistance. If someone in the group is willing to do this, it makes me more trusting of the group’s decision-making, that it won’t just get caught up in social momentum. (Our culture’s way to do this is counterargument, since counterargument is the kind of thing that correlates with the idea being good or bad.) But still, I think some people do this more due to temperament, and others do it out of bad habit—they get a rush from controlling the energies of a group.
A friend told me that their local rationalist meetup could never celebrate Petrov Day; it was only good at cutting down other cultures’ holidays, not at having its own. The group couldn’t sustain momentum for caring about something together; people were not able to join in. I think this is a flaw!
It is an old rationalist proverb (originally stated by P. C. Hodgell) that “that which can be destroyed by the truth should be”. Let it not fall into the pathological version of itself, “that which can be destroyed should be”, and let us remember its complement: “that which can be nourished by the truth, should be”.
7. Digging Your Heels In
(related: “Demanding Redress”, “Entitlement to Argument”)
It’s important to hold onto your principles, even when inconvenient.
However, sometimes rationalists will bring a social scene entirely to a halt over a not-especially-relevant difference in principles.
I have seen brief pleasantries before a meeting expand to take the whole meeting because an unrelated argument got started. I have seen someone spend an entire party arguing at someone in the corner about a minor point made many years prior. I have seen large group social activities aggressively hit the brakes because someone didn’t like the phrasing another person used. I have seen people spew thousands of words in comment sections, in exchanges that didn’t need to happen and weren’t worth it, continuing until they were no longer on good terms with their interlocutor.
It is good to take a stand on principles, and not let them wither or have people get something important past you. Yet this does not mean it is always the right call: sometimes the difference is irrelevant; sometimes it is too much effort to litigate right now, at the cost of a relationship, when you could just as easily bring it up at a later time; and oftentimes arguing the difference isn’t worth drowning out everything else.
Sometimes digging in your heels to defend a principle is not called for.
These, then, are seven vices of rationalists:
Contrarianism, Pedantry, Elaboration, Social Obliviousness, Assuming Good Faith, Undercutting Social Momentum, and Digging Your Heels In.
Related to contrarianism: not-invented-here syndrome. I think rationalists have a strong tendency to want to reinvent things their own way, bulldozing Chesterton’s fences, or just reinventing the wheel with rationalist flavor. Good in moderation, bad in excess.
I’m surprised you missed out the rationalist disdain for the world. In my experience, rationalists disproportionately dislike the world as it is. This goes all the way back to Eliezer writing The Sword of Good, and joking about being an alien from Dath Ilan. Of course, Eliezer also wrote Kindness to Kin, and Three Worlds Collide, which are (supposed to be) positive on the world. But I don’t hear about those so much.
The actual “good guy” in TSoG is a guy who everyone says is evil, who is casting the spell of Ultimate Doom, who wants to rip apart the existing world and build a new one entirely to his design. The moral of the story is that doing this unilaterally is the correct course of action. Well, we have those guys in the real world, they’re the AI company CEOs.
Is the world bad? Yeah, in some ways. But do you know what’s worse than the world? Having zero worlds because you blew it up.
In reality, the person who wants to cast the spell of Ultimate Doom and end the world is probably not an enlightened being who dislikes the world for Good and principled reasons. They just don’t make many humans like that. E.g. the Unabomber made up his environmentalist schtick to justify a hatred that came from a mixture of CIA experiments (though honestly they didn’t even sound that bad) and being rejected by a woman at work.
(I also think that the rationalist penchant for book reviews comes from a similar, but distinct flavour of disdain. The point of a rat-style book review is not to recommend the book to people, even conditionally. The point of a rat book review is to obviate the need to read the book. This means that after reading the review, you don’t need to engage with the non-rat world, you can just engage with a pre-filtered and summarized version of it.)
I think disdain for the state of the world is an interesting rationalist vice, as something that is healthy in moderation and destructive in extremis.
However I don’t currently see it as happening as much as the ones above, or as damagingly. One might think of the foolish Zizians, but I think their hatred for the world comes more from leftist philosophy / animal rights background, than from rationalist vices. Their rationalist vices look more like their endless and inaccurate writing about hemispheres.
I don’t agree on book reviews FWIW. I think the counterfactual is that most people don’t read the book, and that a book review, if done well, is a successful 80/20 of the book. I think of book reviews as a positive way that rationalists en masse engage with parts of the world outside of their worldview.
My sense is Eliezer or rationalists generally don’t hate the world.
I don’t like the world. I think the history of the world is negative on net, and that the current existence of the world is still net negative. I would not blow up the world, because I think the future can be very very good, and is unlikely to be very very bad. But if my p(doom) was 100% by 2100, I’d prefer the world blow up sooner rather than later.
But when I express this opinion, my sense is people overwhelmingly disagree. Like less than 1% of rationalists will agree with this.
Actually, out of curiosity: could people please agree/disagree react to this to signal agreement/disagreement with the above statement?
Like the statement is: “The total value of the world’s history so far has been net negative.”
I put an agree vote, but also an uncertainty react. This seems like a very hard question to answer, which depends on massive subquestions like “are insects conscious and do they experience pain?” and “are subcomponents of people independent consciousnesses that can be suffering even if the ‘ego’ isn’t experiencing suffering?”
But yes, on a straightforward accounting, brushing all the complexity aside, I think the factory farming alone probably outweighs all the value of human civilization.
I disagree; I think all the stories and adventures and loves and lives that people in the world have lived are worth quite a lot of torture, and it’s not naively the case that, if the torturous experiences are larger than the other experiences, this means they’re more important. I think a world that is primarily negative experiences can still be very meaningful and worthwhile.
I think it’s worth being a good part of a bad story, rather than there being no story at all.
I think most people haven’t been tortured very much, but those that endure terrible pain regularly are often willing to give up almost anything to make it stop. (This isn’t universal. Nietzsche was enduring terrible pain nearly constantly in his later life, and he would agree with you.)
I think it’s easy for someone to say “the meaning in the world outweighs even terrible suffering on a massive scale” when they’re not the one experiencing the nearly constant suffering.
That said, I’ll add to the (incomplete) list of uncertainties “how exactly does the axiological value of meaning relate to the axiological value of suffering?”
Or to put it another way, a part of me definitely resonates with...
Indeed, most of the personal meaning in my life is grounded in my choice of the role that I play in this small part of the whole story of civilization. I don’t decide my circumstances, but I do decide what sort of person I choose to be in meeting those circumstances. I am proud to choose the Good, and that is most of what is personally valuable to me.[1]
But I’m suspicious that the reason why “I think it’s worth being a good part of a bad story” feels appealing as a justification for existence is scope insensitivity.
I just literally cannot comprehend the horror of the terror and the suffering of billions of years of life killing and eating life on planet earth. If I could comprehend it, if I experienced it all myself, I think it would be extremely and straightforwardly obvious that the tiny fraction of total experience in which I could contextualize the whole thing as part of an epic story doesn’t come even remotely close to justifying the horror, and any claims to the contrary are cope.
As a side note, this is related to my strong dislike of John Wentworth’s recent categorization of human values vs. Goodness.
I do not resonate with his formulation of what Goodness is. Goodness, as I mean it, is definitively not “conformity to local norms”. In fact, goodness depends on non-conformity, insofar as local norms are often and regularly evil. Goodness is something like the reflective unfolding of a small number of innate principles, like fairness and kindness, which are tied to strong, innate, moral emotional reactions.
And this matters a lot to me because my relationship to Goodness is probably the largest component of what I find yummy.
For you, is this a quantitative question, or an “in principle” question? That is, could there exist some amount of extreme suffering that you would judge to outweigh the meaningful and worthwhile experiences?
Or is the sentiment more like “if there exists a single moment of meaning, that redeems all the pain and suffering of all of history”?
I don’t know. Some frames:
Scope sensitivity: Some amount of it should be able to outweigh a certain amount of meaning.
Virtue ethics: I am willing to push through a lot of suffering if it means something; the simple ratio of the two does not determine whether the overall thing is worthwhile.
Deontology: it does kind of differ based on whether you’re responsible for the suffering happening or not.
It’s also plausible to me that I am coming at this more from a deontological feeling of “One should not kill everyone even if one has a good reason” than from “The world is net positive”.
I agree that these are importantly different, and easily conflated!
(Note that this message and its parent are talking about different things: the parent talked about whether the current value is negative, and the child talked about whether the total value has been negative.)
Hmm, thanks. I meant the latter both times, but I see it’s unclear, so I’ll edit it so no one gets confused.
Okay so 20%, which means I was directionally correct /s
Furthermore, many of them will think that this is absurd.
Like “if you come to the conclusion that the world is bad, and it would be better if it didn’t exist, or that humanity is bad, and it would be better if it didn’t exist, then that’s a reductio ad absurdum, and that means you must have made an error somewhere.”
(I don’t share this view.)
Kindness to Kin is my favorite short story of his.
I think pedantry lends itself to an additional negative effect worse than added friction: trapping your priors. If your cognitive circuits are over-indexed on identifying local mistakes, then it’s easy to dismiss something when it fails the most salient checks you’re running. This isn’t a particularly rationalist thing, but it does seem like the kind of thing where a rationalist vice lends itself to fewer people having identified / fixed a flaw than otherwise.
I like the concept of “things that are good in moderation and harmful in excess”, but I don’t think that’s what’s usually meant by “vice”, and calling this a “list of rationalist vices” will easily connote that these are all bad. Especially if/when people start talking about them in casual conversation and saying things like “assuming good faith is a rationalist vice”, without also giving the context of how vice is being used in a non-standard sense. That would quickly lead to the list being understood in a way entirely different than you meant it.
If you don’t believe that the usage here is non-standard, consider that none of the meanings that dictionary.com offers for “vice” include “a thing that’s good in moderation” but are things like “an immoral or evil habit or practice”. Wikipedia also defines vice as “a practice, behaviour, habit or item generally considered morally wrong” or “a fault, a negative character trait, a defect, an infirmity, or a bad or unhealthy habit”.
Claude (Sonnet 4.5) also thought that your definition doesn’t match the classical sense.
But again, I do like the concept, even if I think the term is misleading! One possibility for a better one might be hormeses (singular hormese) - a made-up word referencing hormesis, the phenomenon where a low dose of something is beneficial but a high dose of it is harmful. It has the disadvantage that most people aren’t going to understand it without having it explained, but that also means that they’re not going to incorrectly think they understand what it means. And that gives you the chance to explain a neat concept to people who haven’t heard about it before.
Perhaps “7 Pathologies of Rationalists”.
or “7 Vicious Virtues”
I think “pathologies” also connotes something purely bad.
I’d say “things that are good in moderation and harmful in excess… and most people (in our community) do them in excess”.
Even better, we should have two different words for “doing it in moderation” and “doing it in excess”, but that would predictably end up with people saying that they are doing the former while they are doing the latter, or insisting that both words actually mean the same thing, only you use the former for the people you like and the latter for the people you dislike.
I am not even sure whether “contrarianism” refers to the former or the latter (to a systematically independent honest thinker, or to an annoying edgy clickbait poser—many people probably don’t even have separate mental buckets for these).
Assuming good faith? Hm. Definitely something some people suffer from, but rationalists in general? Not so sure?
In case you don’t know, it is a widespread meme (originating here) that rationalists are like quokkas and often unable to notice that people want to harm them or are taking a conflict theory stance.
It is not true in full generality of course, but I think it is a common pattern that someone writes something intended as an attack on the status of rationalists, and rationalists falsely read it as a friend inviting them to argue, and then keep talking, to the annoyance and confusion of the original person.
That sounds like an intentional response, and a way to fight against trolls. I’ve certainly used that strategy before.
Great post! It’s very cool that you self-demonstrated “how to not do it” in parts 3 and 4. I think the post would be even better if you did this in all 7 parts.
This is a great suggestion. However, I tried for a while and found it difficult. How does one display the opposite of contrarianism? In a virtue-of-silence kind of way, by believing (and sometimes stating) simple and boring truths when relevant. But the very act of putting emphasis on this also makes it contrarian (“I am so bold as to believe the obvious and good things”); I am reminded of Hitchens’ article on Salman Rushdie, which ends “And, complex though it all is, it has elements of simplicity too. One must side with Salman Rushdie not because he is an underdog but because there is no other side to be on.”
Open to hearing recommendations!
I’d add Hubris too. Intelligence is a virtue, but it comes with a weakness: not listening to those who are viewed as less intelligent for one reason or another. Also, intelligence and wisdom in one field make someone more likely to fall for the Dunning-Kruger effect in others. We must be mindful of this trap, and seek to err on the side of humility.
Hubris can take other forms too, such as presuming a solution will scale or apply universally. Expanding Internet access is great, but if the people being serviced don’t have electricity, there isn’t much point. Providing resources to poor families may be less effective than just giving them money, so they can do their own prioritizations. If we are going to help people, those people should have a say in the solutions.
Are you positing an associative or causal connection from increased intelligence to a decreased ability or motivation to listen to those deemed less intelligent? This is a complex area; I agree with statisticians who promote putting one’s causal model front and center.
This should be promoted to the front page.
This. Sometimes I feel that LW articles come off as borderline pretentious. Not everything needs an essay—especially not one packed with overly formal language.