The Craft and the Community
This sequence ran from March to April of 2009 and dealt with the topic of building rationalist communities that could systematically improve on the art, craft, and science of human rationality. This is a highly forward-looking sequence—not so much an immediately complete recipe, as a list of action items and warnings for anyone setting out in the future to build a craft and a community.
Raising the Sanity Waterline: Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine—getting rid of the canary doesn’t get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?
A Sense That More Is Possible: The art of human rationality may not have been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician—more like that of a strong casual amateur. Self-proclaimed “rationalists” don’t seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.
Epistemic Viciousness: An essay by Gillian Russell on “Epistemic Viciousness in the Martial Arts” generalizes amazingly to possible and actual problems with building a community around rationality. Most notably the extreme dangers associated with “data poverty”—the difficulty of testing the skills in the real world. But also such factors as the sacredness of the dojo, the investment in teachings long-practiced, the difficulty of book learning that leads into the need to trust a teacher, deference to historical masters, and above all, living in data poverty while continuing to act as if the luxury of trust is possible.
Schools Proliferating Without Evidence: The branching schools of “psychotherapy”, another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness—that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it’s a good measurement, you get good science.
3 Levels of Rationality Verification: How far the craft of rationality can be taken depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A “reputational” test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say): “keeping it real”, but without being able to break down exactly what was responsible for success. An “experimental” test is one that can be run on each of a hundred students (such as a well-validated survey). An “organizational” test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of solution invented at each level will determine how far the craft of rationality can go in the real world.
Why Our Kind Can’t Cooperate: The atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd, aka “the nonconformist cluster”, seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud, as they are eager to voice disagreements—the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn’t help either.
Tolerate Tolerance: One of the likely characteristics of someone who sets out to be a “rationalist” is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people’s tolerance—to avoid rejecting them because they tolerate people you wouldn’t—since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots—so long as they don’t literally believe the same ideas themselves—try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.
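The lock-in can be seen in a toy payoff calculation (the numbers here are mine, purely illustrative): once deviating from a norm is punished, and failing to punish deviators is itself punished, conforming-and-punishing beats both alternatives even when the norm itself is pure cost.

```python
# Toy model of "punishment of non-punishers" (illustrative numbers only).
# A worthless norm costs c to follow; being punished costs p.
c, p = 1, 5

follow_and_punish = -c        # pay the norm's cost, escape all punishment
deviate = -p                  # skip the cost, but get punished as a deviator
tolerate_deviators = -c - p   # follow the norm, yet get meta-punished
                              # for failing to punish deviators

# Conforming and punishing is the best available move, so the norm
# persists even though it benefits no one (any p > c suffices).
assert follow_and_punish > deviate
assert follow_and_punish > tolerate_deviators
```

The point of the sketch is that nothing in the payoffs depends on the norm's content; a completely arbitrary rule is held in place by the second layer of punishment alone.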
You’re Calling Who A Cult Leader?: Paul Graham gets exactly the same accusations about “cults” and “echo chambers” and “coteries” that I do, in exactly the same tone—e.g. comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults, or claiming that founders were asked to move to the Bay Area startup hub as a cult tactic of separation from friends and family. This is bizarre, considering our relative surface risk factors. It just seems to be a failure mode of the nonconformist community in general. By far the most cultish-looking behavior on Hacker News is people trying to show off how willing they are to disagree with Paul Graham, which, I can personally testify, feels really bizarre when you’re the target. Admiring someone shouldn’t be so scary—I don’t hold back so much when praising e.g. Douglas Hofstadter; in this world there are people who have pulled off awesome feats and it is okay to admire them highly.
On Things That Are Awesome: Seven followup thoughts: I can list more than one thing that is awesome; when I think of “Douglas Hofstadter” I am really thinking of his all-time greatest work; the greatest work is not the person; when we imagine other people we are imagining their output, so the real Douglas Hofstadter is the source of “Douglas Hofstadter”; I most strongly get the sensation of awesomeness when I see someone outdoing me overwhelmingly, at some task I’ve actually tried; we tend to admire unique detailed awesome things and overlook common nondetailed awesome things; religion and its bastard child “spirituality” tends to make us overlook human awesomeness.
Your Price For Joining: The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: How much do you demand that an existing group adjust toward you, before you will adjust toward it? Our hunter-gatherer instincts will be tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this—with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money. Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn’t worth personally fixing by however much effort it takes, it’s not worth a refusal to contribute.
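The arithmetic of that 50-way split can be made explicit (a minimal sketch; the percentages come from the summary above, the helper function is mine):

```python
# A deal happens only if everyone's demanded share fits inside the whole.
def deal_possible(demand_percents):
    return sum(demand_percents) <= 100

greedy = [20] * 50   # each of 50 players demands 20% of the money
modest = [2] * 50    # each demands an even 2% share

print(deal_possible(greedy))  # 1000% demanded in total -> False
print(deal_possible(modest))  # exactly 100% demanded -> True
```

Fifty players each demanding 20% are collectively demanding ten times the pot; everyone walks away with nothing, which is the failure mode of setting one's joining price too high.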
Can Humanism Match Religion’s Output?: Anyone with a simple and obvious charitable project—responding with food and shelter to a tidal wave in Thailand, say—would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participation?
Church vs. Taskforce: Churches serve a role of providing community—but they aren’t explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There’s a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.
Rationality: Common Interest of Many Causes: Many causes benefit particularly from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.
Helpless Individuals: When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals—research isn’t a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.
Money: The Unit of Caring: Omohundro’s resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called “money” and it is the unit of how much society cares about something—a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale—the reason we’re not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours.
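A back-of-the-envelope comparison makes the comparative-advantage point concrete (all figures here are hypothetical, chosen only for illustration):

```python
# Compare one hour of unskilled volunteering against one hour worked
# in one's own specialty with the earnings donated. Numbers are made up.
specialist_wage = 60.0   # $/hour earned in one's own profession
unskilled_value = 10.0   # $/hour of value produced volunteering unskilled
charity_wage = 15.0      # $/hour the charity pays its own trained staff

# Donating the hour's earnings funds this many staff hours instead:
staff_hours_funded = specialist_wage / charity_wage   # 4.0 hours

assert specialist_wage > unskilled_value  # the donated hour buys more help
```

Under these made-up wages, an hour worked and donated funds four hours of trained staff time, while an hour volunteered produces ten dollars of unskilled labor; the ratio, not the particular numbers, is the point.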
Purchase Fuzzies and Utilons Separately: Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies).
Selecting Rationalist Groups: Trying to breed e.g. egg-laying chickens by individual selection can produce odd side effects on the farm level, since a more dominant hen can produce more egg mass at the expense of other hens. Group selection is nearly impossible in Nature, but easy to impose in the laboratory, and group-selecting hens produced substantial increases in efficiency. Though most of my essays are about individual rationality—and indeed, Traditional Rationality also praises the lone heretic more than evil Authority—the real effectiveness of “rationalists” may end up determined by their performance in groups.
Incremental Progress and the Valley: The optimality theorems for probability theory and decision theory, are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that given enough progress, one can achieve huge improvements over baseline—it just takes a lot of progress to get there.
Whining-Based Communities: Many communities feed emotional needs by offering their members someone or something to blame for failure—say, those looters who don’t approve of your excellence. You can easily imagine some group of “rationalists” congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. But this is not how real rationality works—there’s no assumption that other agents are rational. We all face unfair tests (and yes, they are unfair to different degrees for different people); and how well you do with your unfair tests, is the test of your existence. Rationality is there to help you win anyway, not to provide a self-handicapping excuse for losing. There are no first-person extenuating circumstances. There is absolutely no point in going down the road of mutual bitterness and consolation, about anything, ever.
Mandatory Secret Identities: This post was not well-received, but the point was to suggest that a student must at some point leave the dojo and test their skills in the real world. The aspiration of an excellent student should not consist primarily of founding their own dojo and having their own students.
Beware of Other-Optimizing: Aspiring rationalists often vastly overestimate their own ability to optimize other people’s lives. They read nineteen webpages offering productivity advice that doesn’t work for them… and then encounter the twentieth page, or invent a new method themselves, and wow, it really works—they’ve discovered the true method. Actually, they’ve just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person—for then you’ll just believe that they aren’t trying hard enough.
Akrasia and Shangri-La: The Shangri-La diet works amazingly well for some people, but completely fails for others, for no known reason. Since the diet has a metabolic rationale and is not supposed to require willpower, its failure in my and other cases is unambiguously mysterious. If it required a component of willpower, then I and others might be tempted to blame ourselves for not having willpower. The art of combating akrasia (willpower failure) has the same sort of mysteries and is in the same primitive state; we don’t know the deeper rule that explains why a trick works for one person but not another.
Collective Apathy and the Internet: The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.
Bayesians vs. Barbarians: Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There’s a certain concept of “rationality” which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm’s way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians… And then there’s the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...
Of Gender and Rationality: Analysis of the gender imbalance that appears in “rationalist” communities, suggesting nine possible causes of the effect, and possible corresponding solutions.
My Way: I sometimes think of myself as being like the protagonist in a classic SF labyrinth story, wandering further and further into some alien artifact, trying to radio back a description of what I’m seeing, so that I can be followed. But what I’m finding is not just the Way, the thing that lies at the center of the labyrinth; it is also my Way, the path that I would take to come closer to the center, from whatever place I started out. And yet there is still a common thing we are all trying to find. We should be aware that others’ shortest paths may not be the same as our own, but this is not the same as giving up the ability to judge or to share.
The Sin of Underconfidence: When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That’s what makes a lot of cognitive subtasks so troublesome—you know you’re biased but you’re not sure how much, and if you keep tweaking you may overcorrect. The danger of underconfidence (overcorrecting for overconfidence) is that you pass up opportunities on which you could have been successful; not challenging difficult enough problems; losing forward momentum and adopting defensive postures; refusing to put the hypothesis of your inability to the test; losing enough hope of triumph to try hard enough to win. You should ask yourself “Does this way of thinking make me stronger, or weaker?”
Well-Kept Gardens Die By Pacifism: Good online communities die primarily by refusing to defend themselves, and so it has been since the days of Eternal September. Anyone acculturated by academia knows that censorship is a very grave sin… in their walled gardens where it costs thousands and thousands of dollars to enter. A community with internal politics will treat any attempt to impose moderation as a coup attempt (since internal politics seem of far greater import than invading barbarians). In rationalist communities this is probably an instance of underconfidence—mildly competent moderators are probably quite trustworthy to wield the banhammer. On Less Wrong, the community is the moderator (via karma) and you will need to trust yourselves enough to wield the power and keep the garden clear.
Practical Advice Backed By Deep Theories: Practical advice is genuinely much, much more useful when it’s backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn’t have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there’s a distinctive LW style, this is it.
Less Meta: The fact that this final series was on the craft and the community seems to have delivered a push in something of the wrong direction, (a) steering toward conversation about conversation and (b) making present accomplishment pale in the light of grander dreams. Time to go back to practical advice and deep theories, then.
Go Forth and Create the Art!: I’ve developed primarily the art of epistemic rationality, in particular, the arts required for advanced cognitive reductionism… arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality—fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature… And yet it seems to me that there is a beginning barrier to surpass before you can start creating high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or indeed even realizing that a craft of rationality exists and that you ought to be studying cognitive science literature to create it. It’s my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.
“...then there’s the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...”
Why should rationalists necessarily care a lot about other people? If we are to avoid circular altruism and the nefarious effects of other-optimizing, the best amount of caring might be less than “a lot.”
Additionally, caring about other people in the sense of seeking emotional gratification primarily in tribe-like social rituals may be truly inimical to dedicating one’s life to theoretical physics, math, or any other far-thinking discipline.
Caring about other people may entail involvement in politics, and local politics can be just as mind-killing as national politics.
Sorry to answer a 5-year-old post, but apparently people read these things. You asked “Why should rationalists necessarily care a lot about other people,” but all the post said was that they should be able to.
They shouldn’t, particularly. End goals are not a part of rationality, rationality exists to achieve them.
However, many end goals can be more easily achieved by getting help from others. If your end goals are like this, it’s rational for you to solve group coordination problems and care about other people.
I don’t think that b is necessarily an immediate entailment of rationality, but a condition that can be met simultaneously with a and c. The post presents a situation where c is satisfied only through a and b. (It does not take much finagling to suppose that a lonesome mountain-man existence in a world ruled by barbarians is inferior in fuzzies and utilons to the expectation of the world where a, b, and c all hold.)
Good point Bacon. I’ve been wondering where the implicit assumption that rational agents have an altruistic agenda came from. The assumption seems to permeate a rather large number of posts.
When Omega offers to save lives, why do I care? To be perfectly honest, my own utility function suggests that those extra billions are a liability to my interests.
When I realise that my altruistic notions are in conflict with my instinctive drive for status and influence, why do I “need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness”? If anything it seems somewhat more rational to acknowledge the drive for status and self-interest as the key component and satisfy those criteria more effectively.
This isn’t to say I don’t have an altruistic agenda that I pursue. It is just that I don’t see that agenda itself as ‘rational’ at all. It is somewhere between merely arbitrary and ‘slightly irrational’.
With that caveat, this summary and plenty of the posts contained within are damn useful!
“With that caveat, this summary and plenty of the posts contained within are damn useful!”
I resoundingly agree.
That said, Eliezer is attempting to leverage the sentiments we now call “altruistic” into efficient other-optimizing. What if all people are really after is warm fuzzies? Mightn’t they then shrink from the prospect of optimally helping others?
Hobbes gives us several possible reasons for altruism, none of which seem to be conducive to effective helping:
“When the transferring of right is not mutual, but one of the parties transferreth in hope to gain thereby friendship or service from another, or from his friends; or in hope to gain the reputation of charity, or magnanimity; or to deliver his mind from the pain of compassion [self-haters give more?]; or in hope of reward in heaven; this is not contract, but gift, free gift, grace: which words signify one and the same thing.”
There is also the problem of epistemic limitations around other-optimizing. Charity might remove more utilons from the giver than it bestows upon the receiver, if only because it’s difficult to know what other people need and easier to know what oneself needs.
“Mightn’t” we shrink from optimal helping? “Might” charity be usually an imbalance of utilons?
Yes, we might, it might.
These are important considerations—I don’t mean to denigrate clear thinking. But to rest content with hypothetical reasons why something wouldn’t work (a common hidden laziness of most humans, which we convince ourselves stems from nobler and more reasonable motives) is to completely miss the most crucial point of this entire Sequence: actually doing something, and testing it.
I think it’s safe to say that the natural inclination of most humans isn’t initiating large projects with high but uncertain reward. It’s to “just get by”, a fact which I must thank you, good sir, for illustrating… it was intentional, right?
I like summary posts like this, by the way. It makes it much easier to find what I am looking for later and helps get the wiki started.
What does “paying-customer priority” refer to in the above sentence? Is ‘paying’ being used as a verb or is “paying-customer priority” something that is being donated?
There’s an interesting Google Tech Talk video on how to create an online community with desired attributes (a la stackoverflow.com): http://www.youtube.com/watch?v=NWHfY_lvKIQ