Why Our Kind Can’t Cooperate

From when I was still forced to attend, I remember our synagogue’s annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul’s expenses and how vital this annual fundraise was, and then the synagogue’s members called out their pledges from their seats.

Straightforward, yes?

Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase “empirical cluster in personspace” then you know who I’m talking about.)

I crafted the fundraising appeal with care. By my nature I’m too proud to ask other people for help; but I’ve gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year’s annual appeal. I sent it out to several mailing lists that covered most of our potential support base.

And almost immediately, people started posting to the mailing lists about why they weren’t going to donate. Some of them raised basic questions about the nonprofit’s philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn’t volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)

Now you might say, “Well, maybe your mission and philosophy did have basic problems—you wouldn’t want to censor that discussion, would you?”

Hold on to that thought.

Because people were donating. We started getting donations right away, via Paypal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, “I decided to give **** a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all.”

But none of those donors posted their agreement to the mailing list. Not one.

So far as any of those donors knew, they were alone. And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn’t have donated. The criticisms, the justifications for not donating—only those were displayed proudly in the open.

As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.

I know someone with a rationalist cause who goes around plaintively asking, “How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can’t even get a thousand people working on this?”

The obvious wrong way to finish this thought is to say, “Let’s do what the Raelians do! Let’s add some nonsense to this meme!” For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous. And the Dark Side may require non-obvious skills, which you, yes you, do not have: Not everyone can be a Sith Lord. In particular, if you talk about your planned lies on the public Internet, you fail. I’m no master criminal, but even I can tell certain people are not cut out to be crooks.

So it’s probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers. That path leads to—pardon the expression—the Dark Side.

But it probably does make sense to start asking ourselves some pointed questions, if supposed “rationalists” can’t manage to coordinate as well as a flying-saucer cult.

How do things work on the Dark Side?

The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves. So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.

(“Pluralistic ignorance” is the standard label for this.)

If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.

(I call that “evaporative cooling of groups”.)

The ideas themselves, not just the leader, generate unbounded enthusiasm and praise. The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated. This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.

(Which I term the “affective death spiral”.)

So these are all examples of strong Dark Side forces that can bind groups together.

And presumably we would not go so far as to dirty our hands with such...

Therefore, as a group, the Light Side will always be divided and weak. Atheists, libertarians, technophiles, nerds, science-fiction fans, scientists, or even non-fundamentalist religions, will never be capable of acting with the fanatic unity that animates radical Islam. Technological advantage can only go so far; your tools can be copied or stolen, and used against you. In the end the Light Side will always lose in any group conflict, and the future inevitably belongs to the Dark.

I think that a person’s reaction to this prospect says a lot about their attitude towards “rationality”.

Some “Clash of Civilizations” writers seem to accept that the Enlightenment is destined to lose out in the long run to radical Islam, and sigh, and shake their heads sadly. I suppose they’re trying to signal their cynical sophistication or something.

For myself, I always thought—call me loony—that a true rationalist ought to be effective in the real world.

So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.

You would think, perhaps, that real rationalists ought to be more coordinated? Surely all that unreason must have its disadvantages? That mode can’t be optimal, can it?

And if current “rationalist” groups cannot coordinate—if they can’t support group projects so well as a single synagogue draws donations from its members—well, I leave it to you to finish that syllogism.

There’s a saying I sometimes use: “It is dangerous to be half a rationalist.”

For example, I can think of ways to sabotage someone’s intelligence by selectively teaching them certain methods of rationality. Suppose you taught someone a long list of logical fallacies and cognitive biases, and trained them to spot those fallacies and biases in other people’s arguments. But you were careful to pick those fallacies and biases that are easiest to accuse others of, the most general ones that can easily be misapplied. And you did not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws. So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don’t like. This, I suspect, is one of the primary ways that smart people end up stupid. (And note, by the way, that I have just given you another Fully General Counterargument against smart people whose arguments you don’t like.)

Similarly, if you wanted to ensure that a group of “rationalists” never accomplished any task requiring more than one person, you could teach them only techniques of individual rationality, without mentioning anything about techniques of coordinated group rationality.

I’ll write more later (tomorrow?) on how I think rationalists might be able to coordinate better. But today I want to focus on what you might call the culture of disagreement, or even, the culture of objections, which is one of the two major forces preventing the atheist/libertarian/technophile crowd from coordinating.

Imagine that you’re at a conference, and the speaker gives a 30-minute talk. Afterward, people line up at the microphones for questions. The first questioner objects to the logarithmic scale used in the graph on slide 14; he quotes Tufte on The Visual Display of Quantitative Information. The second questioner disputes a claim made in slide 3. The third questioner suggests an alternative hypothesis that seems to explain the same data...

Perfectly normal, right? Now imagine that you’re at a conference, and the speaker gives a 30-minute talk. People line up at the microphone.

The first person says, “I agree with everything you said in your talk, and I think you’re brilliant.” Then steps aside.

The second person says, “Slide 14 was beautiful, I learned a lot from it. You’re awesome.” Steps aside.

The third person—

Well, you’ll never know what the third person at the microphone had to say, because by this time, you’ve fled screaming out of the room, propelled by a bone-deep terror as if Cthulhu had erupted from the podium, the fear of the impossibly unnatural phenomenon that has invaded your conference.

Yes, a group which can’t tolerate disagreement is not rational. But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational. You’re only willing to hear some honest thoughts, but not others. You are a dangerous half-a-rationalist.

We are as uncomfortable together as flying-saucer cult members are uncomfortable apart. That can’t be right either. Reversed stupidity is not intelligence.

Let’s say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Doing worse with more knowledge means you are doing something very wrong. You should always be able to at least implement the same strategy you would use if you were ignorant, and preferably do better. You definitely should not do worse. If you find yourself regretting your “rationality” then you should reconsider what is rational.

On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge. I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

We would seem to be stuck in an awful valley of partial rationality where we end up more poorly coordinated than religious fundamentalists, able to put forth less effort than flying-saucer cultists. True, what little effort we do manage to put forth may be better-targeted at helping people rather than the reverse—but that is not an acceptable excuse.

If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity. One day everyone shows up dressed differently, another day they all show up in uniform. You’ve got to cover both sides, or you’re only half a rationalist.

Can you imagine training prospective rationalists to wear a uniform and march in lockstep, and practice sessions where they agree with each other and applaud everything a speaker on a podium says? It sounds like unspeakable horror, doesn’t it, like the whole thing has admitted outright to being an evil cult? But why is it not okay to practice that, while it is okay to practice disagreeing with everyone else in the crowd? Are you never going to have to agree with the majority?

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus. We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others’ arguments. Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society. No, we’re not losing because we’re so superior, we’re losing because our exclusively individualist traditions sabotage our ability to cooperate.

The other major component that I think sabotages group efforts in the atheist/libertarian/technophile/etcetera community, is being ashamed of strong feelings. We still have the Spock archetype of rationality stuck in our heads, rationality as dispassion. Or perhaps a related mistake, rationality as cynicism—trying to signal your superior world-weary sophistication by showing that you care less than others. Being careful to ostentatiously, publicly look down on those so naive as to show they care strongly about anything.

Wouldn’t it make you feel uncomfortable if the speaker at the podium said that he cared so strongly about, say, fighting aging, that he would willingly die for the cause?

But it is nowhere written in either probability theory or decision theory that a rationalist should not care. I’ve looked over those equations and, really, it’s not in there.

The best informal definition I’ve ever heard of rationality is “That which can be destroyed by the truth should be.” We should aspire to feel the emotions that fit the facts, not aspire to feel no emotion. If an emotion can be destroyed by truth, we should relinquish it. But if a cause is worth striving for, then let us by all means feel fully its importance.

Some things are worth dying for. Yes, really! And if we can’t get comfortable with admitting it and hearing others say it, then we’re going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects. You’ve got to teach both sides of it, “That which can be destroyed by the truth should be,” and “That which the truth nourishes should thrive.”

I’ve heard it argued that the taboo against emotional language in, say, science papers, is an important part of letting the facts fight it out without distraction. That doesn’t mean the taboo should apply everywhere. I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry. When there’s something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

We need to keep our efforts to expose counterproductive causes and unjustified appeals, from stomping on tasks that genuinely need doing. You need both sides of it—the willingness to turn away from counterproductive causes, and the willingness to praise productive ones; the strength to be unswayed by ungrounded appeals, and the strength to be swayed by grounded ones.

I think the synagogue at their annual appeal had it right, really. They weren’t going down row by row and putting individuals on the spot, staring at them and saying, “How much will you donate, Mr. Schwartz?” People simply announced their pledges—not with grand drama and pride, just simple announcements—and that encouraged others to do the same. Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them. That’s probably about the way things should be in a sane human community—taking into account that people often have trouble getting as motivated as they wish they were, and can be helped by social encouragement to overcome this weakness of will.

But even if you disagree with that part, then let us say that both supporting and countersupporting opinions should have been publicly voiced. Supporters being faced by an apparently solid wall of objections and disagreements—even if it resulted from their own uncomfortable self-censorship—is not group rationality. It is the mere mirror image of what Dark Side groups do to keep their followers. Reversed stupidity is not intelligence.