Studies On Slack

I.

Imagine a distant planet full of eyeless animals. Evolving eyes is hard: they need to evolve Eye Part 1, then Eye Part 2, then Eye Part 3, in that order. Each of these requires a separate series of rare mutations.

Here on Earth, scientists believe each of these mutations must have had its own benefits – in the land of the blind, the man with only Eye Part 1 is king. But on this hypothetical alien planet, there is no such luck. You need all three Eye Parts or they’re useless. Worse, each Eye Part is metabolically costly; the animal needs to eat 1% more food per Eye Part it has. An animal with a full eye would be much more fit than anything else around, but an animal with only one or two Eye Parts will be at a small disadvantage.

So these animals will only evolve eyes in conditions of relatively weak evolutionary pressure. In a world of intense and perfect competition, where the fittest animal always survives to reproduce and the least fit always dies, the animal with Eye Part 1 will always die – it’s less fit than its fully-eyeless peers. The weaker the competition, and the more randomness dominates over survival-of-the-fittest, the more likely an animal with Eye Part 1 can survive and reproduce long enough to eventually produce a descendant with Eye Part 2, and so on.
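
Here’s that dynamic as code, for the simulation-minded. This is a minimal sketch, and every number in it – the population size, the mutation rate, the 10x payoff for a full eye, the single “strength” knob standing in for intensity of competition – is an invented assumption for illustration:

```python
import math
import random

# Toy Wright-Fisher-style model of the eyeless planet. Genotype = number of
# Eye Parts (0-3); each incomplete part costs 1% fitness, a full eye is a
# big win. All parameters are made up.

def fitness(eye_parts):
    return 10.0 if eye_parts == 3 else 1.0 - 0.01 * eye_parts

def generations_to_evolve_eyes(strength, pop=100, mut_rate=0.005, max_gen=20_000):
    genomes = [0] * pop  # everyone starts fully eyeless
    for gen in range(max_gen):
        # Selection: parents are drawn in proportion to fitness^strength.
        # strength=0 would be pure random drift; a huge strength approximates
        # perfect competition. (Computed in log space to avoid overflow.)
        top = max(fitness(g) for g in genomes)
        weights = [math.exp(strength * math.log(fitness(g) / top)) for g in genomes]
        genomes = random.choices(genomes, weights=weights, k=pop)
        # Mutation: occasionally gain or lose one Eye Part.
        genomes = [min(3, max(0, g + random.choice((-1, 1))))
                   if random.random() < mut_rate else g
                   for g in genomes]
        if sum(g == 3 for g in genomes) > pop // 2:
            return gen  # the full eye has caught on
    return None  # never evolved within the time limit

for strength in (2, 1000):
    print(f"selection strength {strength}:",
          [generations_to_evolve_eyes(strength) for _ in range(3)])
```

Under weak selection the population usually drifts across the fitness valley and eyes take over; under near-perfect competition, animals carrying a costly intermediate Eye Part die off before their descendants can finish the job, and runs typically hit the time limit.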

There are lots of ways to decrease evolutionary pressure. Maybe natural disasters often decimate the population, dozens of generations are spent recolonizing empty land, and during this period there’s more than enough for everyone and nobody has to compete. Maybe there are frequent whalefalls, and any animal nearby has hit the evolutionary jackpot and will have thousands of descendants. Maybe the population is isolated in little islands and mountain valleys, and one gene or another can reach fixation in a population totally by chance. It doesn’t matter exactly how it happens, it matters that evolutionary pressure is low.

The branch of evolutionary science that deals with this kind of situation is called “adaptive fitness landscapes”. Landscapes really are a great metaphor – consider a sloping terrain where a shallow puddle sits behind a tiny hillock, with a much deeper pit on the hillock’s far side.

You pour out a bucket of water. Water “flows downhill”, so it’s tempting to say something like “water wants to be at the lowest point possible”. But that’s not quite right. The lowest point possible is the pit, and water won’t go there. It will just sit in the little puddle forever, because it would have to go up the tiny little hillock in order to get to the pit, and water can’t flow uphill. Using normal human logic, we feel tempted to say something like “Come on! The hillock is so tiny, and that pit is so deep, just make a single little exception to your ‘always flow downhill’ policy and you could do so much better for yourself!” But water stubbornly refuses to listen.

Under conditions of perfectly intense competition, evolution works the same way. We imagine a multidimensional evolutionary “landscape” where lower ground represents higher fitness. In this perfectly intense competition, organisms can go from higher to lower ground, but never vice versa. As with water, the tiniest hillock will leave their potential forever unrealized.

Under more relaxed competition, evolution only tends probabilistically to flow downhill. Every so often, it will flow uphill; the smaller the hillock, the more likely evolution will surmount it. Given enough time, it’s guaranteed to reach the deepest pit and mostly stay there.
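
In optimization terms – my analogy, not part of the original story – this is the trick behind simulated annealing: a tolerated dose of randomness, the “temperature”, plays the role of slack. A sketch, on a made-up one-dimensional version of the landscape above:

```python
import math
import random

# Discretized landscape: a puddle at x=0, a small hillock at x=2, a deep
# pit at x=4. All heights are invented.
HEIGHT = {0: 0.0, 1: 0.5, 2: 1.0, 3: -2.0, 4: -5.0}

def settle(temperature, steps=10_000):
    x = 0  # start in the puddle
    for _ in range(steps):
        nxt = min(4, max(0, x + random.choice((-1, 1))))
        dh = HEIGHT[nxt] - HEIGHT[x]
        # Downhill moves are always accepted; uphill moves are accepted
        # with a probability that shrinks as the climb gets steeper and
        # the temperature lower.
        if dh <= 0 or (temperature > 0 and random.random() < math.exp(-dh / temperature)):
            x = nxt
    return x

for t in (0.0, 0.3):
    print(f"temperature {t}: settled at x={settle(t)} (the pit is x=4)")
```

The zero-temperature walker is the water: it never climbs, so it sits in the puddle forever. The slightly warm one wanders over the hillock and falls into the pit it can no longer easily leave.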

Take a moment to be properly amazed by this. It sounds like something out of the Tao Te Ching. An animal with eyes has very high evolutionary fitness. It will win at all its evolutionary competitions. So in order to produce the highest-fitness animal, we need to – select for fitness less hard? In order to produce an animal that wins competitions, we need to stop optimizing for winning competitions?

This doesn’t mean that less competition is always good. An evolutionary environment with no competition won’t evolve eyes either; a few individuals might randomly drift into having eyes, but they won’t catch on. In order to optimize the species as much as possible as fast as possible, you need the right balance, somewhere in the middle between total competition and total absence of competition.

In the esoteric teachings, total competition is called Moloch, and total absence of competition is called Slack. Slack (thanks to Zvi Mowshowitz for the term and concept) gets short shrift. If you think of it as “some people try to win competitions, other people don’t care about winning competitions and slack off and go to the beach”, you’re misunderstanding it. Think of slack as a paradox – the Taoist art of winning competitions by not trying too hard at them. Moloch and Slack are opposites and complements, like yin and yang. Neither is stronger than the other, but their interplay creates the ten thousand things.

II.

Before we discuss slack further, a digression on group selection.

Some people would expect this discussion to be quick, since group selection doesn’t exist. These people understand it as evolution acting for the good of a species. It’s a tempting way to think, because evolution usually eventually makes species stronger and more fit, and sometimes we colloquially round that off to evolution targeting a species’ greater good. But inevitably we find evolution is awful and does absolutely nothing of the sort.

Imagine an alien planet that gets hit with a solar flare once an eon, killing all unshielded animals. Sometimes unshielded animals spontaneously mutate to shielded, and vice versa. Shielded animals are completely immune to solar flares, but have 1% higher metabolic costs. What happens? If you predicted “magnetic shielding reaches fixation and all animals get it”, you’ve fallen into the group selection trap. The unshielded animals outcompete the shielded ones during the long inter-flare period, driving their population down to zero (though a few new shielded ones arise every generation through spontaneous mutations). When the flare comes, only the few spontaneous mutants survive. They breed a new entirely-shielded population, until a few unshielded animals arise through spontaneous mutation. The unshielded outcompete the shielded ones again, and by the time of the next solar flare, the population is 100% unshielded again and they all die. If the animals are lucky, there will always be enough spontaneously-mutated shielded animals to create a post-flare breeding population; if they are unlucky, the flare will hit at a time with unusually few such mutants, and the species will go extinct.
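
Here’s the cycle as a back-of-the-envelope calculation, tracking only the shielded fraction of the population. The 1% cost is from the story; the flare interval and mutation rate are invented:

```python
def solar_flare_planet(flare_every=2_000, mut_rate=1e-4, generations=6_000):
    shielded = 1.0  # fraction of the population carrying the shield
    for gen in range(1, generations + 1):
        # Selection: unshielded animals out-reproduce shielded ones by 1%.
        shielded = 0.99 * shielded / (0.99 * shielded + (1.0 - shielded))
        # Mutation keeps a trickle of each type around.
        shielded = shielded * (1 - mut_rate) + (1 - shielded) * mut_rate
        if gen % flare_every == 0:
            print(f"gen {gen}: flare hits when only {shielded:.1%} are shielded")
            shielded = 1.0  # the whole next generation descends from survivors

solar_flare_planet()
```

Between flares the shielded fraction collapses toward the mutation floor; each flare resets it to 100%; and the sawtooth repeats forever.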

An Evolution Czar concerned with the good of the species would just declare that all animals should be shielded and solve the problem. In the absence of such a Czar, these animals will just keep dying in solar-flare-induced mass extinctions forever, even though there is an easy solution with only 1% metabolic cost.

A less dramatic version of the same problem happens here on Earth. Every so often predators (let’s say foxes) reproduce too quickly and outstrip the available supply of prey (let’s say rabbits). There is a brief period of starvation as foxes can’t find any more rabbits and die en masse. This usually ends with a boom-bust cycle: after most foxes die, the rabbits (who reproduce very quickly and are now free of predation) have a population boom; now there are rabbits everywhere. Eventually the foxes catch up, eat all the new rabbits, and the cycle repeats again. It’s a waste of resources for foxkind to spend so much of its time and energy breeding a huge population of foxes that will inevitably collapse a generation later; an Evolution Czar concerned with the common good would have foxes limit their breeding at a sustainable level. But since individual foxes that breed excessively are more likely to have their genes represented in the next generation than foxes that breed at a sustainable level, we end up with foxes that breed excessively, and the cycle continues.

(but humans are too smart to fall for this one, right?)
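
The textbook formalization of the fox-rabbit story is the Lotka-Volterra predator-prey model. A crude Euler-integration sketch, with every rate invented purely for illustration:

```python
# Rabbits breed on their own; foxes breed by eating rabbits and starve
# without them. Watch the two populations chase each other in circles.
rabbits, foxes, dt = 100.0, 20.0, 0.01
for step in range(3_001):
    if step % 250 == 0:
        print(f"t={step * dt:5.1f}: {rabbits:6.1f} rabbits, {foxes:6.1f} foxes")
    d_rabbits = rabbits * (1.0 - 0.02 * foxes)  # growth minus predation
    d_foxes = foxes * (0.01 * rabbits - 0.5)    # well-fed foxes breed, hungry ones starve
    rabbits += d_rabbits * dt
    foxes += d_foxes * dt
```

The populations orbit the sustainable equilibrium (here 50 rabbits and 50 foxes) instead of settling on it – the boom-bust cycle an Evolution Czar would smooth out.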

Some scientists tried to create group selection under laboratory conditions. They divided some insects into subpopulations, then killed off any subpopulation whose numbers got too high, and “promoted” any subpopulation that kept its numbers low to better conditions. They hoped the insects would evolve to naturally limit their family size in order to keep their subpopulation alive. Instead, the insects became cannibals: they ate other insects’ children so they could have more of their own without the total population going up. In retrospect, this makes perfect sense; an insect with the behavioral program “have many children, and also kill other insects’ children” will have its genes better represented in the next generation than an insect with the program “have few children”.

But sometimes evolution appears to solve group selection problems. What about multicellular life? Stick some cells together in a resource-plentiful environment, and they’ll naturally do the evolutionary competition thing of eating resources as quickly as possible to churn out as many copies of themselves as possible. If you were expecting these cells to form a unitary organism where individual cells do things like become heart cells and just stay in place beating rhythmically, you would call the expected normal behavior “cancer” and be against it. Your opposition would be on firm group selectionist grounds: if any cell becomes cancer, it and its descendants will eventually overwhelm everything, and the organism (including all cells within it, including the cancer cells) will die. So for the good of the group, none of the cells should become cancerous.

The first step in evolution’s solution is giving all cells the same genome; this mostly eliminates the need to compete to give their genes to the next generation. But this solution isn’t perfect; cells can get mutations in the normal course of dividing and doing bodily functions. So it employs a host of other tricks: genetic programs telling cells to self-destruct if they get too cancer-adjacent, an immune system that hunts down and destroys cancer cells, or growing old and dying (this last one isn’t usually thought of as a “trick”, but it absolutely is: if you arrange for a cell line to lose a little information during each mitosis, so that it degrades to the point of gobbledygook after X divisions, this means cancer cells that divide constantly will die very quickly, but normal cells dividing on an approved schedule will last for decades).

Why can evolution “develop tricks” to prevent cancer, but not to prevent foxes from overbreeding, or aliens from losing their solar flare shields? Group selection works when the group itself has a shared genetic code (or other analogous ruleset) that can evolve. It doesn’t work if you expect it to directly change the genetic code of each individual to cooperate more.

When we think of cancer, we are at risk of conflating two genetic codes: the shared genetic code of the multicellular organism, and the genetic code of each cell within the organism. Usually (when there are no mutations in cell divisions) these are the same. Once individual cells within the organism start mutating, they become different. Evolution will select for cancer in changes to individual cells’ genomes over an organism’s lifetime, but select against it in changes to the overarching genome over the lifetime of the species (ie you should expect all the genes you inherited from your parents to be selected against cancer, and all the mutations in individual cells you’ve gotten since then to be selected for cancer).

The fox population has no equivalent of the overarching genome; there is no set of rules that govern the behavior of every fox. So foxes can’t undergo group selection to prevent overpopulation (there are some more complicated dynamics that might still be able to rescue the foxes in some situations, but they’re not relevant to the simple model we’re looking at).

In other words, group selection can happen in a two-layer hierarchy of nested evolutionary systems when the outer system (eg multicellular humans) includes rules that the inner system (eg human cells) have to follow, and where the fitness of the evolving-entities in the outer system depends on some characteristics of the evolving-entities in the inner system (eg humans are higher-fitness if their cells do not become cancerous). The evolution of the outer layer includes evolution over rulesets, and eventually evolves good strong rulesets that tell the inner-layer evolving entities how to behave, which can include group selection (eg humans evolve a genetic code that includes a rule “individual cells inside of me should not get cancer” and mechanisms for enforcing this rule).
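
As a sketch of that definition in code – with a deliberately cartoonish inner layer, and an outer-layer genome that is nothing but a single “policing strength” number standing in for anti-cancer machinery; none of the numbers mean anything biologically:

```python
import random

# Inner layer: an organism's cells, which can mutate into fast-replicating
# cheaters. Outer layer: organisms whose shared genome sets how aggressively
# cheater cells are caught at birth.

def cheater_fraction(policing, divisions=200):
    cells = ["normal"] * 10
    for _ in range(divisions):
        parent = random.choice(cells)
        cheats = parent == "cheater" or random.random() < 0.02  # inherit or mutate
        if cheats and random.random() < policing:
            continue  # the organism's ruleset kills the cheater at birth
        cells.append("cheater" if cheats else "normal")
        if parent == "cheater":
            cells.append("cheater")  # surviving cheaters replicate twice as fast
    return cells.count("cheater") / len(cells)

def outer_layer_evolution(pop=30, generations=40):
    genomes = [random.uniform(0.0, 0.2) for _ in range(pop)]  # weak policing at first
    for _ in range(generations):
        # Outer-layer selection: an organism's fitness is the share of its
        # body not overrun by cheaters.
        weights = [1.0 - cheater_fraction(g) for g in genomes]
        genomes = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))  # mutate the ruleset
                   for g in random.choices(genomes, weights=weights, k=pop)]
    return sum(genomes) / pop

print(f"average policing strength after outer-layer selection: "
      f"{outer_layer_evolution():.2f}")
```

The inner layer never stops producing cheaters; what evolves is the outer-layer rule for keeping them in check, which typically climbs well above its starting range.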

You can find these kinds of two-layer evolutionary systems everywhere. For example, “cultural evolution” is a two-layer evolutionary system. In the hypothetical state of nature, there’s unrestricted competition – people steal from and murder each other, and only the strongest survive. After they form groups, the groups compete with each other, and groups that develop rulesets that prevent theft and murder (eg legal codes, religions, mores) tend to win those competitions. Once again, the outer layer (competition between cultures) evolves groups that successfully constrain the inner layer (competition between individuals). Species don’t have a czar who restrains internal competition in the interest of keeping the group strong, but some human cultures do (eg Russia).

Or what about market economics? The outer layer is companies, the inner layer is individuals. Maybe the individuals are workers – each worker would selfishly be best off if they spent the day watching YouTube videos and pushed the hard work onto someone else. Or maybe they’re executives – each individual executive would selfishly be best off if they spent their energy on office politics, trying to flatter and network with whoever was most likely to promote them. But if all the employees loaf off and all the executives focus on office politics, the company won’t make products, and competitors will eat their lunch. So someone – maybe the founder/CEO – comes up with a ruleset to incentivize good work, probably some kind of performance review system where people who do good work get promoted and people who do bad work get fired. The outer-layer competition between companies will select for corporations with the best rulesets; over time, companies’ internal politics should get better at promoting the kind of cooperation necessary to succeed.

How do these systems replicate multicellular life’s success without being literal entities with literal DNA having literal sex? They all involve a shared ruleset and a way of punishing rulebreakers, which together make it in each individual’s short-term interest to follow the ruleset that leads to long-term success. Countries can do that (follow the law or we’ll jail you), companies can do that (follow our policies or we’ll fire you), even multicellular life can sort of do that (don’t become cancer, or immune cells will kill you). When there’s nothing like that (like the overly-fast-breeding foxes), evolution fails at group selection problems. When there is something like that, it has a chance. When there’s something like that, and the thing like that is itself evolving (either because it’s encoded in literal DNA, or because it’s encoded in things like company policies that determine whether a company goes out of business or becomes a model for others), then it can reach a point where it solves group selection problems very effectively.

In the esoteric teachings, the inner layer of two-layer evolutionary systems is represented by the Goddess of Cancer, and the outer layer by the Goddess of Everything Else. In each part of the poem, the Goddess of Cancer orders the evolving-entities to compete, but the Goddess of Everything Else recasts it as a two-layer competition where cooperation on the internal layer helps win the competition on the external layer. He who has ears to hear, let him listen.

III.

Why the digression? Because slack is a group selection problem. A species that gave itself slack in its evolutionary competition would do better than one that didn’t – for example, the eyeless aliens would evolve eyes and get a big fitness boost. But no individual can unilaterally choose to compete less intensely; if it did, it would be outcompeted and die. So one-layer evolution will fail at this problem the same way it fails all group selection problems, but two-layer systems will have a chance to escape the trap.

The multicellular life example above is a special case where you want 100% coordination and 0% competition. I framed the other examples the same way – countries do best when their citizens avoid all competition and work together for the common good, companies do best when their executives avoid self-aggrandizing office politics and focus on product quality. But as we saw above, some systems do best somewhere in the middle, where there’s some competition but also some slack.

For example, consider a researcher facing their own version of the eyeless aliens’ dilemma. They can keep going with business as normal – publishing trendy but ultimately useless papers that nobody will remember in ten years. Or they can work on Research Program Part 1, which might lead to Research Program Part 2, which might lead to Research Program Part 3, which might lead to a ground-breaking insight. If their jobs are up for review every year, and a year from now the business-as-normal researcher will have five trendy papers, and the groundbreaking-insight researcher will be halfway through Research Program Part 1, then the business-as-normal researcher will outcompete the groundbreaking-insight researcher; as the saying goes, “publish or perish”. Without slack, no researcher can unilaterally escape the system; their best option will always be to continue business as usual.

But group selection makes the situation less hopeless. Universities have long time-horizons and good incentives; they want to get famous for producing excellent research. Universities have rulesets that bind their individual researchers, for example “after a while good researchers get tenure”. And since universities compete with each other, each is incentivized to come up with the ruleset that maximizes long-term researcher productivity. So if tenure really does work better than constant vicious competition, then (absent the usual culprits like resistance-to-change, weird signaling equilibria, politics, etc) we should expect universities to converge on a tenure system in order to produce the best work. In fact, we should expect universities to evolve a really impressive ruleset for optimizing researcher incentives, just as impressive as the clever mechanisms the human body uses to prevent cancer (since this seems a bit optimistic, I assume the usual culprits are not absent).

The same is true for grant-writing; naively you would want some competition to make sure that only the best grant proposals get funded, but too much competition seems to stifle original research, so much so that some funders are throwing out the whole process and selecting grants by lottery, and others are running grants you can apply for in a half-hour and hear back about two days later. If there’s a feedback mechanism – if these different rulesets produce different-quality research, and grant programs that produce higher-quality research are more likely to get funded in the future – then the rulesets for grants will gradually evolve, and the competition for grants will take place in an environment with whatever the right evolutionary parameters for evolving good research are.

I don’t want to say these things will definitely happen – you can read Inadequate Equilibria for an idea of why not. But they might. The evolutionary dynamics which would normally prevent them can be overcome. Two-layer evolutionary systems can produce their own slack, if having slack would be a good idea.

IV.

That was a lot of paragraphs, and a lot of them started with “imagine a hypothetical situation where…”. Let’s look deeper into cases where an understanding of slack can inform how we think about real-world phenomena. Seven examples:

1. Monopolies. Not the kind that survive off overregulation and patents, the kind that survive by being big enough to crush competitors. These are predators that exploit low-slack environments. If Boeing has a monopoly on building passenger planes, and is exploiting that by making shoddy products and overcharging consumers, then that means anyone else who built a giant airplane factory could make better products at a lower price, capture the whole airplane market, and become a zillionaire. Why don’t they? Slack. In terms of those adaptive fitness landscapes, in between your current position (average Joe) and a much better position at the bottom of a deep pit (you own a giant airplane factory and are a zillionaire), there’s a very big hill you have to climb – the part where you build Giant Airplane Factory Part 1, Giant Airplane Factory Part 2, etc. At each point in this hill, you are worse off than somebody who was not building an as-yet-unprofitable giant airplane factory. If you have infinite slack (maybe you are Jeff Bezos, have unlimited money, and will never go bankrupt no matter how much time and cost it takes before you start earning profits) you’re fine. If you have more limited slack, your slack will run out and you’ll be outcompeted before you make it to the greater-fitness deep pit.

Real monopolies are more complicated than this, because Boeing can shape up and cut prices when you’re halfway to building your giant airplane factory, thus removing your incentive. Or they can do actually shady stuff. But none of this would matter if you already had your giant airplane factory fully built and ready to go – at worst, you and Boeing would then be in a fair fight. Everything Boeing does to try to prevent you from building that factory is exploiting your slacklessness and trying to increase the height of that hill you have to climb before the really deep pit.

(Peter Thiel inverts the landscape metaphor and calls the hill a “moat”, but he’s getting at the same concept).

2. Tariffs. Same story. Here’s the way I understand the history of the international auto industry – anyone who knows more can correct me if I’m wrong. Automobiles were invented in the early 20th century. Several Western countries developed homegrown auto industries more or less simultaneously, with the most impressive being Henry Ford’s work on mass production in the US. Post-WWII Japan realized that its own auto industry would never be able to compete with more established Western companies, so it placed high tariffs on foreign cars, giving local companies like Nissan and Toyota a chance to get their act together. These companies, especially Toyota, invented a new form of auto production which was actually much more efficient than the usual American methods, and were eventually able to hold their own. They started exporting cars to the US; although American tariffs put them at a disadvantage, they were so much better than the American cars of the time that consumers preferred them anyway. After decades of losing out, the American companies adopted a more Japanese ethos, and were eventually able to compete on a level playing field again.

This is a story of things gone surprisingly right – Americans and Japanese alike were able to get excellent inexpensive cars. Two things had to happen for it to work. First, Japan had to have high enough tariffs to give their companies some slack – to let them develop their own homegrown methods from scratch without being immediately outcompeted by temporarily-superior American competitors. Second, America had to have low enough tariffs that eventually-superior Japanese companies could outcompete American automakers, and Japan’s fitness-improving innovations could spread.

From the perspective of a Toyota manager, this is analogous to the eyeless alien story. You start with some good-enough standard (blind animals, American car companies). You want to evolve a superior end product (eye-having animals, Toyota). The intermediate steps (an animal with only Eye Part 1, a kind of crappy car company that stumbles over itself trying out new things) are less fit than the good-enough standard. Only when the inferior intermediate steps are protected from competition (through evolutionary randomness, through tariffs) can the superior end product come into existence. But you want to keep enough competition that the superior end product can use its superiority to spread (there is enough evolutionary competition that having eyes reaches fixation, there is enough free trade that Americans preferentially buy Toyota and US car companies have to adopt its policies).

From the perspective of an economic historian, maybe it’s a group selection story. The various stakeholders in the US auto industry – Ford, GM, suppliers, the government, labor, customers – competed with each other in a certain way and struck some compromise. The various stakeholders in the Japanese auto industry did the same. For some reason the American compromise worked worse than the Japanese one – I’ve heard stories about how US companies were more willing to defraud consumers for short-term profit, how US labor unions were more willing to demand concessions even at the cost of company efficiency, how regulators and executives were in bed with each other to the detriment of the product, etc. Every US interest group was acting in its own short-term self-interest, but the Japanese industry-as-a-whole outcompeted the American one and the Americans had to adjust.

3. Monopolies, Part II. Traditionally, monopolies have been among the most successful R&D centers. The most famous example is Xerox; it had a monopoly on photocopiers for a few decades before losing an anti-trust suit in the late 1970s; during that period, its PARC R&D program invented “laser printing, Ethernet, the modern personal computer, graphical user interface (GUI) and desktop paradigm, object-oriented programming, [and] the mouse”. The second most famous example is Bell Labs, which invented “radio astronomy, the transistor, the laser, the photovoltaic cell, the charge-coupled device, information theory, the Unix operating system, and the programming languages B, C, C++, and S” before the government broke up its parent company AT&T. Google seems to be trying something similar, though it’s too soon to judge their outcomes.

These successes make sense. Research and development is a long-term gamble. Devoting more money to R&D decreases your near-term profits, but (hopefully) increases your future profits. Freed from competition, monopolies have limitless slack, and can afford to invest in projects that won’t pay off for ten or twenty years. This is part of Peter Thiel’s defense of monopolies in Zero To One.

An administrator tasked with advancing technology might be tempted to encourage monopolies in order to get more research done. But monopolies can also be stagnant and resistant to change; it’s probably not a coincidence that Xerox wasn’t the first company to bring the personal computer to market, and ended up irrelevant to the computing revolution. Like the eyeless aliens, who will not evolve eyes in conditions of perfect competition or perfect lack of competition, probably all you can do here is strike a balance. Some Communist countries tried the extreme solution – one state-supported monopoly per industry – and it failed the test of group selection. I don’t know enough to have an opinion on whether countries with strong antitrust eventually outcompete those with weaker antitrust or vice versa.

4. Strategy Games. I like the strategy game Civilization, where you play as a group of primitives setting out to found an empire. You build cities and infrastructure, research technologies, and fight wars. Your world is filled with several (usually 2 to 7) other civilizations trying to do the same.

Just like in the real world, civilizations must decide between Guns and Butter. The Civ version of Guns is called the Axe Rush. You immediately devote all your research to discovering how to make really good axes, all your industry to manufacturing those axes, and all your population to wielding those axes. Then you go and hack everyone else to pieces while they’re still futzing about trying to invent pottery or something.

The Civ version of Butter is called Build. You devote all your research, industry, and populace to laying the foundations of a balanced economy and culture. You invent pottery and weaving and stuff like that. Soon you have a thriving trade network and a strong philosophical tradition. Eventually you can field larger and more advanced armies than your neighbors, and leverage the advantage into even more prosperity, or into military conquest.

Consider a very simple scenario: a map of Eurasia with two civilizations, Rome and China.

If both choose Axe Rush, then whoever Axe Rushes better wins.

If both choose Build, then whoever Builds better wins.

What if Rome chooses Axe Rush, and China chooses Build?

Then it depends on their distance! If it’s a very small map and they start very close together, Rome will probably overwhelm the Chinese before Build starts paying off. But if it’s a very big map, by the time Roman Axemen trek all the way to China, China will have Built high walls, discovered longbows and other defensive technologies, and generally become too strong for axes to defeat. Then they can crush the Romans – who are still just axe-wielding primitives – at their leisure.

Consider a more complicated scenario. You have a map of Earth. The Old World contains Rome and China. The New World contains Aztecs. Rome and China are very close to each other. Now what happens?

Rome and China spend the Stone, Bronze, and Iron Ages hacking each other to bits. Aztecs spend those Ages building cities, researching technologies, and building unique Wonders of the World that provide powerful bonuses. In 1492, they discover Galleons and start crossing the ocean. The powerful and advanced Aztec empire crushes the exhausted axe-wielding Romans and Chinese.

This is another story about slack. The Aztecs had it – they were under no competitive pressure to do things that paid off next turn. The Romans and Chinese didn’t – they had to be at the top of their game every single turn, or their neighbor would conquer them. If there was an option that made you 10% weaker next turn in exchange for making you 100% stronger ten turns down the line, the Aztecs could take it without a second thought; the Romans and Chinese would probably have to pass.

Okay, more complicated Civilization scenario. This time there are two Old World civs, Rome and China, and two New World civs, Aztecs and Inca. The map is stretched a little bit so that all four civilizations have the same amount of natural territory. All four players understand the map layout and can communicate with each other. What happens?

Now it’s a group selection problem. A skillful Rome player will private message the China player and explain all of this to him. She’ll remind him that if one hemisphere spends the whole Stone Age fighting, and the other spends it building, the builders will win. She might tell him that she knows the Aztec and Inca players, they’re smart, and they’re going to be discussing the same considerations. So it would benefit both Rome and China to sign a peace treaty dividing the Old World in two, stick to their own side, and Build. If both sides cooperate, they’ll both Build strong empires capable of matching the New World players. If one side cooperates and the other defects, it will easily steamroll over its unprepared opponent and conquer the whole Old World. If both sides defect, they’ll hack each other to death with axes and be easy prey for the New Worlders.
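
To make the structure explicit, here’s the scenario as a two-player game. The ordinal payoffs are my invention; higher is better, and each entry is (Rome, China):

```python
# Without the outer layer, this is a textbook prisoner's dilemma: whatever
# China does, Rome scores higher by choosing Axe Rush, and vice versa.
payoffs = {
    ("build", "build"): (3, 3),  # two strong empires, ready to face the New World
    ("build", "axe"):   (0, 4),  # China steamrolls an unprepared Rome
    ("axe",   "build"): (4, 0),  # Rome steamrolls an unprepared China
    ("axe",   "axe"):   (1, 1),  # mutual exhaustion: easy prey for the Aztecs
}

for china in ("build", "axe"):
    rome_best = max(("build", "axe"), key=lambda rome: payoffs[(rome, china)][0])
    print(f"if China plays {china}, Rome's best response is {rome_best}")
```

One-layer logic says both sides defect and hack away; it’s the outer-layer competition against the New World that makes honoring the treaty worth it.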

This might be true in Civilization games, but real-world civilizations are more complicated. In The Third Man, Orson Welles’ character famously puts it this way (a speech Welles added to Graham Greene’s screenplay):

In Italy, for thirty years under the Borgias, they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland, they had brotherly love, they had five hundred years of democracy and peace – and what did that produce? The cuckoo clock.

So maybe a little bit of internal conflict is good, to keep you honest. Too much conflict, and you tear yourselves apart and are easy prey for outsiders. Too little conflict, and you invent the cuckoo clock and nothing else. The continent that conquers the world will have enough pressure that its people want to innovate, and enough slack that they’re able to.

This is total ungrounded amateur historical speculation, but when I hear that I think of the Classical world. We can imagine it as divided into a certain number of “theaters of civilization” – Greece, Mesopotamia, Egypt, Persia, India, Scythia, etc. Each theater had its own rules governing average state size, the rules of engagement between states, how often bigger states conquered smaller states, how often ideas spread between states of the same size, etc. Some of those theaters were intensely competitive: Egypt was a nice straight line, very suited to centralized rule. Others had more slack: it was really hard to take over all of Greece; even the Spartans didn’t manage. Each theater conducted its own “evolution” in its own way – Egypt was ruled by a single Pharaoh without much competition, Scythia was constant warfare of all against all, Greece was isolated city-states that fought each other sometimes but also had enough slack to develop philosophy and science. Each of those systems did their own thing for a while, until finally one of them produced something perfect: 4th century BC Macedonia. Then it went out and conquered everything.

If Welles’ speech is right, the point isn’t to find the ruleset that promotes 100% cooperation. It’s to find the ruleset that promotes an evolutionary system that makes your group the strongest. Usually this involves some amount of competition – in order to select for stronger organisms – but also some amount of slack – to let organisms develop complicated strategies that can make them stronger. Despite the earlier description, this isn’t necessarily a slider between 0% competition and 100% competition. It could be much more complicated – maybe alternating high-slack vs. low-slack periods, or many semi-isolated populations with a small chance of interaction each generation, or alternation between periods of isolation and periods of churning.

In a full two-layer evolution, you would let the systems evolve until they reached the best parameters. Here we can’t do that – Greece has however many mountains it has; its success does not cause the rest of the world to grow more mountains. Still, we randomly started with enough different groups that we got to learn something interesting.

(I can’t emphasize enough how ungrounded this historical speculation is. Please don’t try to evolve Alexander the Great in your basement and then get angry at me when it doesn’t work)

5. The Long-Term Stock Exchange. Actually, all stock exchanges are about slack. Imagine you are a brilliant inventor who, given $10 million and ten years, could invent fusion power. But in fact you have $10 and need work tomorrow or you will starve. Given those constraints, maybe you could start, I don’t know, a lemonade stand.

You’re in the same position as the animal trying to evolve an eye – you could create something very high-utility, if only you had enough slack to make it happen. But by default, the inventor working on fusion power starves to death ten days from now (or at least makes less money than his counterpart who ran the lemonade stand), the same way the animal who evolves Eye Part 1 gets outcompeted by other animals who didn’t and dies out.

You need slack. In the evolution example, animals usually stumble across slack randomly. You too might stumble across slack randomly – maybe it so happens that you are independently wealthy, or won the lottery, or something.

More likely, you use the investment system. You ask rich people to give you $10 million for ten years so you can invent fusion; once you do, you’ll make trillions of dollars and share some of it with them.

This is a great system. There’s no evolutionary equivalent. An animal can’t pitch Darwin on its three-step plan to evolve eyes and get free food and mating opportunities to make it happen. Wall Street is a giant multi-trillion dollar time machine funneling future profits back into the past, and that gives people the slack they need to make the future profits happen at all.

But the Long-Term Stock Exchange is especially about slack. It is a new exchange (approved by the SEC last year) which has complicated rules about who can list with it. Investors will get extra clout by agreeing to hold stocks for a long time; executives will get incentivized to do well in the far future instead of at the next quarterly earnings report. It’s making a deliberate choice to give companies more slack than the regular system and see what they do with it. I don’t know enough about investing to have an opinion, except that I appreciate the experiment. Presumably its companies will do better/worse than companies on the regular stock exchange, that will cause companies to flock toward/away from it, and we’ll learn that its new ruleset is better/worse at evolving good companies through competition than the regular stock exchange’s ruleset.

6. That Time Ayn Rand Destroyed Sears. Or at least that’s how Michael Rozworski and Leigh Phillips describe Eddie Lampert’s corporate reorganization in How Ayn Rand Destroyed Sears, which I recommend. Lampert was a Sears CEO who figured – since free-market competitive economies outcompete top-down economies, shouldn’t free-market competitive companies outcompete top-down companies? He reorganized Sears as a set of competing departments that traded with each other on normal free-market principles; if the Product Department wanted its products marketed, it would have to pay the Marketing Department. This worked really badly, and was one of the main contributors to Sears’ implosion.

I don’t have a great understanding of exactly why Lampert’s Sears lost to other companies even though capitalist economies beat socialist ones; Rozworski and Phillips’ People’s Republic Of Wal-Mart, which looks into this question, is somewhere on my reading list. But even without complete understanding, we can use group selection to evolve the right parameters. Imagine an economy with several businesses. One is a straw-man communist collective, where every worker gets paid the same regardless of output and there are no promotions (0% competition, 100% cooperation). Another is Lampert’s Sears (100% competition, 0% cooperation). Others are normal businesses, where employees mostly work together for the good of the company but also compete for promotions (X% competition, Y% cooperation). Presumably the normal business outcompetes both Lampert and the commies, and we sigh with relief and continue having normal businesses. And if some of the normal businesses outcompete others, we’ve learned something about the best values of X and Y.

7. Ideas. These are in constant evolutionary competition – this is the insight behind memetics. The memetic equivalent of slack is inferential range, aka “willingness to entertain and explore ideas before deciding that they are wrong”.

Inferential distance is the number of steps it takes to make someone understand and accept a certain idea. Sometimes the inferential distance can be very large. Imagine trying to convince a 12th century monk that there was no historical Exodus from Egypt. You’re in the middle of going over archaeological evidence when he objects that the Bible says there was. You respond that the Bible is false and there’s no God. He says that doesn’t make sense, how would life have originated? You say it evolved from single-celled organisms. He asks how evolution, which seems to be a change in animals’ accidents, could ever affect their essences and change them into an entirely new species. You say that the whole scholastic worldview is wrong, there’s no such thing as accidents and essences, it’s just atoms and empty space. He asks how you ground morality if not in a striving to approximate the ideal embodied by your essence, you say… well, it doesn’t matter what you say, because you were trying to convince him that some very specific people didn’t leave Egypt one time, and now you’ve got to ground morality.

Another way of thinking about this is that there are two self-consistent equilibria. There’s your equilibrium (no Exodus, atheism, evolution, atomism, moral nonrealism), and the monk’s equilibrium (yes Exodus, theism, creationism, scholasticism, teleology), and before you can make the monk budge on any of those points, you have to convince him of all of them.

So the question becomes – how much patience does this monk have? If you tell him there’s no God, does he say “I look forward to the several years of careful study of your scientific and philosophical theories that it will take for that statement not to seem obviously wrong and contradicted by every other feature of the world”? Or does he say “KILL THE UNBELIEVER”? This is inferential range.

Aristotle says that the mark of an educated man is to be able to entertain an idea without accepting it. Inferential range explains why. The monk certainly shouldn’t immediately accept your claim, when he has countless pieces of evidence for the existence of God, from the spectacular faith healings he has witnessed (“look, there’s this thing called psychosomatic illness, and it’s really susceptible to this other thing called the placebo effect…”) to Constantine’s victory at the Milvian Bridge despite being heavily outnumbered (“look, I’m not a classical scholar, but some people are just really good generals and get lucky, and sometimes it happens the day after they have weird dreams, I think there’s enough good evidence the other way that this is not the sort of thing you should center your worldview around”). But if he’s willing to entertain your claim long enough to hear your arguments one by one, eventually he can reach the same self-consistent equilibrium you’re at and judge for himself.

Nowadays we don’t burn people at the stake. But we do make fun of them, or flame them, or block them, or wander off, or otherwise not listen with an open mind to ideas that strike us at first as stupid. This is another case where we have to balance competition vs. slack. With perfect competition, the monk instantly rejects our “no Exodus” idea as less true (less memetically fit) than its competitors, and it has no chance to grow on him. With zero competition, the monk doesn’t believe anything at all, or spends hours patiently listening to someone explain their world-is-flat theory. Good epistemics require a balance between being willing to choose better ideas over worse ones, and open-mindedly hearing the worse ones out in case they grow on you.

(Thomas Kuhn points out that early versions of the heliocentric model were much worse than the geocentric model, that astronomers only kept working on them out of a sort of weird curiosity, and that it took decades before they could clearly hold their own against geocentrism in a debate).

Different people strike a different balance in this space, and those different people succeed or fail based on their own epistemic ruleset. Someone who’s completely closed-minded and dogmatic probably won’t succeed in business, or science, or the military, or any other career (except maybe politics). But someone who’s so pathologically open-minded that they listen to everything and refuse to prioritize what is or isn’t worth their time will also fail. We take notice of who succeeds or fails and change our behavior accordingly.

Maybe there’s even a third layer of selection; maybe different communities are more or less willing to tolerate open-minded vs. close-minded people. The Slate Star Codex community has really different epistemic norms from the Catholic Church or Infowars listeners; these are evolutionary parameters that determine which ideas are more memetically fit. If our epistemics make us more likely to converge on useful (not necessarily true!) ideas, we will succeed and our epistemic norms will catch on. Francis Bacon was just some guy with really good epistemic norms, and now everybody who wants to be taken seriously has to use his norms instead of whatever they were doing before. Come up with the right evolutionary parameters, and that could be you!