Selecting Rationalist Groups

Previously in series: Purchase Fuzzies and Utilons Separately
Followup to: Conjuring an Evolution To Serve You

GreyThumb.blog offered an interesting comparison between poor animal-breeding practices and the fall of Enron, which I previously posted on in some detail. The essential theme was that individual selection on chickens, breeding in each generation from the hen who laid the most eggs, produced highly competitive chickens—the most dominant hens, the ones that pecked their way to the top of the pecking order at the expense of the other chickens. Chickens subjected to this individual selection for egg-laying prowess needed their beaks clipped, or housing in individual cages, or they would peck each other to death.

Which is to say: individual selection is selecting on the wrong criterion, because what the farmer actually wants is high egg production from groups of chickens.

While group selection is nearly impossible in ordinary biology, it is easy to impose in the laboratory: and breeding the best groups, rather than the best individuals, increased average days of hen survival from 160 to 348, and egg mass per bird from 5.3 to 13.3 kg.

The analogy being to the way that Enron evaluated its employees every year, fired the bottom 10%, and gave the top individual performers huge raises and bonuses. Jeff Skilling fancied himself as exploiting the wondrous power of evolution, it seems.

If you look over my accumulated essays, you will observe that the art contained therein is almost entirely individual in nature… for around the same reason that it all focuses on confronting impossibly tricky questions: That’s what I was doing when I thought up all this stuff, and for the most part I worked in solitude. But this is not inherent in the Art, not reflective of what a true martial art of rationality would be like if many people had contributed to its development along many facets.

Case in point: At the recent LW/OB meetup, we played Paranoid Debating, a game that tests group rationality. As is only appropriate, this game was not the invention of any single person, but was collectively thought up in a series of suggestions by Nick Bostrom, Black Belt Bayesian, Tom McCabe, and steven0461.

In the game’s final form, Robin Gane-McCalla asked us questions like “How many Rhode Islands would fit into Alaska?” and a group of (in this case) four rationalists tried to pool their knowledge and figure out the answer… except that before the round started, we each drew facedown from a set of four cards, containing one spade card and one red card. Whoever drew the red card got the job of trying to mislead the group. Whoever drew the spade showed the card and became the spokesperson, who had to select the final answer. It was interesting, trying to play this game, and realizing how little I’d practiced basic skills like trying to measure the appropriateness of another’s confidence or figure out who was lying.

A bit further along, at the suggestion of Steve Rayhawk, and slightly simplified by myself, we named 60% confidence intervals for the quantity, with lower and upper bounds; Steve fit a Cauchy distribution to the interval (“because it has a fatter tail than a Gaussian”) and we were scored according to the log of our probability density on the true answer, except for the red-card drawer, who got the negative of this number.
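
To make the scoring rule concrete, here is a minimal sketch in Python. The exact fitting procedure isn’t spelled out above, so this assumes a Cauchy distribution centered on the interval’s midpoint, with the scale solved so the stated interval holds exactly 60% of the probability mass; the function name and the example numbers are illustrative, not anything we actually used.

```python
import math

def paranoid_debating_score(lower, upper, truth, red_card=False):
    """Score a 60% confidence interval [lower, upper] against the true answer.

    Fits a Cauchy distribution centered on the interval's midpoint, with the
    scale chosen so the interval contains exactly 60% of the probability mass,
    then scores the log probability density at the true answer. The red-card
    (deceiver) player is scored as the negative of this number.
    """
    location = (lower + upper) / 2.0
    half_width = (upper - lower) / 2.0
    # Cauchy CDF gives P(|X - location| <= h) = (2/pi) * atan(h / scale);
    # setting this equal to 0.60 yields scale = h / tan(0.3 * pi).
    scale = half_width / math.tan(0.3 * math.pi)
    z = (truth - location) / scale
    log_density = -math.log(math.pi * scale * (1.0 + z * z))
    return -log_density if red_card else log_density

# Illustrative numbers only: a group names the interval [200, 600],
# and the true answer turns out to be 425.
print(paranoid_debating_score(200, 600, 425))                 # honest players
print(paranoid_debating_score(200, 600, 425, red_card=True))  # red-card player
```

Note that the score rewards narrow intervals that contain the truth and punishes narrow intervals that miss it, while the fat Cauchy tail keeps a badly missed interval from being an unbounded catastrophe.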

The Paranoid Debating game worked surprisingly well—at least I had fun, despite somehow managing to draw the red card three out of four times. I can totally visualize doing this at some corporate training event or even at parties. The red player is technically acting as an individual and learning to practice deception, but perhaps practicing deception (in this controlled, ethically approved setting) might help you be a little less gullible in turn. As Zelazny observes, there is a difference between the arts of discovering lies and finding truth.

In a real institution… you would probably want to optimize less for fun, and more for work-relevance: something more like Black Belt Bayesian’s original suggestion of The Aumann Game, no red cards. But where both B3 and Tom McCabe originally thought in terms of scoring individuals, I would suggest forming people into groups and scoring the groups. An institution’s performance is the sum of its groups more directly than it is the sum of its individuals—though of course there are interactions between groups as well. Find people who, in general, seem to have a statistical tendency to belong to high-performing groups—these are the ones who contribute much to the group, who are persuasive with good arguments.
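
Scoring groups still gives you a handle on individuals. Here is a crude sketch of one way to operationalize “a statistical tendency to belong to high-performing groups”: assume, as a rough additive model, that a group’s score is approximately the sum of its members’ contributions, and estimate those contributions by least squares. The membership matrix and scores below are made up for illustration.

```python
import numpy as np

# Hypothetical data: each row is one session, each column one person;
# an entry is 1 if that person was in the group for that session.
membership = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 1, 1],
])
group_scores = np.array([8.0, 5.0, 9.0, 6.0, 7.0, 6.5])

# Least-squares estimate of each person's average contribution, under the
# crude additive assumption that a group's score is the sum of its members'.
contributions, *_ = np.linalg.lstsq(membership, group_scores, rcond=None)
for person, c in enumerate(contributions):
    print(f"person {person}: estimated contribution {c:+.2f}")
```

The additive assumption is obviously false in detail, since chemistry between particular people matters; but with enough sessions and shuffled group compositions, persistent high contributors should still stand out.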

I wonder if there are any hedge funds that practice “trio trading”, by analogy with pair programming?

Hal Finney called Aumann’s Agreement Theorem “the most interesting, surprising, and challenging result in the field of human bias: that mutually respectful, honest, and rational debaters cannot disagree on any factual matter once they know each other’s opinions”. It is not just my own essays that are skewed toward individual application; the whole trope of Traditional Rationality seems to me skewed the same way. It’s the individual heretic who is the hero, and Authority the untrustworthy villain whose main job is to put up just enough resistance to be properly defeated. Science is cast as a competition between theories in an arena with rules designed to let the strongest contender win. Of course, it may be that I am selective in my memory, and that if I went back and read my childhood books again, I would notice more on group tactics that originally slipped my attention… but really, Aumann’s Agreement Theorem doesn’t get enough attention.

Of course most Bayesian math is not widely known—the Agreement Theorem is no exception here. But even the intuitively obvious counterpart of the Agreement Theorem, the treatment of others’ beliefs as evidence, gets short shrift in Traditional Rationality. This may have something to do with Science developing in the midst of insanity and in defiance of Authority; but that is a historical fact about how Science developed. If the high performers of a rationality dojo need to practice the same sort of lonely dissent… well, that must not be a very effective rationality dojo.
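
The theorem’s full machinery aside, that intuitively obvious counterpart is easy to illustrate. Here is a toy sketch under the strong, purely illustrative assumption that each person’s log-odds estimate is the true log-odds plus independent, equal-variance noise; in that model, averaging in log-odds space is the natural pooled estimate, and down-weighting the other person corresponds to trusting their calibration less. All names here are my own for the sketch.

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def sigmoid(x):
    """Inverse of logit."""
    return 1.0 / (1.0 + math.exp(-x))

def pool_beliefs(p_mine, p_theirs, weight_theirs=1.0):
    """Combine two probability estimates by weighted averaging in log-odds.

    Toy model: each estimate is the true log-odds plus independent,
    equal-variance noise, so the straight average (weight_theirs=1.0)
    is the natural pooled estimate. Lower weight_theirs if you trust
    the other person's calibration less than your own.
    """
    pooled = (logit(p_mine) + weight_theirs * logit(p_theirs)) / (1.0 + weight_theirs)
    return sigmoid(pooled)

# If I say 70% and an equally calibrated peer says 40%, the mere fact of
# the disagreement should already pull my estimate down:
print(pool_beliefs(0.70, 0.40))   # ~0.55
```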

Part of the sequence The Craft and the Community

Next post: “Incremental Progress and the Valley”

Previous post: “Purchase Fuzzies and Utilons Separately”