On Dragon Army

Analysis Of: Dragon Army: Theory & Charter (30 Minute Read)

Epistemic Status: Varies all over the map from point to point

Length Status: In theory I suppose it could be longer

This is a long post responding to an almost-as-long (and several weeks and several controversy cycles old, because life comes at you fast on the internet) post, which includes extensive quoting from the original post and assumes you have already read the original. If you are not interested in a very long analysis of another person's proposal for a rationalist group house, given that life is short, you can (and probably should) safely skip this one.

Dragon Army is a crazy idea that just might work. It prob­a­bly won’t, but it might. It might work be­cause it be­lieves in some­thing that has not been tried, and there is a chance that those in­volved will ac­tu­ally try the thing and see what hap­pens.

Scott made an ob­ser­va­tion that the re­sponses to the Dragon Army pro­posal on Less Wrong were mostly con­struc­tive crit­i­cism, while the re­sponses on Tum­blr were mostly ex­pres­sions of hor­ror. That is ex­actly the re­sponse you would ex­pect from a pro­ject with real risks, but also real po­ten­tial benefits worth tak­ing the risks to get. This up­dates me strongly in fa­vor of go­ing for­ward with the pro­ject.

As one would ex­pect, the idea as laid out in the char­ter is far from perfect. There are many mod­ifi­ca­tions that need to be made, both that one could fore­see in ad­vance, and that one could not fore­see in ad­vance.

My ap­proach is go­ing to be to go through the post and com­ment on the com­po­nents of the pro­posal, then pull back and look at the big­ger pic­ture.

In part 1, Dun­can makes ar­gu­ments, then later in part 2 he says the fol­low­ing:

Ul­ti­mately, though, what mat­ters is not the prob­lems and solu­tions them­selves so much as the light they shine on my aes­thet­ics (since, in the ac­tual house, it’s those aes­thet­ics that will be used to re­solve epistemic grid­lock). In other words, it’s not so much those ar­gu­ments as it is the fact that Dun­can finds those ar­gu­ments com­pel­ling.

I agree in one par­tic­u­lar case that this is the im­por­tant (and wor­ri­some) thing, but mostly I dis­agree and think that we should be en­gag­ing with the ar­gu­ments them­selves. This could be be­cause I am as in­ter­ested in learn­ing about and dis­cussing gen­eral things us­ing the pro­posal as a tak­ing-off point, as I am in the pro­posal it­self. A lot of what Dun­can dis­cusses and en­dorses is the value of do­ing a thing at all even if it isn’t the best thing and I strongly agree with that – this is me go­ing out and en­gag­ing with this thing’s thing­ness, and do­ing a con­crete thing to it.

Pur­pose of post: Three­fold. First, a lot of ra­tio­nal­ists live in group houses, and I be­lieve I have some in­ter­est­ing mod­els and per­spec­tives, and I want to make my think­ing available to any­one else who’s in­ter­ested in skim­ming through it for Things To Steal. Se­cond, since my ini­tial pro­posal to found a house, I’ve no­ticed a sig­nifi­cant amount of well-mean­ing push­back and con­cern à la have you no­ticed the skulls? and it’s en­tirely un­fair for me to ex­pect that to stop un­less I make my skull-notic­ing ev­i­dent. Third, some nonzero num­ber of hu­mans are gonna need to sign the fi­nal ver­sion of this char­ter if the house is to come into ex­is­tence, and it has to be vie­w­able some­where. I figured the best place was some­where that im­par­tial clear thinkers could weigh in (flat­tery).

All of this is good, and responses definitely did not do enough looking for Things To Steal, so I'd encourage others to do that more. Duncan (the author) proposes a lot of concrete things and makes a lot of claims. You don't need to agree with all, most or even many of them to potentially find worthwhile ideas. Letting people know you've thought about all the things that can go wrong is also good, although actually thinking about those things is (ideally) more important. I worry from the interactions that Duncan is more concerned with showing that he has considered all the concerns than with the actual concerns, but at a sufficient level of rigor that algorithm, while not efficient, is still sufficient. And of course, I think Less Wrong was the right place to post this.

What is Dragon Army? It’s a high-com­mit­ment, high-stan­dards, high-in­vest­ment group house model with cen­tral­ized lead­er­ship and an up-or-out par­ti­ci­pa­tion norm, de­signed to a) im­prove its mem­bers and b) ac­tu­ally ac­com­plish medium-to-large scale tasks re­quiring long-term co­or­di­na­tion. Tongue-in-cheek referred to as the “fas­cist/​au­thor­i­tar­ian take on ra­tio­nal­ist hous­ing,” which has no doubt con­tributed to my be­ing vuln­er­a­ble to straw­man­ning but was nev­er­the­less the cor­rect joke to be mak­ing, lest peo­ple mi­s­un­der­stand what they were sign­ing up for. Aes­thet­i­cally mod­eled af­ter Dragon Army from En­der’s Game (not HPMOR), with a touch of Paper Street Soap Com­pany thrown in, with Dun­can Sa­bien in the role of En­der/​Tyler and Eli Tyre in the role of Bean/​The Nar­ra­tor.

I applaud Duncan's instinct to use the metaphors he actually believes apply to what he is doing, rather than the ones that would avoid scaring the living hell out of everyone. Fewer points for actually believing those metaphors without thinking there is a problem.

I have seen and heard ar­gu­ments against struc­tur­ing things as an army, or against struc­tur­ing things in an au­thor­i­tar­ian fash­ion. As I note in the next sec­tion, I think these are things to be cau­tious of but that are worth try­ing, and I do not find ei­ther of them es­pe­cially scary when they are based on free as­so­ci­a­tion. If our kind can’t co­op­er­ate enough to have a tem­po­rary vol­un­teer metaphoric army that does not shoot any­one and does not get shot at, then we re­ally can’t co­op­er­ate. En­der is a per­son you may wish to em­u­late, at least un­til some point in books that were writ­ten and hap­pen later, and may or may not ex­ist.

What should freak Duncan (and everyone else) out is the reference that people seem to be strangely glossing over, which is the Paper Street Soap Company. Fight Club is a great movie (and book), and if you haven't seen it yet you should go see it and/or read it, but – spoiler alert, guys! – Tyler Durden is a bad dude. Tyler Durden is completely insane. He is not a person you want to copy or emulate. I should not have to be saying this. This should be obvious. Seriously. The original author made this somewhat more explicit in the book, but the movie really should have been clear enough for everyone.

That does not mean that Tyler Durden did not have a worthwhile message for us hidden under all of that. Many (including villains) do, but no, Tyler is not the hero, and no, he does not belong to the Magneto List of Villains Who Are Right. You can notice that your life is ending one minute at a time, and that getting out there in the physical world and taking risks is good, and that the things you own can end up owning you, and even learn how to make soap. Fine.

You do not want to be try­ing to recre­ate some­thing called Pro­ject May­hem, un­less your Dragon Army is deep in­side en­emy ter­ri­tory and fight­ing an ac­tual war (in which case you prob­a­bly still don’t want to do that, but at least I see the at­trac­tion).

Also, if you want to show that you can del­e­gate and trust oth­ers, and you’re refer­ring to your sec­ond in com­mand as ‘The Nar­ra­tor’ I would sim­ply say “spoiler alert,” and ask you to pon­der that again for a bit.

The weird part is that the pro­posal here does not, to me, evoke the Paper Street Soap Com­pany at all, so what I am wor­ried about is why this metaphor ap­pealed to Dun­can more than any­thing else.

Why? Cur­rent group hous­ing/​at­tempts at group ra­tio­nal­ity and com­mu­nity-sup­ported lev­el­ing up seem to me to be fal­ling short in a num­ber of ways. First, there’s not enough stuff ac­tu­ally hap­pen­ing in them (i.e. to the ex­tent peo­ple are grow­ing and im­prov­ing and ac­com­plish­ing am­bi­tious pro­jects, it’s largely within their pro­fes­sional orgs or fueled by un­usu­ally agenty in­di­vi­d­u­als, and not by lev­er­ag­ing the low-hang­ing fruit available in our house en­vi­ron­ments). Se­cond, even the group houses seem to be plagued by the same sense of unan­chored aban­doned loneli­ness that’s hit­ting the ra­tio­nal­ist com­mu­nity speci­fi­cally and the mil­len­nial gen­er­a­tion more gen­er­ally. There are a bunch of com­peti­tors for “third,” but for now we can leave it at that.

Later in the post, Dun­can hits on what third is, and I think that third is su­per im­por­tant: Even if you think we’ve got a good ver­sion of the group house con­cept, we are do­ing far too much ex­ploita­tion of the group house con­cept, and not enough ex­plo­ra­tion. There is vari­a­tion on the lo­ca­tion, size and mix of peo­ple that com­pose a house, and some tin­ker­ing with a few other com­po­nents, but the ba­sic struc­tures re­main in­var­i­ant. The idea of a group house built around the group be­ing a group that does am­bi­tious things to­gether, and/​or op­er­at­ing with an au­thor­i­tar­ian struc­ture, has not been tried and been found want­ing. It has been found scary and difficult, and not been tried. Yes, I have heard oth­ers note that the in­ten­tion­ally named In­ten­tional Com­mu­nity Com­mu­nity did ban au­thor­i­tar­ian houses be­cause they find that they rarely work out, but they also all fo­cus on ‘sus­tain­abil­ity’ rather than try­ing to ac­com­plish big things in the world, so I am not overly wor­ried about that.

His sec­ond rea­son also seems im­por­tant. There is an epi­demic of loneli­ness hit­ting both our com­mu­nity and the en­tire world as well. Younger gen­er­a­tions may or may not have it worse, but if one can state with a straight face that the av­er­age Amer­i­can only has 2-3 close friends (whether or not that statis­tic is ac­cu­rate) there is a huge prob­lem. I have more than that, but not as many as I used to, or as many as I would like. If group houses are not gen­er­at­ing close friend­ships, that is very bad, and we should try and fix it, since this is an im­por­tant un­met need for many of us and they should be a very good place to meet that need.

His first rea­son I am torn about be­cause it is not ob­vi­ous that stuff should be ac­tu­ally hap­pen­ing in­side the houses as op­posed to the houses pro­vid­ing an in­fras­truc­ture for peo­ple who then cause things to hap­pen. Most im­por­tant things that hap­pen in the world hap­pen in pro­fes­sional or­ga­ni­za­tions or as the re­sult of un­usu­ally agenty in­di­vi­d­u­als. Houses could be very suc­cess­ful at caus­ing things to hap­pen with­out any highly visi­ble things hap­pen­ing within the houses. The most ob­vi­ous ways to do this are to sup­port the mechanisms Dun­can men­tions. One could provide sup­port for peo­ple to de­vote their en­er­gies to im­por­tant or­ga­ni­za­tions and pro­jects el­se­where, by let­ting peo­ple get their do­mes­tic needs met for less time and money, and by steer­ing them to the most im­por­tant places and pro­jects. One could also do other things that gen­er­ate more un­usu­ally agenty in­di­vi­d­u­als, or make those in­di­vi­d­u­als more effec­tive when they do agenty things (and/​or make them do even more agenty things), which in my read­ing is one of two main goals of Dragon Army, the other be­ing to in­crease con­nec­tion be­tween its in­hab­itants.

Dun­can’s claim here is that there are things that could be hap­pen­ing di­rectly in the houses that are not hap­pen­ing, and that those things rep­re­sent low-hang­ing fruit. This seems plau­si­ble, but it does not seem ob­vi­ous, nor does it seem ob­vi­ous what the low-hang­ing fruit would be. The rest of the post does go into de­tails, so judg­ment needs to be based on those de­tails.

Problem 1: Pendulums

This one’s first be­cause it in­forms and un­der­lies a lot of my other as­sump­tions. Essen­tially, the claim here is that most so­cial progress can be mod­eled as a pen­du­lum os­cillat­ing de­creas­ingly far from an ideal. The so­ciety is “stuck” at one point, re­al­izes that there’s some­thing wrong about that point (e.g. that maybe we shouldn’t be forc­ing peo­ple to live out their en­tire lives in mar­riages that they en­tered into with im­perfect in­for­ma­tion when they were like six­teen), and then moves to cor­rect that spe­cific prob­lem, of­ten break­ing some other Ch­ester­ton’s fence in the pro­cess.

For ex­am­ple, my ex­pe­rience leads me to put a lot of con­fi­dence be­hind the claim that we’ve traded “a lot of peo­ple trapped in mar­riages that are net bad for them” for “a lot of peo­ple who never reap the benefits of what would’ve been a strongly net-pos­i­tive mar­riage, be­cause it ended too eas­ily too early on.” The lat­ter prob­lem is clearly smaller, and is prob­a­bly a bet­ter prob­lem to have as an in­di­vi­d­ual, but it’s nev­er­the­less clear (to me, any­way) that the loos­en­ing of the ab­solute­ness of mar­riage had nega­tive effects in ad­di­tion to its pos­i­tive ones.

Pro­posed solu­tion: Rather than choos­ing be­tween ab­solutes, in­te­grate. For ex­am­ple, I have two close col­leagues/​al­lies who share mil­len­ni­als’ de­fault skep­ti­cism of lifelong mar­riage, but they also are skep­ti­cal that a com­mit­ment-free lifestyle is costlessly good. So they’ve de­cided to do hand­fast­ing, in which they’re fully com­mit­ted for a year and a day at a time, and there’s a known pe­riod of time for ask­ing the ques­tion “should we stick to­gether for an­other round?”

In this way, I posit, you can get the strengths of the old so­cially evolved norm which stood the test of time, while also avoid­ing the ma­jor­ity of its known failure modes. Sort of like build­ing a gate into the Ch­ester­ton’s fence, in­stead of knock­ing it down—do the old thing in time-boxed iter­a­tions with reg­u­lar strate­gic check-ins, rather than as­sum­ing you can in­vent a new thing from whole cloth.

Caveat/​skull: Of course, the as­sump­tion here is that the Old Way Of Do­ing Things is not a slip­pery slope trap, and that you can in fact avoid the failure modes sim­ply by try­ing. And there are plenty of ex­am­ples of that not work­ing, which is why Tak­ing Time-Boxed Ex­per­i­ments And Strate­gic Check-Ins Se­ri­ously is a must. In par­tic­u­lar, when at­tempt­ing to strike such a bal­ance, all par­ties must have com­mon knowl­edge agree­ment about which side of the ideal to err to­ward (e.g. in­no­cents in prison, or guilty par­ties walk­ing free?).

I think the pen­du­lum is a very bad model of so­cial progress. It seems pretty rare that we ex­ist (are stuck) at point A, then we try point B, and then we re­al­ize that our mis­take was that we swung a lit­tle too far but that a point we passed through was right all along. This is the Aris­to­tle mis­take of au­to­mat­i­cally prais­ing the mean, when there is no rea­son to think that your bounds are in any way rea­son­able, or that you are even think­ing about the right set of pos­si­ble rules, ac­tions or virtues. If any­thing, there is usu­ally rea­son to sus­pect oth­er­wise.

Even in ex­am­ples where you have a sort of ‘zero-sum’ de­ci­sion where policy needs to choose a point on a num­ber line, I think this is mostly wrong.

I guess there are some ex­am­ples of start­ing out with the equil­ibrium “Abor­tions for no one,” then mov­ing to the equil­ibrium “Abor­tions for ev­ery­one,” and then set­tling on the equil­ibrium “Abor­tions for some, mi­ni­a­ture Amer­i­can flags for oth­ers” and that be­ing the cor­rect an­swer (I am not mak­ing a claim of any kind of what the cor­rect an­swer is here). Call this Th­e­sis-An­tithe­sis-Syn­the­sis.

There are a lot more ex­am­ples of so­cial progress that go more like “Slaves for ev­ery­one” then get­ting to “Slaves for some” and fi­nally reach­ing “ac­tu­ally, you know what, slaves for no one, ever, and se­ri­ously do I even need to ex­plain this one?” Call this Th­e­sis-LessTh­e­sis-An­tithe­sis, and then Anti-Th­e­sis just wins and then we get progress.

There is also the mode where someone notices a real problem but then has a really, mindbogglingly bad idea (for example, Karl Marx). The idea is tried, and it turns out it is not only Not Progress but a huge disaster. Then, if you are paying attention, you abandon it and try something else, but you learn from what happened. Now you understand where some of the fences are and what they are for, which helps you come up with a new plan, but your default should absolutely not be "well, sure, that was a huge disaster so we should try some mixture of what we just did and the old way and that will totally be fine."

There is no rea­son to as­sume you were mov­ing in the cor­rect di­rec­tion, or even the cor­rect di­men­sion. Do not be fooled by the Over­ton Win­dow.

If any­thing, to the ex­tent that you must choose a point on the num­ber line, mov­ing from 0 to 1 and find­ing 1 to be worse is not a good rea­son to try 0.5 un­less your prior is very strong. It might well be a rea­son to try −0.5 or −1! Maybe you didn’t even re­al­ize that was an op­tion be­fore, or why you might want to do that.

Problem 2: The Unpleasant Valley

As far as I can tell, it’s pretty un­con­tro­ver­sial to claim that hu­mans are sys­tems with a lot of in­er­tia. Sta­tus quo bias is well re­searched, past be­hav­ior is the best pre­dic­tor of fu­ture be­hav­ior, most peo­ple fail at re­s­olu­tions, etc.

I have some un­qual­ified spec­u­la­tion re­gard­ing what’s go­ing on un­der the hood. For one, I sus­pect that you’ll of­ten find hu­mans be­hav­ing pretty much as an effort- and en­ergy-con­serv­ing al­gorithm would be­have. Peo­ple have op­ti­mized their most known and fa­mil­iar pro­cesses at least some­what, which means that it re­quires less oomph to just keep do­ing what you’re do­ing than to cob­ble to­gether a new sys­tem. For an­other, I think hy­per­bolic dis­count­ing gets way too lit­tle credit/​at­ten­tion, and is a ma­jor fac­tor in knock­ing peo­ple off the wagon when they’re try­ing to forego lo­cal be­hav­iors that are known to be in­trin­si­cally re­ward­ing for lo­cal be­hav­iors that add up to long-term cu­mu­la­tive gain.
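Duncan's point about hyperbolic discounting can be made concrete. A toy comparison (my parameter choices, purely for illustration, not anything from the charter) shows the key property: unlike exponential discounting, hyperbolic discounting reverses preferences as a reward gets close, which is exactly what knocks people off the wagon at the moment of temptation.

```python
import math

def exponential(value, delay, rate=0.1):
    """Exponential discounting: value * e^(-rate * delay)."""
    return value * math.exp(-rate * delay)

def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

small, large = 50, 100  # small reward now-ish vs. a large reward 5 days later

for d in (0, 10):
    print(f"delay {d:2d}: hyperbolic prefers small? "
          f"{hyperbolic(small, d) > hyperbolic(large, d + 5)}, "
          f"exponential prefers small? "
          f"{exponential(small, d) > exponential(large, d + 5)}")
```

Viewed from ten days out, both discounters prefer the large later reward; but when the small reward is immediate, the hyperbolic discounter grabs it. The exponential discounter never flips, which is why the reversal is the signature of hyperbolic discounting specifically.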

But in short, I think the pic­ture of “I’m go­ing to try some­thing new, eh?” of­ten looks like this:

… with an “un­pleas­ant valley” some time af­ter the start point. Think about the cold feet you get af­ter the “hon­ey­moon pe­riod” has worn off, or the de­sires and opinions of a mil­i­tary re­cruit in the sec­ond week of a six-week boot camp, or the frus­tra­tion that emerges two months into a new diet/​ex­er­cise regime, or your sec­ond year of be­ing forced to take pi­ano les­sons.

The prob­lem is, peo­ple never make it to the third year, where they’re ac­tu­ally good at pi­ano, and start reap­ing the benefits, and their Sys­tem 1 up­dates to yeah, okay, this is in fact worth it. Or rather, they some­times make it, if there are strong sup­port­ive struc­tures to get them across the un­pleas­ant valley (e.g. in a mil­i­tary boot­camp, they just … make you keep go­ing). But left to our own de­vices, we’ll of­ten get halfway through an ex­per­i­ment and just … stop, with­out ever find­ing out what the far side is ac­tu­ally like.

Pro­posed solu­tion: Make ex­per­i­ments “un­quit­table.” The idea here is that (ideally) one would not en­ter into a new ex­per­i­ment un­less a) one were highly con­fi­dent that one could ab­sorb the costs, if things go badly, and b) one were rea­son­ably con­fi­dent that there was an Ac­tu­ally Good Thing wait­ing at the finish line. If (big if) we take those as a given, then it should be safe to, in essence, “lock one­self in,” via any num­ber of com­mit­ment mechanisms. Or, to put it in other words: “Medium-Term Fu­ture Me is go­ing to lose per­spec­tive and want to give up be­cause of be­ing un­able to see past short-term un­pleas­ant­ness to the juicy, long-term goal? Fine, then—Medium-Term Fu­ture Me doesn’t get a vote.” In­stead, Post-Ex­per­i­ment Fu­ture Me gets the vote, in­clud­ing get­ting to up­date heuris­tics on which-kinds-of-ex­per­i­ments-are-worth-en­ter­ing.

Caveat/​skull: Peo­ple who are bad at self-mod­el­ing end up fool­ishly lock­ing them­selves into things that are higher-cost or lower-EV than they thought, and get­ting burned; black swans and tail risk ends up mak­ing even good bets turn out very very badly; we re­ally should’ve built in an ejec­tor seat. This risk can be mostly ame­lio­rated by start­ing small and giv­ing peo­ple a chance to cal­ibrate—you don’t make white belts try to punch through con­crete blocks, you make them punch soft, pillowy tar­gets first.

And, of course, you do build in an ejec­tor seat. See next.

This is the core the­sis be­hind a lot of the con­crete de­tails of Dragon Army. Tem­po­rary com­mit­ment al­lows you to get through the time pe­riod where you are get­ting nega­tive short-term pay­offs that sap your mo­ti­va­tion, and reach a later stage where you get paid off for all your hard work, while giv­ing you the chance to bail if it turns out the ex­per­i­ment is a failure and you are never go­ing to get re­warded.

I would have drawn the graph above with a lot more ran­dom vari­a­tions, but the im­pli­ca­tions are the same.

I think this is a key part that peo­ple should steal, if they do not have a bet­ter sys­tem already in place that works for them. When you are learn­ing to play the pi­ano, you are effec­tively de­cid­ing each day whether to stick with it or to quit, and you only learn to play the pi­ano if you never de­cide to quit (you can ob­vi­ously miss a day and re­cover, but I think the toy model gives the key in­sights and is good enough). You can re­li­ably pre­dict that there will be vari­a­tion (some ran­dom, some pre­dictable) in your mo­ti­va­tion from day to day and week to week, and over longer time frames, so if you give your­self a veto ev­ery day (or ev­ery week) then by de­fault you will quit far too of­ten.
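The toy model above is just compounding survival probabilities. A minimal sketch (the 1% daily quit probability is my illustrative number, not anything from the post) shows how badly a daily veto performs:

```python
# If sticking with piano requires re-deciding every day, even a small
# per-day chance of quitting compounds into near-certain failure.

def chance_of_never_quitting(daily_quit_prob, days):
    """Probability of surviving `days` independent daily vetoes."""
    return (1 - daily_quit_prob) ** days

# Even a mere 1% chance of quitting on any given day, over two years:
print(f"{chance_of_never_quitting(0.01, 730):.2%}")  # well under 1%
```

With a weekly veto instead of a daily one you cut the number of decision points sevenfold, and with a single up-front commitment you cut it to one, which is the whole argument for temporary commitment devices.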

If ev­ery few years, you hold a vote on whether to leave the Euro­pean Union and de­stroy your econ­omy, or to end your democ­racy and ap­point a dic­ta­tor, even­tu­ally the an­swer will be yes. It will not be the ‘will of the peo­ple’ so much as the ‘whim of the peo­ple’ and you want pro­tec­tion against that. The one-per­son case is no differ­ent.

The ejec­tor seat is im­por­tant. If things are go­ing suffi­ciently badly, there needs to be a way out, be­cause the al­ter­na­tives are to ei­ther stick with the thing, or to eject any­way and de­stroy your abil­ity to com­mit to fu­ture things. Even when you eject for good rea­sons us­ing the agreed upon pro­ce­dures, it still dam­ages your abil­ity to com­mit. The key is to cal­ibrate the thresh­old for the seat, in terms of re­quire­ments and costs, such that it be­ing used im­plies that the de­ci­sion to eject was over-de­ter­mined, but with a bar no higher than is nec­es­sary for that to be true.

For most com­mit­ments, your abil­ity to com­mit to things is far more valuable than any­thing else at stake. Even when the other stakes are big, that also means the com­mit­ment stakes are also big. This means that once you com­mit, you should fol­low through al­most all the time even when you re­al­ize that agree­ing to com­mit was a mis­take. That in turn means one should think very care­fully about when to com­mit to things, and not com­mit­ting if you think you are likely to quit in a way that is dam­ag­ing to your com­mit­ment abil­ities.

I think that if any­thing, Dun­can un­der-states the im­por­tance of re­li­able com­mit­ment. His state­ments above about mar­riage are a good ex­am­ple of that, even de­spite the cor­rec­tive words he writes about the sub­ject later on. Agree­ing to stay to­gether for a year is a sea change from no com­mit­ment at all, and there are some big benefits to the year, but that is not re­motely like the benefits of a real mar­riage. Giv­ing an agree­ment an end point, at which the par­ties will re-ne­go­ti­ate, fun­da­men­tally changes the na­ture of the re­la­tion­ship. Longer term plans and trades, which are ex­tremely valuable, can­not be made with­out wor­ry­ing about in­cen­tive com­pat­i­bil­ity, and both sides have to worry about their fu­ture ne­go­ti­at­ing po­si­tions and mar­ket value. Even if both par­ties want things to con­tinue, each year both par­ties have to worry about their ne­go­ti­at­ing po­si­tion, and plan for their fu­ture ne­go­ti­at­ing po­si­tions.

You get to move from a world in which you need to play both for the team and for yourself to one in which you get to play only for the team. This changes everything.

It also means that you do not get the insurance benefits. This isn't formal, pay-you-money insurance. This is the insurance of having someone there for you even when you have gone sick or insane or depressed, or something similar, and you have nothing to offer them, and they will be there for you anyway. We need that. We need to count on that.

I could say a lot more, but it would be beyond scope.

Problem 3: Saving Face

If any of you have been to a mar­tial arts academy in the United States, you’re prob­a­bly fa­mil­iar with the norm whereby a tardy stu­dent pur­chases en­try into the class by first do­ing some pushups. The stan­dard ex­pla­na­tion here is that the stu­dent is do­ing the pushups not as a pun­ish­ment, but rather as a sign of re­spect for the in­struc­tor, the other stu­dents, and the academy as a whole.

I posit that what’s ac­tu­ally go­ing on in­cludes that, but is some­what more sub­tle/​com­plex. I think the real benefit of the pushup sys­tem is that it closes the loop.

Imag­ine you’re a ten year old kid, and your par­ent picked you up late from school, and you’re stuck in traf­fic on your way to the dojo. You’re sit­ting there, jit­ter­ing, won­der­ing whether you’re go­ing to get yel­led at, won­der­ing whether the mas­ter or the other stu­dents will think you’re lazy, imag­in­ing stut­ter­ing as you try to ex­plain that it wasn’t your fault—

Nope, none of that. Be­cause it’s already clearly es­tab­lished that if you fail to show up on time, you do some pushups, and then it’s over. Done. Finished. Like some­body sneezed and some­body else said “bless you,” and now we can all move on with our lives. Do­ing the pushups cre­ates com­mon knowl­edge around the ques­tions “does this per­son know what they did wrong?” and “do we still have faith in their core char­ac­ter?” You take your lumps, ev­ery­one sees you tak­ing your lumps, and there’s no dan­gling sus­pi­cion that you were just be­ing lazy, or that other peo­ple are se­cretly judg­ing you. You’ve paid the price in pub­lic, and ev­ery­one knows it, and this is a good thing.

Pro­posed solu­tion: This is a solu­tion with­out a con­crete prob­lem, since I haven’t yet ac­tu­ally out­lined the spe­cific com­mit­ments a Dragon has to make (re­gard­ing things like show­ing up on time, par­ti­ci­pat­ing in group ac­tivi­ties, and mak­ing per­sonal progress). But in essence, the solu­tion is this: you have to build into your sys­tem from the be­gin­ning a set of ways-to-re­gain-face. Ways to hit the ejec­tor seat on an ex­per­i­ment that’s go­ing screwy with­out los­ing all so­cial stand­ing; ways to ab­sorb the oc­ca­sional mis­step or failure-to-ad­e­quately-plan; ways to be less-than-perfect and still main­tain the in­tegrity of a sys­tem that’s geared to­ward fo­cus­ing ev­ery­one on perfec­tion. In short, peo­ple have to know (and oth­ers have to know that they know, and they have to know that oth­ers know that they know) ex­actly how to make amends to the so­cial fabric, in cases where things go awry, so that there’s no ques­tion about whether they’re try­ing to make amends, or whether that at­tempt is suffi­cient.

Caveat/​skull: The ob­vi­ous prob­lem is peo­ple at­tempt­ing to game the sys­tem—they no­tice that ten pushups is way eas­ier than do­ing the dili­gent work re­quired to show up on time 95 times out of 100. The next ob­vi­ous prob­lem is that the price is set too low for the group, leav­ing them to still feel jilted or wronged, and the next ob­vi­ous prob­lem is that the price is set too high for the in­di­vi­d­ual, leav­ing them to feel un­fairly judged or pun­ished (the fun part is when both of those are true at the same time). Lastly, there’s some­thing in the mix about ar­bi­trari­ness—what do pushups have to do with late­ness, re­ally? I mean, I get that it’s pay­ing some kind of un­pleas­ant cost, but …

I think the idea of clos­ing the loop be­ing im­por­tant is very right. Hu­mans need re­ciproc­ity and fair­ness, but if the cost is known and paid, and ev­ery­one knows this, we can all move on and not worry about whether we can all move on. One of the things I love about my pre­sent job is that we fo­cus hard on clos­ing this loop. You can make a meme-level huge mis­take, and as long as you own up to it and fix the is­sue go­ing for­ward, ev­ery­one puts it be­hind them. The amount to which this im­proves my life is hard to over-state.

It is important to note that the push-ups at the dojo are pretty great. They are in some sense a punishment for your present self, but are not really a punishment as such. Everyone did lots of push-ups anyway. Push-ups are a good thing! By doing them, you show that you are still serious about trying to train, and you do something more intense to make up for the lost time. The push-ups are practical. In expectation, you transfer your push-ups from another time to now, allowing the class to assign fewer push-ups at other times based on the ones people will do when they occasionally walk in late or otherwise mess up.

This means that you get the equivalent of a Pigouvian tax. You create a perception of fairness, you correct incentives, and you generate revenue (fitness)! Triple win!

I once saw a Magic: The Gathering team do the literal push-up thing. They were playing a deck with the card Eidolon of the Great Revel, which meant that every time an opponent cast a spell, they had to say 'trigger' to make their opponent take damage. They agreed that if anyone ever missed such a trigger, after the round they had to do push-ups. This seemed fun, useful and excellent.

The ‘price’ be­ing an ac­tion that is close to effi­cient any­way is key to the sys­tem be­ing a suc­cess. If push-ups pro­vided no fit­ness benefit, the sys­tem would not work. The best prices do trans­fer util­ity from you to the group, but more im­por­tantly they also trans­fer util­ity from pre­sent you to fu­ture you.

Problem 4: Defections & Compounded Interest

I’m pretty sure ev­ery­one’s tired of hear­ing about one-box­ing and iter­ated pris­on­ers’ dilem­mas, so I’m go­ing to move through this one fairly quickly even though it could be its own whole mul­ti­page post. In essence, the prob­lem is that any rate of tol­er­ance of real defec­tion (i.e. un­miti­gated by the so­cial loop-clos­ing norms above) ul­ti­mately re­sults in the de­struc­tion of the sys­tem. Another way to put this is that peo­ple un­der­es­ti­mate by a cou­ple of or­ders of mag­ni­tude the cor­ro­sive im­pact of their defec­tions—we of­ten con­vince our­selves that 90% or 99% is good enough, when in fact what’s needed is some­thing like 99.99%.

There’s some­thing good that hap­pens if you put a lit­tle bit of money away with ev­ery pay­check, and it van­ishes or is severely cur­tailed once you stop, or start skip­ping a month here and there. Similarly, there’s some­thing good that hap­pens when a group of peo­ple agree to meet in the same place at the same time with­out fail, and it van­ishes or is severely cur­tailed once one per­son skips twice.

In my work at the Cen­ter for Ap­plied Ra­tion­al­ity, I fre­quently tell my col­leagues and vol­un­teers “if you’re 95% re­li­able, that means I can’t rely on you.” That’s be­cause I’m in a con­text where “rely” means re­ally trust that it’ll get done. No, re­ally. No, I don’t care what comes up, DID YOU DO THE THING? And if the an­swer is “Yeah, 19 times out of 20,” then I can’t give that per­son tasks ever again, be­cause we run more than 20 work­shops and I can’t have one of them catas­troph­i­cally fail.

(I mean, I could. It prob­a­bly wouldn’t be the end of the world. But that’s ex­actly the point—I’m try­ing to cre­ate a pocket uni­verse in which cer­tain things, like “the CFAR work­shop will go well,” are ab­solutely re­li­able, and the “ab­solute” part is im­por­tant.)

As far as I can tell, it’s hy­per­bolic dis­count­ing all over again—the per­son who wants to skip out on the meetup sees all of these im­me­di­ate, lo­cal costs to at­tend­ing, and all of these visceral, large gains to defec­tion, and their S1 doesn’t prop­erly weight the im­pact to those dis­tant, cu­mu­la­tive effects (just like the per­son who’s go­ing to end up with no re­tire­ment sav­ings be­cause they wanted those new shoes this month in­stead of next month). 1.01^n takes a long time to look like it’s go­ing any­where, and in the mean­time the quick one-time pay­off of 1.1 that you get by knock­ing ev­ery­thing else down to .99^n looks juicy and deli­cious and seems jus­tified.

But some­thing mag­i­cal does ac­crue when you make the jump from 99% to 100%. That’s when you see teams that truly trust and rely on one an­other, or mar­riages built on un­shake­able faith (and you see what those teams and part­ner­ships can build, when they can adopt time hori­zons of years or decades rather than des­per­ately hop­ing no­body will bail af­ter the third meet­ing). It starts with a com­mon knowl­edge un­der­stand­ing that yes, this is the pri­or­ity, even—no, wait, es­pe­cially—when it seems like there are se­duc­tively con­vinc­ing ar­gu­ments for it to not be. When you know—not hope, but know—that you will make a lo­cal sac­ri­fice for the long-term good, and you know that they will, too, and you all know that you all know this, both about your­selves and about each other.

Pro­posed solu­tion: Dis­cuss, and then agree upon, and then rigidly and rigor­ously en­force a norm of perfec­tion in all for­mal un­der­tak­ings (and, cor­re­spond­ingly, be more care­ful and more con­ser­va­tive about which un­der­tak­ings you offi­cially take on, ver­sus which things you’re just ca­su­ally try­ing out as an in­for­mal ex­per­i­ment), with said norm to be mod­ified/​iter­ated only dur­ing pre­de­cided strate­gic check-in points and not on the fly, in the mid­dle of things. Build a habit of clearly dis­t­in­guish­ing tar­gets you’re go­ing to hit from tar­gets you’d be happy to hit. Agree upon and up­hold sur­pris­ingly high costs for defec­tion, Hofs­tadter style, rec­og­niz­ing that a cost that feels high enough prob­a­bly isn’t. Leave peo­ple wig­gle room as in Prob­lem 3, but define that wig­gle room ex­tremely con­cretely and ob­jec­tively, so that it’s clear in ad­vance when a line is about to be crossed. Be ridicu­lously nit­picky and anal about sup­port­ing stan­dards that don’t seem worth sup­port­ing, in the mo­ment, if they’re in are­nas that you’ve pre­vi­ously as­sessed as sus­cep­ti­ble to com­pound­ing. Be ruth­less about dis­card­ing stan­dards dur­ing strate­gic re­view; if a mem­ber of the group says that X or Y or Z is too high-cost for them to sus­tain, be­lieve them, and make de­ci­sions ac­cord­ingly.

Caveat/​skull: Ob­vi­ously, be­cause we’re hu­mans, even peo­ple who re­flec­tively en­dorse such an over­all solu­tion will chafe when it comes time for them to pay the price (I cer­tainly know I’ve chafed un­der stan­dards I fought to in­stall). At that point, things will seem ar­bi­trary and overly con­strain­ing, pri­ori­ties will seem mis­al­igned (and might ac­tu­ally be), and then feel­ings will be hurt and ac­cu­sa­tions will be lev­eled and things will be rough. The solu­tion there is to have, already in place, strong and open chan­nels of com­mu­ni­ca­tion, strong norms and scaf­folds for emo­tional sup­port, strong de­fault as­sump­tion of trust and good in­tent on all sides, etc. etc. This goes wrongest when things fes­ter and peo­ple feel they can’t speak up; it goes much bet­ter if peo­ple have chan­nels to lodge their com­plaints and reser­va­tions and are ac­tively in­cen­tivized to do so (and can do so with­out be­ing ac­cused of defect­ing on the norm-in-ques­tion; crit­i­cism =/​= at­tack).

It brings me great joy that some­one out there has taken the need for true re­li­a­bil­ity, and gone too far.

I do not think the ex­po­nen­tial model above is a good model. I do think some­thing spe­cial hap­pens when things be­come re­li­able enough that you do not feel the need to worry about or plan for what you are go­ing to do when they do not hap­pen, and you can sim­ply as­sume they will hap­pen.
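The 1.01^n versus 0.99^n framing from the quoted passage is easy to check numerically. A minimal sketch, using the quote's illustrative rates (nothing here comes from the charter itself):

```python
# Compounding small gains vs. small losses, per the quoted 1.01^n framing.
def compound(rate: float, n: int) -> float:
    """Value after n repetitions of a constant multiplicative rate."""
    return rate ** n

for n in (10, 50, 100, 200):
    print(n, round(compound(1.01, n), 2), round(compound(0.99, n), 2))
# 1.01^n crawls for a long time (about 1.10 at n=10) before taking off
# (about 7.32 at n=200), while 0.99^n quietly decays toward 0.13.
```

The point of the quoted model is visible in the printout: the gap between the two curves is barely noticeable early on, which is exactly when the one-time 1.1 payoff looks attractive.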

A lot of this jump is that your brain accepts that things you agreed to do just happen. You are not going to waste time considering whether or not they are going to happen; you are only going to ask how to make them happen. They become automatic habits, and the habit of making things automatic itself becomes a habit. Actually being truly reliable is easier in many ways than being unreliable! This is similar to the realization that it is much easier and less taxing to never drink than to drink a very small amount. It is much easier and less taxing to never cheat (for almost all values of cheating) than to contain cheating at a low but non-zero level. Better to not have the option in your mind at all.

There is another great thing that happens when you assume that getting a ‘yes I will do this thing’ from someone means they will do the thing, and that if it turns out they did not do the thing, it is because it was ludicrously obvious that they were not supposed to do the thing given the circumstances, and they gave you what warning or adaptation they could. Just as you no longer need to consider the option of not doing the thing, you get to not consider that they will choose not to do the thing, or what you need to do to ensure they do the thing.

It is ludicrously hard to get 99.99% reliability from anyone. If you are telling me that I need to come to the weekly meetup 99.9% of the time, you are telling me I can miss it one time in twenty years. If you ask for 99.99%, it means meeting every day and missing once in nearly thirty years. Does anyone have real emergencies that are that rare? Do opportunities that are worth taking instead come along once every few decades? This doesn’t make sense. I believe we did manage to go several years in a row in New York without missing a Tuesday night, and yes that was valuable by letting people show up without checking first, knowing the meetup would happen. No single person showed up every time, because that’s insane. You would not put ‘Tuesday meetup’ in the ‘this is 100% reliable’ category if you wanted the ‘100% reliable’ category to remain a thing.
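The frequency arithmetic here can be made concrete. A small sketch, where the weekly and daily cadences are illustrative assumptions:

```python
# How rare misses must be at a given reliability level,
# expressed as years between misses.
def years_between_misses(reliability: float, events_per_year: int) -> float:
    miss_rate = 1.0 - reliability
    return 1.0 / (miss_rate * events_per_year)

print(years_between_misses(0.999, 52))    # weekly meetup at 99.9%: ~19 years
print(years_between_misses(0.9999, 365))  # daily meeting at 99.99%: ~27 years
```

Put this way, the implausibility is plain: demanding that level of reliability from a human means demanding that their emergencies arrive at most once per decade or two.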

There are tasks that, if failed, cause the en­tire work­shop to catas­troph­i­cally fail, and those can­not be solely en­trusted to a 95% re­li­able per­son with­out a backup plan. But if your model says that any failure any­where will cause catas­trophic over­all failure then your pri­mary prob­lem is not need­ing more re­li­able peo­ple, it is en­g­ineer­ing a more ro­bust plan that has fewer sin­gle points of failure.

If you abuse the ‘100% re­li­able’ la­bel, the la­bel be­comes mean­ingless.

Even if you use the la­bel re­spon­si­bly, when you pull out ‘100% re­li­able’ from peo­ple and ex­pect that to get you 99.9%, you have to mean it. The thing has to be that im­por­tant. You don’t need to be launch­ing a space shut­tle, but you do have to face large con­se­quences to failure. You need the kind of hor­rible­ness that re­quires mul­ti­ple locked-in re­li­able backup plans. There is no other way to get to that level. Then you work in early warn­ing sys­tems, so if things are go­ing wrong, you learn about it in time to in­voke the backup plans.

I strongly en­dorse the idea of draw­ing an ex­plicit con­trast be­tween places where peo­ple are only ex­pected to be some­what re­li­able, and those where peo­ple are ex­pected to be ac­tu­ally re­li­able.

I also strongly en­dorse that the de­fault level of re­li­a­bil­ity needs to be much, much higher than the stan­dard de­fault level of re­li­a­bil­ity, es­pe­cially in The Bay. Things there are re­ally bad.

When I make a plan with a friend in The Bay, I never assume the plan will actually happen. There is actually no one there I feel I can count on to be on time and not flake. I would come to visit more often if plans could actually be made. Instead, suggestions can be made, and half the time things go more or less the way you planned them. This is a terrible, very bad, no good equilibrium. Are there people I want to see badly enough to put up with a 50% reliability rate? Yes, but there are not many, and I get much less than half the utility out of those friendships that I would otherwise get.

When I reach what would oth­er­wise be an agree­ment with some­one in The Bay, I have learned that this is not an agree­ment, but rather a state­ment of mo­men­tary in­tent. The other per­son feels good about the in­ten­tion of do­ing the thing, and if the emo­tions and vibe sur­round­ing things con­tinue to be sup­port­ive, and it is still in their in­ter­est to fol­low through, they might ac­tu­ally fol­low through. What they will ab­solutely not do is treat their word as their bond and fol­low through even if they made what turns out to be a bad deal or it seems weird or they could gain sta­tus by throw­ing you un­der the bus. Peo­ple do not co­op­er­ate in this way. That is not a thing. When you no­tice it is not a thing, and that peo­ple will ac­tively lower your sta­tus for treat­ing it as a thing rather than re­ward­ing you, it is al­most im­pos­si­ble to keep treat­ing this as a thing.

For fur­ther de­tails on the above, and those de­tails are im­por­tant, see Com­pass Rose, pretty much the whole blog.

Dun­can is try­ing to build a group house in The Bay that co­or­di­nates to ac­tu­ally do things. From where he sits, re­li­a­bil­ity has ceased to be a thing. Some amount of hy­per­bole and over­re­ac­tion is not only rea­son­able and sym­pa­thetic, but even op­ti­mal. I sym­pa­thize fully with his de­sire to fix this prob­lem via dra­co­nian penalties for non-co­op­er­a­tion.

Ideally, you would not need explicit penalties. There is a large cost to imposing explicit large penalties in any realm. Those penalties crowd out intrinsic motivation and justification. They create adversarial relationships and feel-bad moments, and require large amounts of upkeep time. They make it likely things will fall apart if and when the penalties go away.

A much better system, if you can pull it off and keep it, is to have everyone understand that defection is really bad and that people are adjusting their actions and expectations on that basis, and have them make an extraordinary effort already. The penalty of the streak being over, and the trust being lost, should be enough. The problem is, it’s often not enough, and it is very hard to signal and pass on this system to new people.

Thus, draconian penalties, while a second-best solution, should be considered and tried.

Like other penalties, we should aim to have these penalties be clear to all, be clearly painful in the short term, and clearly be something that in the long term benefits (or at least does not hurt) the group as a whole – they should be a transfer from short-term you to long-term someone, ideally long-term everyone, in a way that all can understand. I am a big fan of exponentially escalating penalties in these situations.
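As one hypothetical shape for such a schedule (the base cost and the doubling rule are my assumptions, not anything proposed in the charter):

```python
# Exponentially escalating penalty: each defection within a review
# period doubles the cost, with the count resetting at the strategic
# check-in so the schedule stays survivable.
def penalty(base_cost: int, prior_defections: int) -> int:
    return base_cost * 2 ** prior_defections

print([penalty(10, k) for k in range(4)])  # [10, 20, 40, 80]
```

The appeal of this shape is that a first slip stays cheap and face-saving, while repeated defection quickly becomes untenable, which is exactly the compounding-corrosion problem the penalties are meant to address.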

What is miss­ing here is a con­crete ex­am­ple of X failure lead­ing to Y con­se­quence, so it’s hard to tell what level of dra­co­nian he is con­sid­er­ing here.

Prob­lem 5: Every­thing else

There are other mod­els and prob­lems in the mix—for in­stance, I have a model sur­round­ing buy-in and com­mit­ment that deals with an es­ca­lat­ing cy­cle of asks-and-re­wards, or a model of how to effec­tively lev­er­age a group around you to ac­com­plish am­bi­tious tasks that re­quires you to first lay down some “top­soil” of sim­ple/​triv­ial/​ar­bi­trary ac­tivi­ties that starts the growth of an ecol­ogy of af­for­dances, or a the­ory that the strat­egy of try­ing things and do­ing things out­strips the strat­egy of think-un­til-you-iden­tify-worth­while-ac­tion, and that ra­tio­nal­ists in par­tic­u­lar are crip­pling them­selves through de­ci­sion paral­y­sis/​let­ting the perfect be the en­emy of the good when just do­ing vaguely in­ter­est­ing pro­jects would ul­ti­mately gain them more skill and get them fur­ther ahead, or a strong sense based off both re­search and per­sonal ex­pe­rience that phys­i­cal prox­im­ity mat­ters, and that you can’t build the cor­rect kind of strength and flex­i­bil­ity and trust into your re­la­tion­ships with­out ac­tu­ally spend­ing sig­nifi­cant amounts of time with one an­other in meatspace on a reg­u­lar ba­sis, re­gard­less of whether that makes tac­ti­cal sense given your ob­ject-level pro­jects and goals.

But I’m go­ing to hold off on go­ing into those in de­tail un­til peo­ple in­sist on hear­ing about them or ask ques­tions/​pose hes­i­ta­tions that could be an­swered by them.

I think these are good in­stincts, and also agree with the in­stinct not to say more here.

Sec­tion 2 of 3: Power dynamics

All of the above was meant to point at rea­sons why I sus­pect trust­ing in­di­vi­d­u­als re­spond­ing to in­cen­tives mo­ment-by-mo­ment to be a weaker and less effec­tive strat­egy than build­ing an in­ten­tional com­mu­nity that Ac­tu­ally Asks Things Of Its Mem­bers. It was also meant to jus­tify, at least in­di­rectly, why a strong guid­ing hand might be nec­es­sary given that our com­mu­nity’s evolved norms haven’t re­ally pro­duced re­sults (in the group houses) com­men­su­rate with the promises of EA and ra­tio­nal­ity.

Ul­ti­mately, though, what mat­ters is not the prob­lems and solu­tions them­selves so much as the light they shine on my aes­thet­ics (since, in the ac­tual house, it’s those aes­thet­ics that will be used to re­solve epistemic grid­lock). In other words, it’s not so much those ar­gu­ments as it is the fact that Dun­can finds those ar­gu­ments com­pel­ling. It’s worth not­ing that the peo­ple most closely in­volved with this pro­ject (i.e. my clos­est ad­vi­sors and those most likely to ac­tu­ally sign on as house­mates) have been en­couraged to spend a sig­nifi­cant amount of time ex­plic­itly vet­ting me with re­gards to ques­tions like “does this guy ac­tu­ally think things through,” “is this guy likely to be stupid or meta-stupid,” “will this guy listen/​re­act/​up­date/​pivot in re­sponse to ev­i­dence or con­sen­sus op­po­si­tion,” and “when this guy has in­tu­itions that he can’t ex­plain, do they tend to be val­i­dated in the end?”

In other words, it’s fair to view this whole post as an at­tempt to prove gen­eral trust­wor­thi­ness (in both do­main ex­per­tise and over­all san­ity), be­cause—well—that’s what it is. In mi­lieu like the mil­i­tary, au­thor­ity figures ex­pect (and get) obe­di­ence ir­re­spec­tive of whether or not they’ve earned their un­der­lings’ trust; ra­tio­nal­ists tend to have a much higher bar be­fore they’re will­ing to sub­or­di­nate their de­ci­sion­mak­ing pro­cesses, yet still that’s some­thing this sort of model re­quires of its mem­bers (at least from time to time, in some do­mains, in a pre­limi­nary “try things with benefit of the doubt” sort of way). I posit that Dragon Army Bar­racks works (where “works” means “is good and pro­duces both in­di­vi­d­ual and col­lec­tive re­sults that out­strip other group houses by at least a fac­tor of three”) if and only if its mem­bers are will­ing to hold doubt in re­serve and act with full force in spite of reser­va­tions—if they’re will­ing to trust me more than they trust their own sense of things (at least in the mo­ment, pend­ing later ex­pla­na­tion and re­cal­ibra­tion on my part or theirs or both).

And since that’s a) the cen­tral differ­ence be­tween DA and all the other group houses, which are col­lec­tions of non-sub­or­di­nate equals, and b) quite the ask, es­pe­cially in a ra­tio­nal­ist com­mu­nity, it’s en­tirely ap­pro­pri­ate that it be given the great­est scrutiny. Likely par­ti­ci­pants in the fi­nal house spent ~64 con­sec­u­tive hours in my com­pany a cou­ple of week­ends ago, speci­fi­cally to play around with liv­ing un­der my thumb and see whether it’s ac­tu­ally a good place to be; they had all of the con­cerns one would ex­pect and (I hope) had most of those con­cerns an­swered to their satis­fac­tion. The rest of you will have to make do with grilling me in the com­ments here.

Trust­ing in­di­vi­d­u­als to re­spond to in­cen­tives minute to minute does not, on its own, work be­yond the short term. Pe­riod. You need to figure out how to make agree­ments and com­mit­ments, to build trust and re­ciproc­ity, and work in re­sponse to long term in­cen­tives to­ward a greater goal. Other­wise, you fail. At best, you get hi­jacked by what the in­cen­tive gra­di­ent and zeit­geist want to hap­pen, and some­thing hap­pens, but you have lit­tle or no con­trol over what that some­thing will be.

It’s quite the leap from there to hav­ing a per­son prove gen­eral trust­wor­thi­ness, and to have peo­ple trust that per­son more than they trust their own sense of things. Are there times and places where there have been peo­ple I trusted on that level? That is ac­tu­ally a good ques­tion. There are con­texts and ar­eas in which it is cer­tainly true – my Sen­sei at the dojo, my teach­ers in a va­ri­ety of sub­jects, Jon Finkel in a game of Magic. There are peo­ple I trust in con­text, when do­ing a par­tic­u­lar thing. But is there any­one I would trust in gen­eral, if they told me to do some­thing?

If they told me to do the thing without taking into consideration what I think, then my brain is telling me the answer is no. There are zero such people who can tell me in general what to do and have me do it even if I thought they were wrong. That, however, is playing on super duper hard mode. A better question is: is there someone such that, if they told me ‘I know that you disagree with me, but despite that, trust me, you should do this anyway,’ I would do the thing, even if I didn’t have a good reason to do the thing other than that they said so, pretty much no matter what it is?

The an­swer is still no, in the sense that if they spoke to me like God spoke to Abra­ham, and told me to sac­ri­fice my son, I would tell each and ev­ery per­son on Earth to go to hell. The bar doesn’t have to be any­thing like that high, ei­ther – there might be peo­ple who could talk me into a ma­jor crime, but if so they’d have to ac­tu­ally talk me into it. No run­ning off on Pro­ject May­hem.

Wait. I am not sure that is ac­tu­ally true. If one of a very few se­lect peo­ple ac­tu­ally did tell me to do some­thing that seemed crazy, I might just trust it, be­cause the bar for them to ac­tu­ally do that would be so high. Or I might not. You never know un­til the mo­ment ar­rives.

Dun­can, I hope, is ask­ing for some­thing much weaker than that. He is ask­ing for small scale trust. He is ask­ing that in the mo­ment, with no ‘real’ stakes, mem­bers of the house trust him ab­solutely. This is more like be­ing in a dojo and hav­ing a sen­sei. In the mo­ment, you do not ques­tion the sen­sei within the sa­cred space. That does not even re­quire you to ac­tu­ally trust them more than you trust your­self. It sim­ply means that you need to trust that things will go bet­ter if you do what they say with­out ques­tion­ing it and then you fol­low through on that deal. In limited con­texts, this is not weird or scary. If the sen­sei did some­thing out­side their purview, the deal would be off, and right­fully so.

I even have some ex­pe­rience with hyp­no­sis, which is a lot scarier than this in terms of how much trust is re­quired and what can be done if the per­son in charge goes too far, and there are peo­ple I trust to do that, know­ing that if they try to take things too far, I’ll (prob­a­bly, hope­fully) snap out of it.

In short, this sounds a lot scarier than it is. Prob­a­bly. The bound­aries of what can be asked for are im­por­tant, but the type of per­son read­ing this likely needs to learn more how to trust oth­ers and see where things go, rather than do­ing that less of­ten or be­ing wor­ried about some­one abus­ing that power. If any­thing, we are the most pre­pared to han­dle that kind of over­reach, be­cause we are so (ra­tio­nally ir­ra­tionally? the other way around?) scared of it.

Power and au­thor­ity are gen­er­ally anti-epistemic—for ev­ery in­stance of those-in-power defend­ing them­selves against the bar­bar­ians at the gates or anti-vaxxers or the rise of Don­ald Trump, there are a dozen in­stances of them squash­ing truth, un­der­min­ing progress that would make them ir­rele­vant, and ag­gres­sively pro­mot­ing the sta­tus quo.

Thus, ev­ery at­tempt by an in­di­vi­d­ual to gather power about them­selves is at least sus­pect, given reg­u­lar ol’ in­cen­tive struc­tures and reg­u­lar ol’ fal­lible hu­mans. I can (and do) claim to be af­ter a saved world and a bunch of peo­ple be­com­ing more the-best-ver­sions-of-them­selves-ac­cord­ing-to-them­selves, but I ac­knowl­edge that’s ex­actly the same claim an ego­ma­niac would make, and I ac­knowl­edge that the link be­tween “Dun­can makes all his house­mates wake up to­gether and do pushups” and “the world is in­cre­men­tally less likely to end in gray goo and agony” is not ob­vi­ous.

And it doesn’t quite solve things to say, “well, this is an op­tional, con­sent-based pro­cess, and if you don’t like it, don’t join,” be­cause good and moral peo­ple have to stop and won­der whether their friends and col­leagues with slightly weaker epistemics and slightly less-honed aller­gies to evil are get­ting hood­winked. In short, if some­one’s build­ing a co­er­cive trap, it’s ev­ery­one’s prob­lem.

Power corrupts. We all know this. Despite that, and despite my being an actual Discordian who thinks the most important power-related skill is how to avoid power, who thinks that in an important sense communication is only possible between equals, and whose leadership model, if I were founding a house, would be Hagbard Celine, I still think this is unfair to power. It is especially unfair to voluntary power.

Not only are there not twelve anti-vaxxers us­ing power for ev­ery pro-vaxxer, there are more than twelve pro-vaxxers try­ing to get par­ents to vac­ci­nate (mostly suc­cess­fully) for ev­ery one that tries to stop them (mostly un­suc­cess­fully). For ev­ery traitorous sol­dier try­ing to let the bar­bar­ians into the gate, there are more than twelve do­ing their duty to keep the bar­bar­ians out (and vi­o­la­tions of this cor­re­late quite well to where bar­bar­ians get through the gate, which most days and years does not hap­pen). Think about the ‘facts’ the gov­ern­ment tries to pro­mote. Are they bor­ing? Usu­ally. A waste of your tax­payer dol­lar? Often I’d agree. The emo­tions they try to evoke are of­ten bad. But most of the time, aside from ex­cep­tions like the cam­paign trail, cops in­ves­ti­gat­ing a crime (who are ex­plic­itly al­lowed and en­couraged to lie) and any­thing pro­mot­ing the lot­tery, their facts are true.

Yes, power en­courages peo­ple to hold back in­for­ma­tion and lie to each other. Yes, power is of­ten abused, but the most im­por­tant way to main­tain and gain power is to ex­er­cise that power wisely and for the benefit of the group. This goes dou­ble when that power ex­change is vol­un­tary and those who have given up power have the abil­ity and the right to walk away at any time.

A cer­tain amount of ‘abuse’ of power is ex­pected, and even ap­pro­pri­ate, be­cause power is hard work and a bur­den, so it needs to have its re­wards. Some CEOs are over­paid, no doubt, but in gen­eral I be­lieve that lead­ers and de­ci­sion mak­ers are un­der­com­pen­sated and un­der­ap­pre­ci­ated, rather than the other way around. Most peo­ple loathe be­ing in charge even of them­selves. Lead­ers have to look out for ev­ery­one else, and when they de­cide it’s time to look out for them­selves in­stead, we need to make sure they don’t go too far at the ex­pense of oth­ers, but if you au­to­mat­i­cally call that abuse, what you are left with are only burned out lead­ers. That seems to be hap­pen­ing a lot.

That does still leave us with the prob­lem that power is usu­ally anti-epistemic, due to the SNAFU prin­ci­ple: (Fully open and hon­est) com­mu­ni­ca­tion is only pos­si­ble be­tween equals. The good news is that this is an ob­ser­va­tion about the world rather than a law of na­ture, so the bet­ter frame is to ask why and how power is anti-epistemic. So­cial life is also anti-epistemic in many similar ways, largely be­cause any group of peo­ple will in­volve some amount of power and de­sire to shape the ac­tions, be­liefs and opinions of oth­ers.

SNAFU’s main mechanism is that the subordinate is under the superior’s power, which results in the superior giving out rewards and punishments (and/or decisions that are functionally rewards and punishments). This leaves the subordinate unable to communicate honestly with the superior, which in turn makes the superior engage in deception in order to find out as much of the truth as possible. This gets a lot more complicated (for more detail, and in general, I recommend reading Robert Anton Wilson’s Prometheus Rising and many of his other works) but the core problem is that the subordinate wants to be rewarded and to avoid punishment, as intrinsic goods. The flip side of that is when the superior wants to give out punishments and avoid rewards.

Want­ing to get re­wards and avoid pun­ish­ments when you don’t de­serve it, or to give out pun­ish­ments and avoid re­wards to those who don’t de­serve it, is the prob­lem. If the stu­dent wants to avoid push-ups, the stu­dent will de­ceive the mas­ter. If the stu­dent wants to be wor­thy and there­fore avoid push-ups, treat­ing the push-ups as use­ful in­cen­tive and train­ing and sig­nal, then the stu­dent will re­main hon­est. In an im­por­tant sense, the mas­ter has even suc­cess­fully avoided power here once they set the rules, be­cause the stu­dent’s wor­thi­ness de­ter­mines what hap­pens even if the mas­ter tech­ni­cally gives out the ver­dict. The mas­ter sim­ply tries to help make the stu­dent wor­thy.

Power is dan­ger­ous, but most use­ful things are. It’s a poor atom blaster that can’t point both ways.

That’s my jus­tifi­ca­tion, let’s see what his is.

But on the flip side, we don’t have time to waste. There’s ex­is­ten­tial risk, for one, and even if you don’t buy ex-risk à la AI or bioter­ror­ism or global warm­ing, peo­ple’s available hours are trick­ling away at the alarm­ing rate of one hour per hour, and none of us are mov­ing fast enough to get All The Things done be­fore we die. I per­son­ally feel that I am op­er­at­ing far be­low my healthy sus­tain­able max­i­mum ca­pac­ity, and I’m not alone in that, and some­thing like Dragon Army could help.

So. Claims, as clearly as I can state them, in an­swer to the ques­tion “why should a bunch of peo­ple sac­ri­fice non-triv­ial amounts of their au­ton­omy to Dun­can?”

1. Some­body ought to run this, and no one else will. On the meta level, this ex­per­i­ment needs to be run—we have like twenty or thirty in­stances of the laissez-faire model, and none of the high-stan­dards/​hard­core one, and also not very many im­pres­sive re­sults com­ing out of our houses. Due dili­gence de­mands in­ves­ti­ga­tion of the op­po­site hy­poth­e­sis. On the ob­ject level, it seems un­con­tro­ver­sial to me that there are goods wait­ing on the other side of the un­pleas­ant valley—goods that a team of lev­eled-up, co­or­di­nated in­di­vi­d­u­als with bonds of mu­tual trust can seize that the rest of us can’t even con­ceive of, at this point, be­cause we don’t have a deep grasp of what new af­for­dances ap­pear once you get there.

2. I’m the least un­qual­ified per­son around. Those words are cho­sen de­liber­ately, for this post on “less wrong.” I have a unique com­bi­na­tion of ex­per­tise that in­cludes be­ing a ra­tio­nal­ist, sixth grade teacher, coach, RA/​head of a dor­mi­tory, ringleader of a pack of hooli­gans, mem­ber of two honor code com­mit­tees, cur­ricu­lum di­rec­tor, ob­ses­sive sci-fi/​fan­tasy nerd, writer, builder, mar­tial artist, park­our guru, maker, and gen­er­al­ist. If any­body’s in­tu­itions and S1 mod­els are likely to be ca­pa­ble of dis­t­in­guish­ing the un­canny valley from the real deal, I posit mine are.

3. There’s never been a safer con­text for this sort of ex­per­i­ment. It’s 2017, we live in the United States, and all of the peo­ple in­volved are ra­tio­nal­ists. We all know about NVC and dou­ble crux, we’re all go­ing to do Cir­cling, we all know about Gendlin’s Fo­cus­ing, and we’ve all read the Se­quences (or will soon). If ever there was a time to say “let’s all step out onto the slip­pery slope, I think we can keep our bal­ance,” it’s now—there’s no group of peo­ple bet­ter equipped to stop this from go­ing side­ways.

4. It does ac­tu­ally re­quire a tyrant. As a part of a de­brief dur­ing the week­end ex­per­i­ment/​dry run, we went around the cir­cle and peo­ple talked about con­cerns/​dealbreak­ers/​things they don’t want to give up. One in­ter­est­ing thing that popped up is that, ac­cord­ing to con­sen­sus, it’s liter­ally im­pos­si­ble to find a time of day when the whole group could get to­gether to ex­er­cise. This hap­pened even with each in­di­vi­d­ual be­ing will­ing to make per­sonal sac­ri­fices and do­ing things that are some­what costly.

If, of course, the ex­pec­ta­tion is that ev­ery­body shows up on Tues­day and Thurs­day evenings, and the cost of not do­ing so is not be­ing pre­sent in the house, sud­denly the situ­a­tion be­comes sim­ple and work­able. And yes, this means some kids left be­hind (ctrl+f), but the whole point of this is to be in­stru­men­tally ex­clu­sive and con­sen­su­ally high-com­mit­ment. You just need some­one to make the ac­tual fi­nal call—there are too many threads for the co­or­di­na­tion prob­lem of a house of this kind to be solved by com­mit­tee, and too many cir­cum­stances in which it’s im­pos­si­ble to make a prin­ci­pled, jus­tifi­able de­ci­sion be­tween 492 al­most-in­dis­t­in­guish­ably-good op­tions. On top of that, there’s a need for there to be some kind of con­sis­tent, neu­tral force that sets course, im­poses con­sis­tency, re­solves dis­putes/​breaks dead­lock, and ab­sorbs all of the blame for the fact that it’s un­pleas­ant to be forced to do things you know you ought to but don’t want to do.

And lastly, we (by which I in­di­cate the peo­ple most likely to end up par­ti­ci­pat­ing) want the house to do stuff—to ac­tu­ally take on pro­jects of am­bi­tious scope, things that re­quire ten or more tal­ented peo­ple re­li­ably co­or­di­nat­ing for months at a time. That sort of co­or­di­na­tion re­quires a quar­ter­back on the field, even if the strate­giz­ing in the locker room is egal­i­tar­ian.

5. There isn’t re­ally a sta­tus quo for power to abu­sively main­tain. Dragon Army Bar­racks is not an ob­ject-level ex­per­i­ment in mak­ing the best house; it’s a meta-level ex­per­i­ment at­tempt­ing (through iter­a­tion rather than arm­chair the­o­riz­ing) to an­swer the ques­tion “how best does one struc­ture a house en­vi­ron­ment for growth, self-ac­tu­al­iza­tion, pro­duc­tivity, and so­cial syn­ergy?” It’s taken as a given that we’ll get things wrong on the first and sec­ond and third try; the whole point is to shift from one ex­per­i­ment to the next, grad­u­ally ac­cu­mu­lat­ing proven-use­ful norms via con­sen­sus mechanisms, and the cen­tral­ized power is mostly there just to keep the tran­si­tions smooth and seam­less. More im­por­tantly, the fun­da­men­tal con­ceit of the model is “Dun­can sees a bet­ter way, which might take some time to set­tle into,” but af­ter e.g. six months, if the thing is not clearly pos­i­tive and at least well on its way to be­ing self-sus­tain­ing, ev­ery­one ought to aban­don it any­way. In short, my tyranny, if net bad, has a nat­u­ral time limit, be­cause peo­ple aren’t go­ing to wait around for­ever for their re­sults.

6. The ex­per­i­ment has pro­tec­tions built in. Trans­parency, op­er­a­tional­iza­tion, and in­formed con­sent are the name of the game; com­mu­ni­ca­tion and flex­i­bil­ity are how the ma­chine is main­tained. Like the Con­sti­tu­tion, Dragon Army’s char­ter and or­ga­ni­za­tion are meant to be “liv­ing doc­u­ments” that con­strain change only in­so­far as they im­pose rea­son­able limi­ta­tions on how wan­tonly change can be en­acted.

I strongly agree with point one, and think this line should be con­sid­ered al­most a knock-down ar­gu­ment in a lot of con­texts. Some­one has to and no one else will. Un­less you are claiming no one has to, or there is some­one else who will, that’s all that need be said. There is a wise say­ing that ‘those who say it can’t be done should never in­ter­rupt the per­son do­ing it.’ Similarly, I think, once some­one has in­voked the Comet King, we should fol­low the rule that ‘those who agree it must be done need to ei­ther do it or let some­one else do it.’ As far as I can tell, both state­ments are true. Some­one has to. No one else will.

I do not think Duncan would be the most qualified person for this if we had our pick of all people, but we don't have our pick. We only have one person willing to do this, as far as I know. That means the question is: is he qualified enough to give it a go? On that level, I think his qualifications are good enough. I do wish he hadn't tried to oversell them quite so much.

I also don't think this is the safest situation of all time in which to try an exchange of power in the name of group and self improvement, and I worry that Duncan thinks things like double crux and Circling are far more important and powerful than they are. Letting things 'go to his head' is one thing Duncan should be quite concerned about in such a project. We are special, but we are not as special as this implies. What I do think is that this is an unusually safe place and time to try this experiment. I also don't think the experiment is all that dangerous, even before the protections in point six, the ones Duncan explains elsewhere (including the comments), and the ones that were added later or will be added in the future. Safety first is a thing, but our society is often totally obsessed with safety, and we need to seriously chill out.

I also think point five is im­por­tant. The nat­u­ral time limit is a strong check (one of many) on what dan­gers do ex­ist. How­ever, there seems to be some dan­ger later on of slip­page on this if you read the char­ter, so it needs to be very clear what the fi­nal end­point is and not al­low wig­gle room later – you can have nat­u­ral end points in be­tween, but things need to be fully over at a fixed fu­ture point, for any given res­i­dent, with no (anti?) es­cape clause.

Sec­tion 3 of 3: Dragon Army Char­ter (DRAFT)

State­ment of pur­pose:

Dragon Army Bar­racks is a group hous­ing and in­ten­tional com­mu­nity pro­ject which ex­ists to sup­port its mem­bers so­cially, emo­tion­ally, in­tel­lec­tu­ally, and ma­te­ri­ally as they en­deavor to im­prove them­selves, com­plete worth­while pro­jects, and de­velop new and use­ful cul­ture, in that or­der. In ad­di­tion to the usual hous­ing com­mit­ments (i.e. rent, util­ities, shared ex­penses), its mem­bers will make limited and spe­cific com­mit­ments of time, at­ten­tion, and effort av­er­ag­ing roughly 90 hours a month (~1.5hr/​day plus oc­ca­sional week­end ac­tivi­ties).

Dragon Army Bar­racks will have an egal­i­tar­ian, flat power struc­ture, with the ex­cep­tion of a com­man­der (Dun­can Sa­bien) and a first officer (Eli Tyre). The com­man­der’s role is to cre­ate struc­ture by which the agreed-upon norms and stan­dards of the group shall be dis­cussed, de­cided, and en­forced, to man­age en­try to and exit from the group, and to break epistemic grid­lock/​make de­ci­sions when speed or sim­plifi­ca­tion is re­quired. The first officer’s role is to man­age and mod­er­ate the pro­cess of build­ing con­sen­sus around the stan­dards of the Army—what they are, and in what pri­or­ity they should be met, and with what con­se­quences for failure. Other “man­age­ment” po­si­tions may come into ex­is­tence in limited do­mains (e.g. if a pro­ject arises, it may have a leader, and that leader will of­ten not be Dun­can or Eli), and will have their scope and pow­ers defined at the point of cre­ation/​rat­ifi­ca­tion.

Ini­tial ar­eas of ex­plo­ra­tion:

The par­tic­u­lar ob­ject level foci of Dragon Army Bar­racks will change over time as its mem­bers ex­per­i­ment and iter­ate, but at first it will pri­ori­tize the fol­low­ing:

  • Phys­i­cal prox­im­ity (ex­er­cis­ing to­gether, prepar­ing and eat­ing meals to­gether, shar­ing a house and com­mon space)

  • Reg­u­lar ac­tivi­ties for bond­ing and emo­tional sup­port (Cir­cling, pair de­bug­ging, weekly ret­ro­spec­tive, tu­tor­ing/​study hall)

  • Reg­u­lar ac­tivi­ties for growth and de­vel­op­ment (talk night, tu­tor­ing/​study hall, bring­ing in ex­perts, cross-pol­li­na­tion)

  • In­ten­tional cul­ture (ex­per­i­ments around lex­i­con, com­mu­ni­ca­tion, con­flict re­s­olu­tion, bets & cal­ibra­tion, per­sonal mo­ti­va­tion, dis­tri­bu­tion of re­sources & re­spon­si­bil­ities, food ac­qui­si­tion & prepa­ra­tion, etc.)

  • Pro­jects with “ship­pable” prod­ucts (e.g. talks, blog posts, apps, events; some solo, some part­ner, some small group, some whole group; rang­ing from short-term to year-long)

  • Reg­u­lar (ev­ery 6-10 weeks) re­treats to learn a skill, par­take in an ad­ven­ture or challenge, or sim­ply change perspective

All of this, I think, is good. My worry is in the set­ting of pri­ori­ties and al­lo­ca­tion of time. We have six bul­let points here, and only the fifth bul­let point, which is part of the third pri­or­ity out of three, in­volves do­ing some­thing that will have a trial by fire in the real world (and even then, we are po­ten­tially talk­ing about a talk or blog post, which can be dan­ger­ously not-fire-trial-like). The cen­tral goal will be self-im­prove­ment.

The prob­lem is that in my ex­pe­rience, your real ter­mi­nal goal can be self-im­prove­ment all you like, but un­less you choose a differ­ent pri­mary goal and work to­wards that, you won’t self-im­prove all that much. The way you get bet­ter is be­cause you need to get bet­ter to do a thing. Other­wise it’s all, well, let’s let Dun­can’s hero Tyler Dur­den ex­plain:

This is im­por­tantly true (al­though in a literal sense it is ob­vi­ously false), and seems like the most ob­vi­ous point of failure. Another is choos­ing Tyler’s solu­tion to this prob­lem. Don’t do that ei­ther.

So yes, do all six of these things and have all three of these goals, but don't relegate doing a few concrete things every now and then to the bottom of the list. Everyone needs to have the thing, and have the thing be central and important to them, whatever the thing may be, and each person should then judge their success or failure on that basis; the group also needs a big thing. Yes, we will also evaluate whether we hit the self-improvement marks, but on their own they simply do not cut it.

Credit to my wife Laura Baur for mak­ing this point very clear and ex­plicit to me, so that I re­al­ized its im­por­tance. Which is very high.

Dragon Army Bar­racks will be­gin with a move-in week­end that will in­clude ~10 hours of group bond­ing, dis­cus­sion, and norm-set­ting. After that, it will en­ter an eight-week boot­camp phase, in which each mem­ber will par­ti­ci­pate in at least the fol­low­ing:

  • Whole group ex­er­cise (90min, 3x/​wk, e.g. Tue/​Fri/​Sun)

  • Whole group din­ner and ret­ro­spec­tive (120min, 1x/​wk, e.g. Tue evening)

  • Small group baseline skill ac­qui­si­tion/​study hall/​cross-pol­li­na­tion (90min, 1x/​wk)

  • Small group cir­cle-shaped dis­cus­sion (120min, 1x/​wk)

  • Pair de­bug­ging or rap­port build­ing (45min, 2x/​wk)

  • One-on-one check-in with com­man­der (20min, 2x/​wk)

  • Chore/​house re­spon­si­bil­ities (90min dis­tributed)

  • Pub­lish­able/​ship­pable solo small-scale pro­ject work with weekly pub­lic up­date (100min dis­tributed)

… for a to­tal time com­mit­ment of 16h/​week or 128 hours to­tal, fol­lowed by a whole group re­treat and re­ori­en­ta­tion. The house will then en­ter an eight-week trial phase, in which each mem­ber will par­ti­ci­pate in at least the fol­low­ing:

  • Whole group ex­er­cise (90min, 3x/​wk)

  • Whole group din­ner, ret­ro­spec­tive, and plot­ting (150min, 1x/​wk)

  • Small group cir­cling and/​or pair de­bug­ging (120min dis­tributed)

  • Pub­lish­able/​ship­pable small group medium-scale pro­ject work with weekly pub­lic up­date (180min dis­tributed)

  • One-on-one check-in with com­man­der (20min, 1x/​wk)

  • Chore/​house re­spon­si­bil­ities (60min dis­tributed)

… for a to­tal time com­mit­ment of 13h/​week or 104 hours to­tal, again fol­lowed by a whole group re­treat and re­ori­en­ta­tion. The house will then en­ter a third phase where com­mit­ments will likely change, but will in­clude at a min­i­mum whole group ex­er­cise, whole group din­ner, and some spe­cific small-group re­spon­si­bil­ities, ei­ther so­cial/​emo­tional or pro­ject/​pro­duc­tive (once again end­ing with a whole group re­treat). At some point be­tween the sec­ond and third phase, the house will also ramp up for its first large-scale pro­ject, which is yet to be de­ter­mined but will be roughly on the scale of putting on a CFAR work­shop in terms of time and com­plex­ity.

That's a lot of time, but manageable. I would shift more of it into the project work, and worry less about devoting quite so much time to the other stuff. Having less than a quarter of the time spent toward an outside goal is not good. I can accept a few weeks of phase-in, since moving in and getting to know each other is important, but ten weeks in, only three hours a week of 'real work' is being done.

Even more im­por­tant, as stated above, I would know ev­ery­one’s in­di­vi­d­ual small and medium scale pro­jects, and the first group pro­ject, be­fore any­one moves in, at a bare min­i­mum. That does not mean they can’t be changed later, but an an­swer that is ex­cit­ing needs to be in place at the start.
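The time-allocation worry above can be checked against the charter's own numbers. A minimal sketch (the category names are my own shorthand for the bullet points quoted earlier, in minutes per week):

```python
# Rough check of how much of each phase's weekly schedule goes toward
# outward-facing project work, using the charter's own minutes-per-week figures.
bootcamp = {
    "exercise": 90 * 3, "dinner+retro": 120, "study hall": 90,
    "circle": 120, "pair debugging": 45 * 2, "check-in": 20 * 2,
    "chores": 90, "project": 100,
}
trial = {
    "exercise": 90 * 3, "dinner+retro+plotting": 150,
    "circling/pair": 120, "project": 180, "check-in": 20, "chores": 60,
}

for name, phase in [("bootcamp", bootcamp), ("trial", trial)]:
    total = sum(phase.values())
    share = phase["project"] / total
    print(f"{name}: {total} min/wk, project share {share:.1%}")
```

In both phases the shippable-project share comes out well under a quarter of the committed time, which is the complaint in numeric form.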

Should the ex­per­i­ment prove suc­cess­ful past its first six months, and worth con­tin­u­ing for a full year or longer, by the end of the first year ev­ery Dragon shall have a skill set in­clud­ing, but not limited to:
  • Above-av­er­age phys­i­cal capacity

  • Above-av­er­age introspection

  • Above-av­er­age plan­ning & ex­e­cu­tion skill

  • Above-av­er­age com­mu­ni­ca­tion/​fa­cil­i­ta­tion skill

  • Above-av­er­age cal­ibra­tion/​de­bi­as­ing/​ra­tio­nal­ity knowledge

  • Above-av­er­age sci­en­tific lab skill/​abil­ity to the­o­rize and rigor­ously in­ves­ti­gate claims

  • Aver­age prob­lem-solv­ing/​de­bug­ging skill

  • Aver­age pub­lic speak­ing skill

  • Aver­age lead­er­ship/​co­or­di­na­tion skill

  • Aver­age teach­ing and tu­tor­ing skill

  • Fun­da­men­tals of first aid & survival

  • Fun­da­men­tals of fi­nan­cial management

  • At least one of: fun­da­men­tals of pro­gram­ming, graphic de­sign, writ­ing, A/​V/​an­i­ma­tion, or similar (em­ploy­able men­tal skill)

  • At least one of: fun­da­men­tals of wood­work­ing, elec­tri­cal en­g­ineer­ing, weld­ing, plumb­ing, or similar (em­ploy­able trade skill)

Fur­ther­more, ev­ery Dragon should have par­ti­ci­pated in:
  • At least six per­sonal growth pro­jects in­volv­ing the de­vel­op­ment of new skill (or hon­ing of prior skill)

  • At least three part­ner- or small-group pro­jects that could not have been com­pleted alone

  • At least one large-scale, whole-army pro­ject that ei­ther a) had a rea­son­able chance of im­pact­ing the world’s most im­por­tant prob­lems, or b) caused sig­nifi­cant per­sonal growth and improvement

  • Daily con­tri­bu­tions to evolved house culture

'Or longer,' as noted above, is scary, so this should make clear what the maximum time length is, which should be no more than two years.

The use of ‘above-av­er­age’ here is good in a first draft, but not good in the fi­nal product. This needs to be much more ex­plicit. What is above av­er­age phys­i­cal ca­pac­ity? Put num­bers on that. What is above av­er­age pub­lic speak­ing? That should mean do­ing some pub­lic speak­ing suc­cess­fully. Cal­ibra­tion tests are a thing, and so forth. Not all the tests will be perfect, but none of them seem im­prac­ti­cal given the time com­mit­ments ev­ery­one is mak­ing. The test is im­por­tant. You need to take the test. Even if you know you will pass it. No cheat­ing. A lot of these are easy to de­sign a test for – you ask the per­son to use the skill to do some­thing in the world, and suc­ceed. No bul­lshit.
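Calibration, at least, is straightforward to score numerically. A minimal sketch of one standard approach, the Brier score (the function name and sample numbers are my own, purely for illustration):

```python
# Score a set of probability estimates against outcomes with the Brier score.
# Lower is better; always guessing 50% scores 0.25, perfect prediction scores 0.
def brier(predictions):
    """predictions: list of (probability_assigned, outcome_was_true) pairs."""
    return sum((p - bool(actual)) ** 2 for p, actual in predictions) / len(predictions)

# A well-calibrated forecaster's 70% claims should come true ~70% of the time.
sample = [(0.9, True), (0.7, True), (0.7, False), (0.3, False), (0.2, False)]
print(brier(sample))
```

Declaring in advance that a Dragon's residency ends with, say, a hundred scored predictions of this form would make "above-average calibration" a testable claim rather than a vibe.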

The test is nec­es­sary to ac­tu­ally get the re­sults, but it’s also im­por­tant to prove them. If you de­clare be­fore you be­gin what the test will be, then you have pre­reg­istered the ex­per­i­ment. Your re­sults then mean a lot more. Ideally the pro­jects will even be picked at the start, or at least some of them, and definitely the big pro­ject. This is all for sci­ence! Isn’t it?

It’s also sus­pi­cious if you have a skill and can’t test it. Is the skill real? Is it use­ful?

Yes, this might mean you need to do some oth­er­wise not so effi­cient things. That’s how these things go. It’s worth it, and it brings re­stric­tions that breed cre­ativity, and com­mit­ments that lead to ac­tion.

Speak­ing of evolved house cul­ture…

Be­cause of both a) the ex­pected value of so­cial ex­plo­ra­tion and b) the cu­mu­la­tive pos­i­tive effects of be­ing in a group that’s try­ing things reg­u­larly and tak­ing ex­per­i­ments se­ri­ously, Dragon Army will en­deavor to adopt no fewer than one new ex­per­i­men­tal norm per week. Each new ex­per­i­men­tal norm should have an in­tended goal or re­sult, an in­for­mal the­o­ret­i­cal back­ing, and a set re-eval­u­a­tion time (de­fault three weeks). There are two routes by which a new ex­per­i­men­tal norm is put into place:

  • The ex­per­i­ment is pro­posed by a mem­ber, dis­cussed in a whole group set­ting, and meets the min­i­mum bar for adop­tion (>60% of the Army sup­ports, with <20% op­posed and no hard ve­tos)

  • The Army has pro­posed no new ex­per­i­ments in the pre­vi­ous week, and the Com­man­der pro­poses three op­tions. The group may then choose one by vote/​con­sen­sus, or gen­er­ate three new op­tions, from which the Com­man­der may choose.

Ex­am­ples of some of the early norms which the house is likely to try out from day one (hit the ground run­ning):
  • The use of a spe­cific ges­ture to greet fel­low Dragons (house salute)

  • Var­i­ous call-and-re­sponse pat­terns sur­round­ing house norms (e.g. “What’s rule num­ber one?” “PROTECT YOURSELF!”)

  • Prac­tice us­ing hook, line, and sinker in so­cial situ­a­tions (three items other than your name for in­tro­duc­tions)

  • The anti-Singer rule for open calls-for-help (if Dragon A says “hey, can any­one help me with X?” the re­spon­si­bil­ity falls on the phys­i­cally clos­est house­mate to ei­ther help or say “Not me/​can’t do it!” at which point the buck passes to the next phys­i­cally clos­est per­son)

  • An “in­ter­rupt” call that any Dragon may use to pause an on­go­ing in­ter­ac­tion for fif­teen seconds

  • A “cul­ture of abun­dance” in which food and lef­tovers within the house are de­fault available to all, with ex­cep­tions de­liber­ately kept as rare as possible

  • A “graf­fiti board” upon which the Army keeps a run­ning in­for­mal record of its mood and thoughts

I strongly ap­prove of this con­cept, and ideally the ex­per­i­menter already has a note­book with tons of ideas in it. I have a feel­ing that he does, or would have one quickly if he bought and car­ried around the note­book. This is also where the out­side com­mu­nity can help, offer­ing more sug­ges­tions.

It would be a good norm for peo­ple to need to try new norms and sys­tems ev­ery so of­ten. Every week is a bit much for reg­u­lar life, but once a month seems quite rea­son­able.
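The adoption bar quoted above (>60% of the Army in favor, <20% opposed, no hard vetoes) is easy to operationalize. A minimal sketch, with hypothetical function and parameter names:

```python
def norm_adopted(in_favor, opposed, abstain=0, hard_vetoes=0):
    """Charter bar: >60% of the Army in favor, <20% opposed, no hard vetoes."""
    army = in_favor + opposed + abstain
    return (hard_vetoes == 0
            and in_favor > 0.6 * army
            and opposed < 0.2 * army)

# In a ten-person house: 7 in favor, 1 opposed, 2 abstaining passes;
# 7 in favor with 2 opposed does not, since 2/10 is not strictly under 20%.
print(norm_adopted(7, 1, abstain=2))   # → True
print(norm_adopted(7, 2, abstain=1))   # → False
```

Note that under the strict reading, abstentions count against the 60% bar, so in a ten-person house any norm needs seven active supporters.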

In terms of the in­di­vi­d­ual sug­ges­tions:

I am in fa­vor of the house salute, the in­ter­rupt and the graf­fiti board. Of those three, the in­ter­rupt seems most likely to turn out to have prob­lems, but it’s definitely worth try­ing and seems quite good if it works.

Hook, line and sinker seems more like a tool or skill to prac­tice, but seems like a good idea for those hav­ing trou­ble with good in­tro­duc­tions.

Call and response is a thing that naturally evolves in any group culture that is having any fun at all, and leads to more fun (here is a prime example and important safety tip), so encouraging more and more formal use of it seems good at first glance. The worry is that if too formal and used too much, this could become anti-epistemic, so I'd keep an eye on that and re-calibrate as needed.

The anti-Singer rule (ASR) is in­ter­est­ing. I think as writ­ten it is too broad, but that a less broad ver­sion would likely be good.

There are four core prob­lems I see.

The first prob­lem is that this de­stroys in­for­ma­tion when the suit­abil­ity of each per­son to do the task is un­known. The first per­son in line has to give a Yes/​No to help be­fore the sec­ond per­son in line re­veals how available they are to help, and so on. Let’s say that Alice asks for help, Bob is clos­est, then Carol, then David and then Eve. Bob does not know if Carol, David or Eve would be happy (or able) to help right now – maybe Bob isn’t so good at this task, or maybe it’s not the best time. Without ASR, Bob could wait for that in­for­ma­tion – if there was a long enough pause, or David and Eve said no, Bob could step up to the plate. The flip side is also the case, where once Bob, Carol and David say no, Eve can end up helping even when she’s clearly not the right choice. Think of this as a no-go­ing-back search al­gorithm, which has a rea­son­ably high rate of failure.

The second and related problem is that Bob has to either explicitly say no to Alice, which costs social points, or do the thing even when he knows this is not an efficient allocation. Even if David is happy to help where Bob is not, Bob still had to say no, and you'd prefer to have avoided that.

The third problem is that this interrupts flow. If Alice requests help, Bob has to explicitly respond with a yes or no. Most people, and all programmers, know how disruptive this can be, and in this house I worry no one can 'check out' or 'focus in' fully while this rule is in place. It could also just be seen as costly in terms of the amount of noise it generates. This seems especially annoying if, for example, David is the one closest to the door, Alice asks someone to let Eli in, and now multiple people have to either explicitly refuse the task or do it even though doing it does not make sense.

The fourth prob­lem is that this im­plic­itly re­wards and pun­ishes phys­i­cal lo­ca­tion, and could po­ten­tially lead to peo­ple avoid­ing phys­i­cal prox­im­ity or the cen­ter of the house. This seems bad.

This means that for classes of help that involve large commitments of time, and/or large variance in people's suitability for the task, especially variance that is invisible to other people, this norm seems like it will be destructive.

On the other hand, if the re­quest is some­thing that any­one can do (some­thing like “give me a hand with this” or “an­swer the phone”) es­pe­cially one that benefits from phys­i­cal prox­im­ity, so the de­fault of ‘near­est per­son helps’ makes sense, this sys­tem seems ex­cel­lent if com­bined with some com­mon sense. One ob­vi­ous ex­ten­sion is that if some­one else thinks that they should do the task, they should speak up and do it (or even just start do­ing it), even if the per­son re­quest­ing didn’t know who to ask. As with many similar things, hav­ing semi-for­mal norms can be quite use­ful if they are used the right amount, but if abused they get dis­rup­tive – the in­for­mal sys­tems they are re­plac­ing are of­ten quite effi­cient and be­ing too ex­plicit lets sys­tems be gamed.

The culture of abundance is the norm that seems most at risk of actively backfiring. The comments pointed this out multiple times. The three obvious failure modes are tragedy of the commons (you don't buy milk because everyone else will drink it), inefficient allocation (you buy milk because you will soon bake a cake, and by the time you go to bake it, the milk is gone), and inability to plan (you buy milk, but you can never count on having any for your breakfast unless you massively oversupply, and you might also not have any cereal).

The re­sult is likely ei­ther more and more ex­cep­tions, less and less available food, or some com­bi­na­tion of the two, po­ten­tially lead­ing to much higher to­tal food ex­penses and more trips to su­per­mar­kets and restau­rants. The closer the house is to the su­per­mar­ket, the bet­ter, even more so than usual.

Of course, if everyone uses common sense, everyone gets to know their housemates' preferences, and the food budget is managed reasonably such that buying food doesn't mean subsidizing everyone else, this can still mostly work out, and certainly some amount of this is good, especially with staple supplies that, if managed properly, should not come close to running out. However, this is not a norm that is self-sustaining on its own – it requires careful management along multiple fronts if it is to work.

Dragon Army Code of Con­duct
While the norms and stan­dards of Dragon Army will be muta­ble by de­sign, the fol­low­ing (once re­vised and rat­ified) will be the im­mutable code of con­duct for the first eight weeks, and is un­likely to change much af­ter that.

  1. A Dragon will pro­tect it­self, i.e. will not sub­mit to pres­sure caus­ing it to do things that are dan­ger­ous or un­healthy, nor wait around pas­sively when in need of help or sup­port (note that this may cause a Dragon to leave the ex­per­i­ment!).

  2. A Dragon will take re­spon­si­bil­ity for its ac­tions, emo­tional re­sponses, and the con­se­quences thereof, e.g. if late will not blame bad luck/​cir­cum­stance, if an­gry or trig­gered will not blame the other party.

  3. A Dragon will as­sume good faith in all in­ter­ac­tions with other Dragons and with house norms and ac­tivi­ties, i.e. will not en­gage in straw­man­ning or the horns effect.

  4. A Dragon will be can­did and proac­tive, e.g. will give other Dragons a chance to hear about and in­ter­act with nega­tive mod­els once they no­tice them form­ing, or will not sit on an emo­tional or in­ter­per­sonal prob­lem un­til it fes­ters into some­thing worse.

  5. A Dragon will be fully pre­sent and sup­port­ive when in­ter­act­ing with other Dragons in for­mal/​offi­cial con­texts, i.e. will not en­gage in silent defec­tion, un­der­min­ing, half­heart­ed­ness, aloof­ness, sub­tle sab­o­tage, or other ac­tions which fol­low the let­ter of the law while vi­o­lat­ing the spirit. Another way to state this is that a Dragon will prac­tice com­part­men­tal­iza­tion—will be able to si­mul­ta­neously hold “I’m deeply skep­ti­cal about this” alongside “but I’m ac­tu­ally giv­ing it an hon­est try,” and post­pone cri­tique/​com­plaint/​sug­ges­tion un­til pre­de­ter­mined check­points. Yet an­other way to state this is that a Dragon will take ex­per­i­ments se­ri­ously, in­clud­ing epistemic hu­mil­ity and ac­tu­ally see­ing things through to their ends rather than fid­dling mid­way.

  6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one's similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior). Another way to state this is that a Dragon will embrace the maxim "don't believe everything that you think."

  7. A Dragon will strive for ex­cel­lence in all things, mod­ified only by a) pri­ori­ti­za­tion and b) do­ing what is nec­es­sary to pro­tect it­self/​max­i­mize to­tal growth and out­put on long time scales.

  8. A Dragon will not defect on other Dragons.

There will be var­i­ous op­er­a­tional­iza­tions of the above com­mit­ments into spe­cific norms (e.g. a Dragon will read all mes­sages and emails within 24 hours, and if a full re­sponse is not pos­si­ble within that win­dow, will send a short re­sponse in­di­cat­ing when the longer re­sponse may be ex­pected) that will oc­cur once the spe­cific mem­bers of the Army have been se­lected and have in­di­vi­d­u­ally signed on. Dis­putes over vi­o­la­tions of the code of con­duct, or con­fu­sions about its op­er­a­tional­iza­tion, will first be ad­dressed one-on-one or in in­for­mal small group, and will then move to gen­eral dis­cus­sion, and then to the first officer, and then to the com­man­der.

Note that all of the above is de­liber­ately kept some­what flex­ible/​vague/​open-ended/​un­set­tled, be­cause we are try­ing not to fall prey to GOODHART’S DEMON.

Bonus points for the ex­plicit in­vo­ca­tion of Good­hart’s De­mon.
That feeling where things start to creep you out and feel scary, the walls are metaphorically closing in, and something seems deeply wrong? Yeah. I didn't get it earlier (except somewhat when Duncan mentioned he was looking admiringly at the Paper Street Soap Company, but that was more of a what-are-you-thinking moment). I got it here.

So something is wrong. Or at least, something feels wrong, in a deep way. What is it?

First, I observe that it isn't related at all to #1, #3, #4 or #7. Those seem clearly safe. So that leaves four suspects. On another pass, it's also not #8, nor is it directly #6 or directly #5. On their own, all of those would seem fine, but once the feeling that something is wrong and potentially out to get you sets in, things that would otherwise be fine stop seeming fine. This is related to the good faith thing – my brain is no longer in good-faith-assuming mode. I'm pretty sure #2 is the problem. So let's focus in there.

The problem is clearly in the rule that a Dragon will take responsibility for their emotional responses, and not blame the other person. That is what is setting off alarm bells.

Why? Because that rule, in other forms, has a history. The other form, which this implies, is:

Thou shalt have the "correct" emotional reaction, the one I want you to have, and you are blameworthy if you do not.

Here is some of that history.

Then some of the other rules reinforce that feeling of 'and LIKE IT' that makes me need to think about controlling the fist of death. With time to reflect, I realize that this is a lot like reading the right to privacy into the Constitution: it isn't technically there, but it does get implied if you want the thing to actually function as intended.

These things are tough. I fully endorse taking full responsibility for the results as a principle, from all parties involved, such that the amount of responsibility often sums to hundreds of percent, but one must note the danger.

Once that is identified and understood, I see that I mostly like this list a lot.
Ran­dom Logistics
  1. The ini­tial filter for at­ten­dance will in­clude a one-on-one in­ter­view with the com­man­der (Dun­can), who will be look­ing for a) cred­ible in­ten­tion to put forth effort to­ward the goal of hav­ing a pos­i­tive im­pact on the world, b) like­li­ness of a strong fit with the struc­ture of the house and the other par­ti­ci­pants, and c) re­li­a­bil­ity à la fi­nan­cial sta­bil­ity and abil­ity to com­mit fully to long-term en­deav­ors. Fi­nal de­ci­sions will be made by the com­man­der and may be in­for­mally ques­tioned/​ap­pealed but not over­ruled by an­other power.

  2. Once a fi­nal list of par­ti­ci­pants is cre­ated, all par­ti­ci­pants will sign a “free state” con­tract of the form “I agree to move into a house within five miles of down­town Berkeley (for length of time X with fi­nan­cial obli­ga­tion Y) some­time in the win­dow of July 1st through Septem­ber 30th, con­di­tional on at least seven other peo­ple sign­ing this same agree­ment.” At that point, the search for a suit­able house will be­gin, pos­si­bly with del­e­ga­tion to par­ti­ci­pants.

  3. Rents in that area tend to run ~$1100 per room, on av­er­age, plus util­ities, plus a 10% con­tri­bu­tion to the gen­eral house fund. Thus, some­one hop­ing for a sin­gle should, in the 85th per­centile worst case, be pre­pared to make a ~$1400/​month com­mit­ment. Similarly, some­one hop­ing for a dou­ble should be pre­pared for ~$700/​month, and some­one hop­ing for a triple should be pre­pared for ~$500/​month, and some­one hop­ing for a quad should be pre­pared for ~$350/​month.

  4. The ini­tial phase of the ex­per­i­ment is a six month com­mit­ment, but leases are gen­er­ally one year. Any Dragon who leaves dur­ing the ex­per­i­ment is re­spon­si­ble for con­tin­u­ing to pay their share of the lease/​util­ities/​house fund, un­less and un­til they have found a re­place­ment per­son the house con­sid­ers ac­cept­able, or have found three po­ten­tial vi­able re­place­ment can­di­dates and had each one re­jected. After six months, should the ex­per­i­ment dis­solve, the house will re­vert to be­ing sim­ply a house, and peo­ple will bear the nor­mal re­spon­si­bil­ity of “keep pay­ing un­til you’ve found your re­place­ment.” (This will likely be eas­iest to en­force by sim­ply hav­ing as many names as pos­si­ble on the ac­tual lease.)

  5. Of the ~90hr/​month, it is as­sumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are in­de­pen­dent or vol­un­tar­ily-paired work. Fur­ther­more, it is as­sumed that the com­man­der main­tains sole au­thor­ity over ~15 of those hours (i.e. can re­quire that they be spent in a spe­cific way con­sis­tent with the aes­thetic above, even in the face of skep­ti­cism or op­po­si­tion).

  6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.

I’ll leave the local logistics mostly to the locals, but will note that five miles is a long distance to go in an arbitrary direction – if I were considering this, I’d want to know a lot more about the exact locations that would be considered.
The one here that needs discussion is #6. You would think I would strongly endorse this, and you would be wrong. I think that an internal economy based on money is a bad idea, especially considering that Duncan explicitly says in the comments it would apply to push-ups. This completely misunderstands the point of push-ups, and (I think) of the type of culture necessary to get a group to bond and become allies. The rich members can’t be allowed to buy their way out of chores, and definitely not out of punishments. The activities involved serve several goals: self-improvement and learning good habits, team bonding, building and coordination, and so forth. They are not simply division of labor. People buying gifts for the group builds the group; it is not simply dividing costs.
The whole point of creating a new culture of this type is, in a sense, to create new sacred things. Those things need to remain sacred, and everyone needs to be focused away from money. Thus, an internal economy that has too wide a scope is actively destructive (and distracting) to the project. I would recommend against it.
I feel so weird telling someone else to not create a market. It’s really strange, man.
Predictions
Now that we’ve reached the end (there are many comments, but one must draw the line somewhere), what do I think will actually happen, if the experiment is done? I think it’s likely that Duncan will get to do his experiment, at least for the initial period. I’d divide the results into four rough scenarios.
I think that the chances of success as defined by Duncan’s goals above are not that high, but substantial. Even though none of the goals are fantastical, there are a lot of ways for him to fall short. Doubtless at least one person will fail at least one of the goals, but what’s the chance that most of the people will stay, most who stay will hit most of the goals, and the group coheres, such that victory can be declared? I’d say maybe 20%.
The most likely scenario, I think, is a successful failure. The house does not get what it came for, but we get a lot of data on what did and did not work, at least some people feel they got a lot out of it personally, and we can, if we want, run another experiment later; or we learn why we should never do this again, without any serious damage being done. I’d give things in this range maybe 30%.
The less bad failure mode, in my mind, is petering out. This is where the house more or less looks like a normal house by the end, except with a vague sense of letdown and what might have been. We get there through a combination of people leaving, people ‘leaving’ but staying put, people slowly ignoring the norms more and more, and Duncan not running a tight enough ship. The house keeps some good norms, and nothing too bad happens, but we don’t really know whether the idea would work, so this has to be considered a letdown. Still, it’s not much worse than if there had been no house in the first place. I give this about 25%.
The other failure mode is disaster. This is where there are big fights and power struggles, or people end up feeling hurt or abused, and there is Big Drama and lots of blame to go around. Alternatively, the power thing gets out of hand, and outsiders come to think that this has turned into something dangerous, perhaps working to break it up. A lot of group houses end this way, so I don’t know the base rate, but I’d say that with the short time frame and natural end point these together are something like 25%. Breaking that down, I’d say a 15% chance of ordinary house drama being the basic story, and a 10% chance that scary unique stuff happens that makes us conclude the experiment was an Arrested Development level huge mistake.
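As a sanity check, the four scenario estimates above should cover the space of outcomes. A minimal sketch tallying my stated numbers (the labels are my own shorthand for the scenarios):

```python
# Subjective probability estimates for the four scenarios described above.
scenarios = {
    "success": 0.20,
    "successful failure": 0.30,
    "petering out": 0.25,
    "disaster": 0.25,  # = 0.15 ordinary house drama + 0.10 scary unique stuff
}

# The scenarios are meant to be exhaustive, so they should sum to 1.
total = sum(scenarios.values())
print(f"total probability: {total:.2f}")  # total probability: 1.00
```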
Good luck!
