On Dragon Army

Analysis Of: Dragon Army: Theory & Charter (30 Minute Read)

Epistemic Status: Varies all over the map from point to point

Length Status: In theory I suppose it could be longer

This is a long post responding to an almost as long (and several weeks and several controversy cycles old, because life comes at you fast on the internet) post, which includes extensive quoting from the original and assumes you have already read it. If you are not interested in a very long analysis of another person’s proposal for a rationalist group house, given that life is short, you can (and probably should) safely skip this one.

Dragon Army is a crazy idea that just might work. It probably won’t, but it might. It might work because it believes in something that has not been tried, and there is a chance that those involved will actually try the thing and see what happens.

Scott made an observation that the responses to the Dragon Army proposal on Less Wrong were mostly constructive criticism, while the responses on Tumblr were mostly expressions of horror. That is exactly the response you would expect from a project with real risks, but also real potential benefits worth taking the risks to get. This updates me strongly in favor of going forward with the project.

As one would expect, the idea as laid out in the charter is far from perfect. There are many modifications that need to be made, both that one could foresee in advance, and that one could not foresee in advance.

My approach is going to be to go through the post and comment on the components of the proposal, then pull back and look at the bigger picture.

In part 1, Duncan makes arguments, then later in part 2 he says the following:

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it’s those aesthetics that will be used to resolve epistemic gridlock). In other words, it’s not so much those arguments as it is the fact that Duncan finds those arguments compelling.

I agree in one particular case that this is the important (and worrisome) thing, but mostly I disagree and think that we should be engaging with the arguments themselves. This could be because I am as interested in learning about and discussing general things using the proposal as a taking-off point, as I am in the proposal itself. A lot of what Duncan discusses and endorses is the value of doing a thing at all even if it isn’t the best thing and I strongly agree with that – this is me going out and engaging with this thing’s thingness, and doing a concrete thing to it.

Purpose of post: Threefold. First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who’s interested in skimming through it for Things To Steal. Second, since my initial proposal to found a house, I’ve noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it’s entirely unfair for me to expect that to stop unless I make my skull-noticing evident. Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere. I figured the best place was somewhere that impartial clear thinkers could weigh in (flattery).

All of this is good, and responses definitely did not do enough looking for Things To Steal, so I’d encourage others to do that more. Duncan (the author) proposes a lot of concrete things and makes a lot of claims. You don’t need to agree with all, most or even many of them to potentially find worthwhile ideas. Letting people know you’ve thought about all the things that can go wrong is also good, although actually thinking about those things is (ideally) more important. I worry from the interactions that Duncan is more concerned with showing that he has considered all the concerns than with the concerns themselves, but at a sufficient level of rigor that algorithm, while inefficient, is still sufficient. And of course, I think Less Wrong was the right place to post this.

What is Dragon Army? It’s a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination. Tongue-in-cheek referred to as the “fascist/​authoritarian take on rationalist housing,” which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people misunderstand what they were signing up for. Aesthetically modeled after Dragon Army from Ender’s Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/​Tyler and Eli Tyre in the role of Bean/​The Narrator.

I applaud Duncan’s instinct to use the metaphors he actually believes apply to what he is doing, rather than the ones that would avoid scaring the living hell out of everyone. Fewer points for actually believing those metaphors without thinking there is a problem.

I have seen and heard arguments against structuring things as an army, or against structuring things in an authoritarian fashion. As I note in the next section, I think these are things to be cautious of but that are worth trying, and I do not find either of them especially scary when they are based on free association. If our kind can’t cooperate enough to have a temporary volunteer metaphoric army that does not shoot anyone and does not get shot at, then we really can’t cooperate. Ender is a person you may wish to emulate, at least until some point in the later books, which may or may not exist.

What should freak Duncan (and everyone else) out is the reference that people seem to be strangely glossing over, which is the Paper Street Soap Company. Fight Club is a great movie (and book), and if you haven’t seen it yet you should go see it and/or read it, but – spoiler alert, guys! – Tyler Durden is a bad dude. Tyler Durden is completely insane. He is not a person you want to copy or emulate. I should not have to be saying this. This should be obvious. Seriously. The original author made this somewhat more explicit in the book, but the movie really should have been clear enough for everyone.

That does not mean that Tyler Durden did not have a worthwhile message for us hidden under all of that. Many (including villains) do, but no, Tyler is not the hero, and no, he does not belong to the Magneto List of Villains Who Are Right. You can notice that your life is ending one minute at a time, and that getting out there in the physical world and taking risks is good, and that the things you own can end up owning you, and even learn how to make soap. Fine.

You do not want to be trying to recreate something called Project Mayhem, unless your Dragon Army is deep inside enemy territory and fighting an actual war (in which case you probably still don’t want to do that, but at least I see the attraction).

Also, if you want to show that you can delegate and trust others, and you’re referring to your second in command as ‘The Narrator’ I would simply say “spoiler alert,” and ask you to ponder that again for a bit.

The weird part is that the proposal here does not, to me, evoke the Paper Street Soap Company at all, so what I am worried about is why this metaphor appealed to Duncan more than anything else.

Why? Current group housing/​attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways. First, there’s not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it’s largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments). Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that’s hitting the rationalist community specifically and the millennial generation more generally. There are a bunch of competitors for “third,” but for now we can leave it at that.

Later in the post, Duncan hits on what third is, and I think that third is super important: Even if you think we’ve got a good version of the group house concept, we are doing far too much exploitation of the group house concept, and not enough exploration. There is variation on the location, size and mix of people that compose a house, and some tinkering with a few other components, but the basic structures remain invariant. The idea of a group house built around the group being a group that does ambitious things together, and/​or operating with an authoritarian structure, has not been tried and been found wanting. It has been found scary and difficult, and not been tried. Yes, I have heard others note that the intentionally named Intentional Community Community did ban authoritarian houses because they find that they rarely work out, but they also all focus on ‘sustainability’ rather than trying to accomplish big things in the world, so I am not overly worried about that.

His second reason also seems important. There is an epidemic of loneliness hitting both our community and the entire world as well. Younger generations may or may not have it worse, but if one can state with a straight face that the average American only has 2-3 close friends (whether or not that statistic is accurate) there is a huge problem. I have more than that, but not as many as I used to, or as many as I would like. If group houses are not generating close friendships, that is very bad, and we should try and fix it, since this is an important unmet need for many of us and they should be a very good place to meet that need.

His first reason I am torn about because it is not obvious that stuff should be actually happening inside the houses as opposed to the houses providing an infrastructure for people who then cause things to happen. Most important things that happen in the world happen in professional organizations or as the result of unusually agenty individuals. Houses could be very successful at causing things to happen without any highly visible things happening within the houses. The most obvious ways to do this are to support the mechanisms Duncan mentions. One could provide support for people to devote their energies to important organizations and projects elsewhere, by letting people get their domestic needs met for less time and money, and by steering them to the most important places and projects. One could also do other things that generate more unusually agenty individuals, or make those individuals more effective when they do agenty things (and/​or make them do even more agenty things), which in my reading is one of two main goals of Dragon Army, the other being to increase connection between its inhabitants.

Duncan’s claim here is that there are things that could be happening directly in the houses that are not happening, and that those things represent low-hanging fruit. This seems plausible, but it does not seem obvious, nor does it seem obvious what the low-hanging fruit would be. The rest of the post does go into details, so judgment needs to be based on those details.

Problem 1: Pendulums

This one’s first because it informs and underlies a lot of my other assumptions. Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal. The society is “stuck” at one point, realizes that there’s something wrong about that point (e.g. that maybe we shouldn’t be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton’s fence in the process.

For example, my experience leads me to put a lot of confidence behind the claim that we’ve traded “a lot of people trapped in marriages that are net bad for them” for “a lot of people who never reap the benefits of what would’ve been a strongly net-positive marriage, because it ended too easily too early on.” The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it’s nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones.

Proposed solution: Rather than choosing between absolutes, integrate. For example, I have two close colleagues/​allies who share millennials’ default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good. So they’ve decided to do handfasting, in which they’re fully committed for a year and a day at a time, and there’s a known period of time for asking the question “should we stick together for another round?”

In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes. Sort of like building a gate into the Chesterton’s fence, instead of knocking it down—do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth.

Caveat/​skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying. And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must. In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?).

I think the pendulum is a very bad model of social progress. It seems pretty rare that we exist (are stuck) at point A, then we try point B, and then we realize that our mistake was that we swung a little too far but that a point we passed through was right all along. This is the Aristotle mistake of automatically praising the mean, when there is no reason to think that your bounds are in any way reasonable, or that you are even thinking about the right set of possible rules, actions or virtues. If anything, there is usually reason to suspect otherwise.

Even in examples where you have a sort of ‘zero-sum’ decision where policy needs to choose a point on a number line, I think this is mostly wrong.

I guess there are some examples of starting out with the equilibrium “Abortions for no one,” then moving to the equilibrium “Abortions for everyone,” and then settling on the equilibrium “Abortions for some, miniature American flags for others” and that being the correct answer (I am not making a claim of any kind of what the correct answer is here). Call this Thesis-Antithesis-Synthesis.

There are a lot more examples of social progress that go more like “Slaves for everyone” then getting to “Slaves for some” and finally reaching “actually, you know what, slaves for no one, ever, and seriously do I even need to explain this one?” Call this Thesis-LessThesis-Antithesis: the antithesis just wins, and that is progress.

There is also the mode where someone notices a real problem but then has a really, mindbogglingly bad idea (for example, Karl Marx), the idea is tried, and it turns out to be not only Not Progress but a huge disaster. Then, if you are paying attention, you abandon it and try something else, having learned from what happened. Now you understand where some of the fences are and what they are for, which helps you come up with a new plan, but your default should absolutely not be “well, sure, that was a huge disaster, so we should try some mixture of what we just did and the old way and that will totally be fine.”

There is no reason to assume you were moving in the correct direction, or even the correct dimension. Do not be fooled by the Overton Window.

If anything, to the extent that you must choose a point on the number line, moving from 0 to 1 and finding 1 to be worse is not a good reason to try 0.5 unless your prior is very strong. It might well be a reason to try −0.5 or −1! Maybe you didn’t even realize that was an option before, or why you might want to do that.

Problem 2: The Unpleasant Valley

As far as I can tell, it’s pretty uncontroversial to claim that humans are systems with a lot of inertia. Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc.

I have some unqualified speculation regarding what’s going on under the hood. For one, I suspect that you’ll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave. People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you’re doing than to cobble together a new system. For another, I think hyperbolic discounting gets way too little credit/​attention, and is a major factor in knocking people off the wagon when they’re trying to forego local behaviors that are known to be intrinsically rewarding for local behaviors that add up to long-term cumulative gain.

But in short, I think the picture of “I’m going to try something new, eh?” often looks like this:

… with an “unpleasant valley” some time after the start point. Think about the cold feet you get after the “honeymoon period” has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/​exercise regime, or your second year of being forced to take piano lessons.

The problem is, people never make it to the third year, where they’re actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it. Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just … make you keep going). But left to our own devices, we’ll often get halfway through an experiment and just … stop, without ever finding out what the far side is actually like.

Proposed solution: Make experiments “unquittable.” The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line. If (big if) we take those as a given, then it should be safe to, in essence, “lock oneself in,” via any number of commitment mechanisms. Or, to put it in other words: “Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal? Fine, then—Medium-Term Future Me doesn’t get a vote.” Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering.

Caveat/​skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risk ends up making even good bets turn out very very badly; we really should’ve built in an ejector seat. This risk can be mostly ameliorated by starting small and giving people a chance to calibrate—you don’t make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first.

And, of course, you do build in an ejector seat. See next.

This is the core thesis behind a lot of the concrete details of Dragon Army. Temporary commitment allows you to get through the time period where you are getting negative short-term payoffs that sap your motivation, and reach a later stage where you get paid off for all your hard work, while giving you the chance to bail if it turns out the experiment is a failure and you are never going to get rewarded.

I would have drawn the graph above with a lot more random variations, but the implications are the same.

I think this is a key part that people should steal, if they do not have a better system already in place that works for them. When you are learning to play the piano, you are effectively deciding each day whether to stick with it or to quit, and you only learn to play the piano if you never decide to quit (you can obviously miss a day and recover, but I think the toy model gives the key insights and is good enough). You can reliably predict that there will be variation (some random, some predictable) in your motivation from day to day and week to week, and over longer time frames, so if you give yourself a veto every day (or every week) then by default you will quit far too often.
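A minimal sketch of that toy model (the quit probabilities here are my own, purely illustrative): give yourself even a small chance of quitting at each daily veto point, and the compounding does the rest.

```python
# Toy model of the daily veto: each practice day carries some small
# probability of deciding to quit, driven by random dips in motivation.
# Survival probability after n days is (1 - p) ** n.
# The probabilities below are illustrative assumptions, not measurements.

def survival_probability(daily_quit_prob: float, days: int) -> float:
    """Chance you are still practicing after `days` daily veto points."""
    return (1 - daily_quit_prob) ** days

for p in (0.01, 0.005, 0.001):
    for years in (1, 3):
        days = 365 * years
        print(f"p={p:.3f}, {years} year(s): "
              f"{survival_probability(p, days):.1%} still practicing")
```

Even at a one-in-a-thousand daily quit chance, you are down to roughly a two-in-three chance of still playing after one year and one-in-three after three; at one-in-a-hundred you are almost certainly gone within the year.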

If every few years, you hold a vote on whether to leave the European Union and destroy your economy, or to end your democracy and appoint a dictator, eventually the answer will be yes. It will not be the ‘will of the people’ so much as the ‘whim of the people’ and you want protection against that. The one-person case is no different.

The ejector seat is important. If things are going sufficiently badly, there needs to be a way out, because the alternatives are to either stick with the thing, or to eject anyway and destroy your ability to commit to future things. Even when you eject for good reasons using the agreed upon procedures, it still damages your ability to commit. The key is to calibrate the threshold for the seat, in terms of requirements and costs, such that it being used implies that the decision to eject was over-determined, but with a bar no higher than is necessary for that to be true.

For most commitments, your ability to commit to things is far more valuable than anything else at stake. Even when the other stakes are big, that means the commitment stakes are big too. This means that once you commit, you should follow through almost all the time, even when you realize that agreeing to commit was a mistake. That in turn means you should think very carefully about when to commit to things, and should not commit if you think you are likely to quit in a way that damages your ability to commit.

I think that if anything, Duncan understates the importance of reliable commitment. His statements above about marriage are a good example of that, even despite the corrective words he writes about the subject later on. Agreeing to stay together for a year is a sea change from no commitment at all, and there are some big benefits to the year, but that is not remotely like the benefits of a real marriage. Giving an agreement an end point, at which the parties will renegotiate, fundamentally changes the nature of the relationship. Longer term plans and trades, which are extremely valuable, cannot be made without worrying about incentive compatibility. Even if both parties want things to continue, each year both have to worry about their current negotiating position, and plan for their future negotiating positions and market value.

You get to move from a world in which you need to play both for the team and for yourself, to one in which you play only for the team. This changes everything.

It also means that you do not get the insurance benefits. This isn’t formal, pay-you-money insurance. This is the insurance of having someone there for you even when you have gone sick or insane or depressed, or something similar, and you have nothing to offer them, and they will be there for you anyway. We need that. We need to be able to count on that.

I could say a lot more, but it would be beyond scope.

Problem 3: Saving Face

If any of you have been to a martial arts academy in the United States, you’re probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups. The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole.

I posit that what’s actually going on includes that, but is somewhat more subtle/​complex. I think the real benefit of the pushup system is that it closes the loop.

Imagine you’re a ten year old kid, and your parent picked you up late from school, and you’re stuck in traffic on your way to the dojo. You’re sitting there, jittering, wondering whether you’re going to get yelled at, wondering whether the master or the other students will think you’re lazy, imagining stuttering as you try to explain that it wasn’t your fault—

Nope, none of that. Because it’s already clearly established that if you fail to show up on time, you do some pushups, and then it’s over. Done. Finished. Like somebody sneezed and somebody else said “bless you,” and now we can all move on with our lives. Doing the pushups creates common knowledge around the questions “does this person know what they did wrong?” and “do we still have faith in their core character?” You take your lumps, everyone sees you taking your lumps, and there’s no dangling suspicion that you were just being lazy, or that other people are secretly judging you. You’ve paid the price in public, and everyone knows it, and this is a good thing.

Proposed solution: This is a solution without a concrete problem, since I haven’t yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress). But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face. Ways to hit the ejector seat on an experiment that’s going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that’s geared toward focusing everyone on perfection. In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there’s no question about whether they’re trying to make amends, or whether that attempt is sufficient.

Caveat/​skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100. The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time). Lastly, there’s something in the mix about arbitrariness—what do pushups have to do with lateness, really? I mean, I get that it’s paying some kind of unpleasant cost, but …

I think the idea that closing the loop is important is very right. Humans need reciprocity and fairness, but if the cost is known and paid, and everyone knows this, we can all move on and not worry about whether we can all move on. One of the things I love about my present job is that we focus hard on closing this loop. You can make a meme-level huge mistake, and as long as you own up to it and fix the issue going forward, everyone puts it behind them. The amount to which this improves my life is hard to overstate.

It is important to note that the push-ups at the dojo are pretty great. They are in some sense a punishment for present you, but not really a punishment as such. Everyone did lots of push-ups anyway. Push-ups are a good thing! By doing them, you show that you are still serious about training, and you do something more intense to make up for the lost time. The push-ups are practical. In expectation, you transfer your push-ups from another time to now, allowing the class to assign fewer push-ups at other times based on the ones people will do when they occasionally walk in late or otherwise mess up.

This means you get the equivalent of a Pigouvian tax. You create a perception of fairness, you correct incentives, and you generate revenue (fitness)! Triple win!

I once saw a Magic: The Gathering team do the literal push-up thing. They were playing a deck with the card Eidolon of the Great Revel, which meant that every time an opponent cast a spell, they had to say ‘trigger’ to make their opponent take damage. They agreed that if anyone ever missed such a trigger, after the round they had to do push-ups. This seemed fun, useful and excellent.

The ‘price’ being an action that is close to efficient anyway is key to the system being a success. If push-ups provided no fitness benefit, the system would not work. The best prices do transfer utility from you to the group, but more importantly they also transfer utility from present you to future you.

Problem 4: Defections & Compounded Interest

I’m pretty sure everyone’s tired of hearing about one-boxing and iterated prisoners’ dilemmas, so I’m going to move through this one fairly quickly even though it could be its own whole multipage post. In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system. Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections—we often convince ourselves that 90% or 99% is good enough, when in fact what’s needed is something like 99.99%.

There’s something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there. Similarly, there’s something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice.

In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers “if you’re 95% reliable, that means I can’t rely on you.” That’s because I’m in a context where “rely” means really trust that it’ll get done. No, really. No, I don’t care what comes up, DID YOU DO THE THING? And if the answer is “Yeah, 19 times out of 20,” then I can’t give that person tasks ever again, because we run more than 20 workshops and I can’t have one of them catastrophically fail.

(I mean, I could. It probably wouldn’t be the end of the world. But that’s exactly the point—I’m trying to create a pocket universe in which certain things, like “the CFAR workshop will go well,” are absolutely reliable, and the “absolute” part is important.)

As far as I can tell, it’s hyperbolic discounting all over again—the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn’t properly weight the impact to those distant, cumulative effects (just like the person who’s going to end up with no retirement savings because they wanted those new shoes this month instead of next month). 1.01^n takes a long time to look like it’s going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified.

But something magical does accrue when you make the jump from 99% to 100%. That’s when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting). It starts with a common knowledge understanding that yes, this is the priority, even—no, wait, especially—when it seems like there are seductively convincing arguments for it to not be. When you know—not hope, but know—that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other.

Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you’re just casually trying out as an informal experiment), with said norm to be modified/​iterated only during predecided strategic check-in points and not on the fly, in the middle of things. Build a habit of clearly distinguishing targets you’re going to hit from targets you’d be happy to hit. Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn’t. Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it’s clear in advance when a line is about to be crossed. Be ridiculously nitpicky and anal about supporting standards that don’t seem worth supporting, in the moment, if they’re in arenas that you’ve previously assessed as susceptible to compounding. Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly.

Caveat/​skull: Obviously, because we’re humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I’ve chafed under standards I fought to install). At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough. The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc. This goes wrongest when things fester and people feel they can’t speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/​= attack).

It brings me great joy that someone out there has taken the need for true reliability, and gone too far.

I do not think the exponential model above is a good model. I do think something special happens when things become reliable enough that you do not feel the need to worry about or plan for what you are going to do when they do not happen, and you can simply assume they will happen.
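For reference, here is what the quoted compounding metaphor (1.01^n versus a one-time 1.1 payoff that drops you to 0.99^n) looks like when run out. The exponents are Duncan’s metaphor, not a calibrated model, and as noted I don’t think it is the right model.

```python
# The quoted metaphor, run out numerically: steady small gains compound
# as 1.01**n, while a one-time payoff of 1.1 followed by 0.99**n decay
# shrinks toward zero. Illustrative exponents only.

for n in (10, 50, 100, 200):
    steady = 1.01 ** n
    one_time = 1.1 * (0.99 ** n)
    print(f"n={n:3d}: steady={steady:6.2f}  one-time-defection={one_time:5.2f}")
```

The curves cross within a handful of steps, which captures the quoted intuition, though real payoffs have far more variance than this.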

A lot of this jump is that your brain accepts that things you agreed to do just happen. You are not going to waste time considering whether or not they are going to happen, you are only going to ask the question how to make them happen. They are automatic and become habits, and the habit of doing this also becomes a habit. Actually being truly reliable is easier in many ways than being unreliable! This is similar to the realization that it is much easier and less taxing to never drink than to drink a very small amount. It is much easier and less taxing to never cheat (for almost all values of cheating) than to contain cheating at a low but non-zero level. Better to not have the option in your mind at all.

There is another great thing that happens when you can assume that a ‘yes, I will do this thing’ from someone means they will do the thing, and that if it turns out they did not, it is because it was ludicrously obvious that they were not supposed to do the thing given the circumstances, and they gave you what warning or adaptation they could. Just as you no longer need to consider the option of not doing the thing, you get to not consider whether they will choose not to do the thing, or what you would need to do to ensure they do the thing.

It is ludicrously hard to get 99.99% reliability from anyone. If you are telling me that I need to come to the weekly meetup 99.9% of the time, you are telling me I can miss it once in twenty years. If you ask for 99.99%, it means meeting every day and missing once in twenty years. Does anyone have real emergencies that rare? Do opportunities worth taking instead come along only once every few decades? This doesn’t make sense. I believe we did manage to go several years in a row in New York without missing a Tuesday night, and yes, that was valuable, because it let people show up without checking first, knowing the meetup would happen. No single person showed up every time, because that’s insane. You would not put ‘Tuesday meetup’ in the ‘this is 100% reliable’ category if you wanted the ‘100% reliable’ category to remain a thing.
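A quick check of that arithmetic (the weekly versus daily cadence and the twenty-year horizon are the assumptions here):

```python
# How many misses does a given reliability level allow over twenty years?
# Cadences and the twenty-year horizon are assumptions for this check.

def allowed_misses(reliability: float, events_per_year: int, years: int) -> float:
    """Expected number of permitted misses at a given reliability level."""
    return events_per_year * years * (1 - reliability)

print(f"weekly meetup at 99.9%:  {allowed_misses(0.999, 52, 20):.2f} misses in 20 years")
print(f"daily meeting at 99.99%: {allowed_misses(0.9999, 365, 20):.2f} misses in 20 years")
```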

There are tasks that, if failed, cause the entire workshop to catastrophically fail, and those cannot be solely entrusted to a 95% reliable person without a backup plan. But if your model says that any failure anywhere will cause catastrophic overall failure then your primary problem is not needing more reliable people, it is engineering a more robust plan that has fewer single points of failure.
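To make the robustness point concrete, a sketch with assumed numbers: a 95% reliable person entrusted alone with a critical task across 20 workshops gives roughly a two-in-three chance of some catastrophic failure, while a single independent backup collapses the risk.

```python
# Illustrative failure math: a critical task handled by a 95%-reliable
# person, alone versus with one independent backup, across 20 workshops.
# Independence of failures is an assumption, not a guarantee.

p_fail = 0.05
workshops = 20

no_backup = 1 - (1 - p_fail) ** workshops        # P(at least one workshop fails)
with_backup = 1 - (1 - p_fail ** 2) ** workshops # both must fail at the same workshop

print(f"no backup:   {no_backup:.1%}")   # ~64%
print(f"with backup: {with_backup:.1%}") # ~4.9%
```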

If you abuse the ‘100% reliable’ label, the label becomes meaningless.

Even if you use the label responsibly, when you pull out ‘100% reliable’ from people and expect that to get you 99.9%, you have to mean it. The thing has to be that important. You don’t need to be launching a space shuttle, but you do have to face large consequences for failure. The consequences need to be horrible enough to justify multiple locked-in, reliable backup plans; there is no other way to get to that level. Then you work in early warning systems, so that if things are going wrong, you learn about it in time to invoke the backup plans.

I strongly endorse the idea of drawing an explicit contrast between places where people are only expected to be somewhat reliable, and those where people are expected to be actually reliable.

I also strongly endorse that the default level of reliability needs to be much, much higher than the standard default level of reliability, especially in The Bay. Things there are really bad.

When I make a plan with a friend in The Bay, I never assume the plan will actually happen. There is actually no one there I feel I can count on to be on time and not flake. I would come to visit more often if plans could actually be made. Instead, suggestions can be made, and half the time things go more or less the way you planned them. This is a terrible, very bad, no good equilibrium. Are there people I want to see badly enough to put up with a 50% reliability rate? Yes, but there are not many, and I get much less than half the utility out of those friendships that I would otherwise get.

When I reach what would otherwise be an agreement with someone in The Bay, I have learned that this is not an agreement, but rather a statement of momentary intent. The other person feels good about the intention of doing the thing, and if the emotions and vibe surrounding things continue to be supportive, and it is still in their interest to follow through, they might actually follow through. What they will absolutely not do is treat their word as their bond and follow through even if they made what turns out to be a bad deal or it seems weird or they could gain status by throwing you under the bus. People do not cooperate in this way. That is not a thing. When you notice it is not a thing, and that people will actively lower your status for treating it as a thing rather than rewarding you, it is almost impossible to keep treating this as a thing.

For further details on the above, and those details are important, see Compass Rose, pretty much the whole blog.

Duncan is trying to build a group house in The Bay that coordinates to actually do things. From where he sits, reliability has ceased to be a thing. Some amount of hyperbole and overreaction is not only reasonable and sympathetic, but even optimal. I sympathize fully with his desire to fix this problem via draconian penalties for non-cooperation.

Ideally, you would not need explicit penalties. There is a large cost to imposing explicit large penalties in any realm. Those penalties crowd out intrinsic motivation and justification. They create adversarial relationships and feel-bad moments, and require large amounts of upkeep. They make it likely things will fall apart if and when the penalties go away.

A much better system, if you can pull it off and keep it, is to have everyone understand that defection is really bad and that people are adjusting their actions and expectations on that basis, and have them make an extraordinary effort already. The penalty that the streak will be over, and the trust will be lost, should be enough. The problem is, it’s often not enough, and it is very hard to signal and pass on this system to new people.

Thus, draconian penalties, while a second best solution, should be considered and tried.

Like other penalties, we should aim to have these penalties be clear to all, be clearly painful in the short term, and clearly be something that in the long term benefits (or at least does not hurt) the group as a whole – they should be a transfer from short-term you to some-term someone, ideally long-term everyone, in a way that all can understand. I am a big fan of exponentially escalating penalties in these situations.

What is missing here is a concrete example of X failure leading to Y consequence, so it’s hard to tell what level of draconian he is considering here.
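In the absence of that concrete example, here is one hedged sketch of what ‘exponentially escalating’ could mean in practice (the base amount and growth factor are arbitrary choices of mine, not anything Duncan proposes):

```python
# Illustrative escalating-penalty schedule: each repeat offense doubles
# the penalty. The base of 10 push-ups and factor of 2 are arbitrary.

def penalty(offense_count: int, base: int = 10, factor: int = 2) -> int:
    """Penalty (in push-ups, say) for the nth offense, 1-indexed."""
    return base * factor ** (offense_count - 1)

for n in range(1, 6):
    print(f"offense {n}: {penalty(n)} push-ups")
```

The point of the exponential shape is that early slips stay cheap and face-saving, while chronic defection rapidly prices itself out.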

Problem 5: Everything else

There are other models and problems in the mix—for instance, I have a model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards, or a model of how to effectively leverage a group around you to accomplish ambitious tasks that requires you to first lay down some “topsoil” of simple/​trivial/​arbitrary activities that starts the growth of an ecology of affordances, or a theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/​letting the perfect be the enemy of the good when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead, or a strong sense based off both research and personal experience that physical proximity matters, and that you can’t build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals.

But I’m going to hold off on going into those in detail until people insist on hearing about them or ask questions/​pose hesitations that could be answered by them.

I think these are good instincts, and also agree with the instinct not to say more here.

Section 2 of 3: Power dynamics

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members. It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community’s evolved norms haven’t really produced results (in the group houses) commensurate with the promises of EA and rationality.

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it’s those aesthetics that will be used to resolve epistemic gridlock). In other words, it’s not so much those arguments as it is the fact that Duncan finds those arguments compelling. It’s worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like “does this guy actually think things through,” “is this guy likely to be stupid or meta-stupid,” “will this guy listen/​react/​update/​pivot in response to evidence or consensus opposition,” and “when this guy has intuitions that he can’t explain, do they tend to be validated in the end?”

In other words, it’s fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because—well—that’s what it is. In milieu like the military, authority figures expect (and get) obedience irrespective of whether or not they’ve earned their underlings’ trust; rationalists tend to have a much higher bar before they’re willing to subordinate their decisionmaking processes, yet still that’s something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary “try things with benefit of the doubt” sort of way). I posit that Dragon Army Barracks works (where “works” means “is good and produces both individual and collective results that outstrip other group houses by at least a factor of three”) if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they’re willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both).

And since that’s a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it’s entirely appropriate that it be given the greatest scrutiny. Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it’s actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction. The rest of you will have to make do with grilling me in the comments here.

Trusting individuals to respond to incentives minute to minute does not, on its own, work beyond the short term. Period. You need to figure out how to make agreements and commitments, to build trust and reciprocity, and work in response to long term incentives toward a greater goal. Otherwise, you fail. At best, you get hijacked by what the incentive gradient and zeitgeist want to happen, and something happens, but you have little or no control over what that something will be.

It’s quite the leap from there to having a person prove general trustworthiness, and to have people trust that person more than they trust their own sense of things. Are there times and places where there have been people I trusted on that level? That is actually a good question. There are contexts and areas in which it is certainly true – my Sensei at the dojo, my teachers in a variety of subjects, Jon Finkel in a game of Magic. There are people I trust in context, when doing a particular thing. But is there anyone I would trust in general, if they told me to do something?

If they told me to do the thing without taking into consideration what I think, then my brain is telling me the answer is no. There are zero such people who can tell me in general what to do, and have me do it even if I thought they were wrong. That, however, is playing on super duper hard mode. A better question is: is there someone who could tell me ‘I know that you disagree with me, but despite that, trust me, you should do this anyway,’ with no better reason given than that they said so, and I would do the thing, pretty much no matter what it is?

The answer is still no, in the sense that if they spoke to me like God spoke to Abraham, and told me to sacrifice my son, I would tell each and every person on Earth to go to hell. The bar doesn’t have to be anything like that high, either – there might be people who could talk me into a major crime, but if so they’d have to actually talk me into it. No running off on Project Mayhem.

Wait. I am not sure that is actually true. If one of a very few select people actually did tell me to do something that seemed crazy, I might just trust it, because the bar for them to actually do that would be so high. Or I might not. You never know until the moment arrives.

Duncan, I hope, is asking for something much weaker than that. He is asking for small scale trust. He is asking that in the moment, with no ‘real’ stakes, members of the house trust him absolutely. This is more like being in a dojo and having a sensei. In the moment, you do not question the sensei within the sacred space. That does not even require you to actually trust them more than you trust yourself. It simply means that you need to trust that things will go better if you do what they say without questioning it and then you follow through on that deal. In limited contexts, this is not weird or scary. If the sensei did something outside their purview, the deal would be off, and rightfully so.

I even have some experience with hypnosis, which is a lot scarier than this in terms of how much trust is required and what can be done if the person in charge goes too far, and there are people I trust to do that, knowing that if they try to take things too far, I’ll (probably, hopefully) snap out of it.

In short, this sounds a lot scarier than it is. Probably. The boundaries of what can be asked for are important, but the type of person reading this likely needs to learn more how to trust others and see where things go, rather than doing that less often or being worried about someone abusing that power. If anything, we are the most prepared to handle that kind of overreach, because we are so (rationally irrationally? the other way around?) scared of it.

Power and authority are generally anti-epistemic—for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo.

Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol’ incentive structures and regular ol’ fallible humans. I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that’s exactly the same claim an egomaniac would make, and I acknowledge that the link between “Duncan makes all his housemates wake up together and do pushups” and “the world is incrementally less likely to end in gray goo and agony” is not obvious.

And it doesn’t quite solve things to say, “well, this is an optional, consent-based process, and if you don’t like it, don’t join,” because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone’s building a coercive trap, it’s everyone’s problem.

Power corrupts. We all know this. Despite that, and despite my being an actual Discordian who thinks the most important power-related skill is how to avoid power, who thinks that in an important sense communication is only possible between equals, and whose leadership model, if I were founding a house, would be Hagbard Celine, I still think this is unfair to power. It is especially unfair to voluntary power.

Not only are there not twelve anti-vaxxers using power for every pro-vaxxer, there are more than twelve pro-vaxxers trying to get parents to vaccinate (mostly successfully) for every one that tries to stop them (mostly unsuccessfully). For every traitorous soldier trying to let the barbarians into the gate, there are more than twelve doing their duty to keep the barbarians out (and violations of this correlate quite well to where barbarians get through the gate, which most days and years does not happen). Think about the ‘facts’ the government tries to promote. Are they boring? Usually. A waste of your taxpayer dollar? Often I’d agree. The emotions they try to evoke are often bad. But most of the time, aside from exceptions like the campaign trail, cops investigating a crime (who are explicitly allowed and encouraged to lie) and anything promoting the lottery, their facts are true.

Yes, power encourages people to hold back information and lie to each other. Yes, power is often abused, but the most important way to maintain and gain power is to exercise that power wisely and for the benefit of the group. This goes double when that power exchange is voluntary and those who have given up power have the ability and the right to walk away at any time.

A certain amount of ‘abuse’ of power is expected, and even appropriate, because power is hard work and a burden, so it needs to have its rewards. Some CEOs are overpaid, no doubt, but in general I believe that leaders and decision makers are undercompensated and underappreciated, rather than the other way around. Most people loathe being in charge even of themselves. Leaders have to look out for everyone else, and when they decide it’s time to look out for themselves instead, we need to make sure they don’t go too far at the expense of others, but if you automatically call that abuse, what you are left with are only burned out leaders. That seems to be happening a lot.

That does still leave us with the problem that power is usually anti-epistemic, due to the SNAFU principle: (Fully open and honest) communication is only possible between equals. The good news is that this is an observation about the world rather than a law of nature, so the better frame is to ask why and how power is anti-epistemic. Social life is also anti-epistemic in many similar ways, largely because any group of people will involve some amount of power and desire to shape the actions, beliefs and opinions of others.

SNAFU’s main mechanism is that the subordinate is under the superior’s power, which results in the superior giving out rewards and punishments (and/or decisions that are functionally rewards and punishments). This leads the subordinate to be unable to communicate honestly with the superior, which in turn leads the superior to engage in deception in order to find out as much of the truth as possible. This gets a lot more complicated (for more detail, and in general, I recommend reading Robert Anton Wilson’s Prometheus Rising and many of his other works), but the core problem is that the subordinate wants to be rewarded and to avoid punishment, as intrinsic goods. The flip side is a superior who wants to give out punishments and withhold rewards.

Wanting to get rewards and avoid punishments you don’t deserve, or to give out punishments and withhold rewards others don’t deserve, is the problem. If the student wants to avoid push-ups, the student will deceive the master. If the student wants to be worthy and therefore avoid push-ups, treating the push-ups as useful incentive, training and signal, then the student will remain honest. In an important sense, the master has even successfully avoided power here: once they set the rules, the student’s worthiness determines what happens, even if the master technically gives out the verdict. The master simply tries to help make the student worthy.

Power is dangerous, but most useful things are. It’s a poor atom blaster that can’t point both ways.

That’s my justification, let’s see what his is.

But on the flip side, we don’t have time to waste. There’s existential risk, for one, and even if you don’t buy ex-risk à la AI or bioterrorism or global warming, people’s available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die. I personally feel that I am operating far below my healthy sustainable maximum capacity, and I’m not alone in that, and something like Dragon Army could help.

So. Claims, as clearly as I can state them, in answer to the question “why should a bunch of people sacrifice non-trivial amounts of their autonomy to Duncan?”

1. Somebody ought to run this, and no one else will. On the meta level, this experiment needs to be run—we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/​hardcore one, and also not very many impressive results coming out of our houses. Due diligence demands investigation of the opposite hypothesis. On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley—goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can’t even conceive of, at this point, because we don’t have a deep grasp of what new affordances appear once you get there.

2. I’m the least unqualified person around. Those words are chosen deliberately, for this post on “less wrong.” I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/​head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/​fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist. If anybody’s intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are.

3. There’s never been a safer context for this sort of experiment. It’s 2017, we live in the United States, and all of the people involved are rationalists. We all know about NVC and double crux, we’re all going to do Circling, we all know about Gendlin’s Focusing, and we’ve all read the Sequences (or will soon). If ever there was a time to say “let’s all step out onto the slippery slope, I think we can keep our balance,” it’s now—there’s no group of people better equipped to stop this from going sideways.

4. It does actually require a tyrant. As a part of a debrief during the weekend experiment/​dry run, we went around the circle and people talked about concerns/​dealbreakers/​things they don’t want to give up. One interesting thing that popped up is that, according to consensus, it’s literally impossible to find a time of day when the whole group could get together to exercise. This happened even with each individual being willing to make personal sacrifices and doing things that are somewhat costly.

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable. And yes, this means some kids left behind (ctrl+f), but the whole point of this is to be instrumentally exclusive and consensually high-commitment. You just need someone to make the actual final call—there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it’s impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options. On top of that, there’s a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/​breaks deadlock, and absorbs all of the blame for the fact that it’s unpleasant to be forced to do things you know you ought to but don’t want to do.

And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff—to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time. That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian.

5. There isn’t really a status quo for power to abusively maintain. Dragon Army Barracks is not an object-level experiment in making the best house; it’s a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question “how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?” It’s taken as a given that we’ll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless. More importantly, the fundamental conceit of the model is “Duncan sees a better way, which might take some time to settle into,” but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway. In short, my tyranny, if net bad, has a natural time limit, because people aren’t going to wait around forever for their results.

6. The experiment has protections built in. Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained. Like the Constitution, Dragon Army’s charter and organization are meant to be “living documents” that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted.

I strongly agree with point one, and think this line should be considered almost a knock-down argument in a lot of contexts. Someone has to and no one else will. Unless you are claiming no one has to, or there is someone else who will, that’s all that need be said. There is a wise saying that ‘those who say it can’t be done should never interrupt the person doing it.’ Similarly, I think, once someone has invoked the Comet King, we should follow the rule that ‘those who agree it must be done need to either do it or let someone else do it.’ As far as I can tell, both statements are true. Someone has to. No one else will.

I do not think Duncan is the least unqualified person around if we had our pick of all people, but we don’t. We only have one person willing to do this, as far as I know. That means the question is, is he qualified enough to give it a go? On that level, I think these qualifications are good enough. I do wish he hadn’t tried to oversell them quite this much.

I also don’t think this is the safest situation of all time to try an exchange of power in the name of group and self improvement, and I worry that Duncan thinks things like double crux and circling are far more important and powerful than they are. Letting things ‘get to his head’ is one thing Duncan should be quite concerned about in such a project. We are special, but we are not as special as this implies. What I do think is that this is an unusually safe place and time to try this experiment. I also don’t think the experiment is all that dangerous, even before counting the protections in point six, the ones Duncan explains elsewhere (including the comments), and the ones that were added later or will be added in the future. Safety first is a thing, but our society is often totally obsessed with safety, and we need to seriously chill out.

I also think point five is important. The natural time limit is a strong check (one of many) on what dangers do exist. However, there seems to be some danger later on of slippage on this if you read the charter, so it needs to be very clear what the final endpoint is and not allow wiggle room later – you can have natural end points in between, but things need to be fully over at a fixed future point, for any given resident, with no (anti?) escape clause.

Section 3 of 3: Dragon Army Charter (DRAFT)

Statement of purpose:

Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order. In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~1.5hr/​day plus occasional weekend activities).

Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre). The commander’s role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/​make decisions when speed or simplification is required. The first officer’s role is to manage and moderate the process of building consensus around the standards of the Army—what they are, and in what priority they should be met, and with what consequences for failure. Other “management” positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/​ratification.

Initial areas of exploration:

The particular object level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following:

  • Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)

  • Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/​study hall)

  • Regular activities for growth and development (talk night, tutoring/​study hall, bringing in experts, cross-pollination)

  • Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)

  • Projects with “shippable” products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from short-term to year-long)

  • Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective

All of this, I think, is good. My worry is in the setting of priorities and allocation of time. We have six bullet points here, and only the fifth bullet point, which is part of the third priority out of three, involves doing something that will have a trial by fire in the real world (and even then, we are potentially talking about a talk or blog post, which can be dangerously not-fire-trial-like). The central goal will be self-improvement.

The problem is that in my experience, your real terminal goal can be self-improvement all you like, but unless you choose a different primary goal and work towards that, you won’t self-improve all that much. The way you get better is because you need to get better to do a thing. Otherwise it’s all, well, let’s let Duncan’s hero Tyler Durden explain:

Self-improvement is masturbation. Now, self-destruction…

This is importantly true (although in a literal sense it is obviously false), and seems like the most obvious point of failure. Another is choosing Tyler’s solution to this problem. Don’t do that either.

So yes, do all six of these things and have all three of these goals, but don’t think it is enough to have, down near the bottom of your list, doing a few concrete things every now and then. Everyone needs to have the thing, and have the thing be central and important to them, whatever the thing may be, and that person should then judge their success or failure on that basis. The group also needs a big thing. Yes, we will also evaluate whether we hit the self-improvement marks, but on their own they simply do not cut it.

Credit to my wife Laura Baur for making this point very clear and explicit to me, so that I realized its importance. Which is very high.

Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting. After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/​wk, e.g. Tue/​Fri/​Sun)

  • Whole group dinner and retrospective (120min, 1x/​wk, e.g. Tue evening)

  • Small group baseline skill acquisition/​study hall/​cross-pollination (90min, 1x/​wk)

  • Small group circle-shaped discussion (120min, 1x/​wk)

  • Pair debugging or rapport building (45min, 2x/​wk)

  • One-on-one check-in with commander (20min, 2x/​wk)

  • Chore/​house responsibilities (90min distributed)

  • Publishable/​shippable solo small-scale project work with weekly public update (100min distributed)

… for a total time commitment of 16h/​week or 128 hours total, followed by a whole group retreat and reorientation. The house will then enter an eight-week trial phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/​wk)

  • Whole group dinner, retrospective, and plotting (150min, 1x/​wk)

  • Small group circling and/​or pair debugging (120min distributed)

  • Publishable/​shippable small group medium-scale project work with weekly public update (180min distributed)

  • One-on-one check-in with commander (20min, 1x/​wk)

  • Chore/​house responsibilities (60min distributed)

… for a total time commitment of 13h/​week or 104 hours total, again followed by a whole group retreat and reorientation. The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/​emotional or project/​productive (once again ending with a whole group retreat). At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity.
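
As a quick sanity check on those totals, here is a minimal sketch (the activity names and minute counts are copied from the two lists above; the charter’s 16h and 13h figures are round numbers, and the exact sums land near them):

```python
# Sum the charter's stated per-week commitments (minutes per week).
bootcamp = {
    "whole group exercise (90min, 3x/wk)": 90 * 3,
    "dinner and retrospective (120min, 1x/wk)": 120,
    "skill acquisition/study hall (90min, 1x/wk)": 90,
    "circle-shaped discussion (120min, 1x/wk)": 120,
    "pair debugging or rapport building (45min, 2x/wk)": 45 * 2,
    "check-in with commander (20min, 2x/wk)": 20 * 2,
    "chores (90min distributed)": 90,
    "solo project work (100min distributed)": 100,
}
trial = {
    "whole group exercise (90min, 3x/wk)": 90 * 3,
    "dinner, retrospective, and plotting (150min, 1x/wk)": 150,
    "circling and/or pair debugging (120min distributed)": 120,
    "small group project work (180min distributed)": 180,
    "check-in with commander (20min, 1x/wk)": 20,
    "chores (60min distributed)": 60,
}

for name, phase in (("bootcamp", bootcamp), ("trial", trial)):
    minutes = sum(phase.values())
    print(f"{name}: {minutes} min/wk = {minutes / 60:.1f} h/wk, "
          f"~{8 * minutes / 60:.0f} hours over eight weeks")
# bootcamp: 920 min/wk = 15.3 h/wk, ~123 hours over eight weeks
# trial: 800 min/wk = 13.3 h/wk, ~107 hours over eight weeks
```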

That’s a lot of time, but manageable. I would shift more of it into the project work, and worry less about devoting quite so much time to the other stuff. Having less than a quarter of the time spent towards an outside goal is not good. I can accept a few weeks of phase-in, since moving in and getting to know each other is important, but ten weeks in, only three hours a week of ‘real work’ is being done.

Even more important, as stated above, I would know everyone’s individual small and medium scale projects, and the first group project, before anyone moves in, at a bare minimum. That does not mean they can’t be changed later, but an answer that is exciting needs to be in place at the start.

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to:

  • Above-average physical capacity

  • Above-average introspection

  • Above-average planning & execution skill

  • Above-average communication/​facilitation skill

  • Above-average calibration/​debiasing/​rationality knowledge

  • Above-average scientific lab skill/​ability to theorize and rigorously investigate claims

  • Average problem-solving/​debugging skill

  • Average public speaking skill

  • Average leadership/​coordination skill

  • Average teaching and tutoring skill

  • Fundamentals of first aid & survival

  • Fundamentals of financial management

  • At least one of: fundamentals of programming, graphic design, writing, A/​V/​animation, or similar (employable mental skill)

  • At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)

Furthermore, every Dragon should have participated in:

  • At least six personal growth projects involving the development of new skill (or honing of prior skill)

  • At least three partner- or small-group projects that could not have been completed alone

  • At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world’s most important problems, or b) caused significant personal growth and improvement

  • Daily contributions to evolved house culture

‘Or longer,’ as noted above, is scary, so this should make it clear what the maximum time length is, which should be not more than two years.

The use of ‘above-average’ here is good in a first draft, but not good in the final product. This needs to be much more explicit. What is above average physical capacity? Put numbers on that. What is above average public speaking? That should mean doing some public speaking successfully. Calibration tests are a thing, and so forth. Not all the tests will be perfect, but none of them seem impractical given the time commitments everyone is making. The test is important. You need to take the test. Even if you know you will pass it. No cheating. A lot of these are easy to design a test for – you ask the person to use the skill to do something in the world, and succeed. No bullshit.

The test is necessary to actually get the results, but it’s also important to prove them. If you declare before you begin what the test will be, then you have preregistered the experiment. Your results then mean a lot more. Ideally the projects will even be picked at the start, or at least some of them, and definitely the big project. This is all for science! Isn’t it?

It’s also suspicious if you have a skill and can’t test it. Is the skill real? Is it useful?

Yes, this might mean you need to do some otherwise not so efficient things. That’s how these things go. It’s worth it, and it brings restrictions that breed creativity, and commitments that lead to action.

Speaking of evolved house culture…

Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that’s trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt no fewer than one new experimental norm per week. Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set re-evaluation time (default three weeks). There are two routes by which a new experimental norm is put into place:

  • The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetos)

  • The Army has proposed no new experiments in the previous week, and the Commander proposes three options. The group may then choose one by vote/​consensus, or generate three new options, from which the Commander may choose.

Examples of some of the early norms which the house is likely to try out from day one (hit the ground running):

  • The use of a specific gesture to greet fellow Dragons (house salute)

  • Various call-and-response patterns surrounding house norms (e.g. “What’s rule number one?” “PROTECT YOURSELF!”)

  • Practice using hook, line, and sinker in social situations (three items other than your name for introductions)

  • The anti-Singer rule for open calls-for-help (if Dragon A says “hey, can anyone help me with X?” the responsibility falls on the physically closest housemate to either help or say “Not me/​can’t do it!” at which point the buck passes to the next physically closest person)

  • An “interrupt” call that any Dragon may use to pause an ongoing interaction for fifteen seconds

  • A “culture of abundance” in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible

  • A “graffiti board” upon which the Army keeps a running informal record of its mood and thoughts

I strongly approve of this concept, and ideally the experimenter already has a notebook with tons of ideas in it. I have a feeling that he does, or would have one quickly if he bought and carried around the notebook. This is also where the outside community can help, offering more suggestions.

It would be a good norm for people to need to try new norms and systems every so often. Every week is a bit much for regular life, but once a month seems quite reasonable.
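
For concreteness, the minimum bar in the first route reduces to a simple predicate. A minimal sketch, assuming votes split into support, opposed, and abstain (the function name and vote representation are mine; only the >60%, <20%, and no-hard-veto thresholds come from the charter):

```python
def norm_adopted(support: int, opposed: int, abstain: int, hard_vetoes: int) -> bool:
    """Charter's minimum bar for adopting a proposed norm: more than 60%
    of the Army in support, fewer than 20% opposed, and no hard vetoes."""
    total = support + opposed + abstain
    if total == 0 or hard_vetoes > 0:
        return False
    return support / total > 0.60 and opposed / total < 0.20

# In a ten-person house: 7 in favor, 1 opposed, 2 abstaining, no vetoes.
print(norm_adopted(support=7, opposed=1, abstain=2, hard_vetoes=0))  # True
# 6 in favor, 2 opposed fails both bars (60% is not >60%, 20% is not <20%).
print(norm_adopted(support=6, opposed=2, abstain=2, hard_vetoes=0))  # False
```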

In terms of the individual suggestions:

I am in favor of the house salute, the interrupt and the graffiti board. Of those three, the interrupt seems most likely to turn out to have problems, but it’s definitely worth trying and seems quite good if it works.

Hook, line and sinker seems more like a tool or skill to practice, but seems like a good idea for those having trouble with good introductions.

Call and response is a thing that naturally evolves in any group culture that is having any fun at all, and leads to more fun (here is a prime example and important safety tip), so encouraging more, and more formal, use of it seems good at first glance. The worry is that if too formal and used too much, this could become anti-epistemic, so I’d keep an eye on that and re-calibrate as needed.

The anti-Singer rule (ASR) is interesting. I think as written it is too broad, but that a less broad version would likely be good.

There are four core problems I see.

The first problem is that this destroys information when the suitability of each person to do the task is unknown. The first person in line has to give a Yes/​No to help before the second person in line reveals how available they are to help, and so on. Let’s say that Alice asks for help, Bob is closest, then Carol, then David and then Eve. Bob does not know if Carol, David or Eve would be happy (or able) to help right now – maybe Bob isn’t so good at this task, or maybe it’s not the best time. Without ASR, Bob could wait for that information – if there was a long enough pause, or David and Eve said no, Bob could step up to the plate. The flip side is also the case, where once Bob, Carol and David say no, Eve can end up helping even when she’s clearly not the right choice. Think of this as a no-going-back search algorithm, which has a reasonably high rate of failure.
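
Here is a toy model of that failure mode, as a minimal sketch (the proximity ordering and names are from the example above; the ‘fit’ scores and the decisions to pass are made up for illustration):

```python
def anti_singer(housemates):
    """Ask housemates in proximity order; each must accept or explicitly
    pass on the spot, with no going back to anyone who already passed."""
    for name, accepts, fit in housemates:
        if accepts:
            return name, fit
    return None, 0.0

# Alice asks for help. Proximity order: Bob, Carol, David, Eve.
# Each entry is (name, accepts when asked?, suitability for the task 0..1).
# Bob, Carol, and David each pass, not knowing who behind them can help,
# so Eve ends up with the task despite being clearly the worst fit.
housemates = [("Bob", False, 0.9), ("Carol", False, 0.8),
              ("David", False, 0.7), ("Eve", True, 0.2)]

chosen, fit = anti_singer(housemates)
best_name, _, best_fit = max(housemates, key=lambda h: h[2])
print(f"Task goes to {chosen} (fit {fit}); the best fit was {best_name} ({best_fit}).")
```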

The second and related problem is that explicitly saying no to Alice costs Bob social points, so he may end up doing the thing even when he knows this is not an efficient allocation. Even if David is happy to help where Bob is not, Bob still had to say no, and you’d prefer to have avoided that.

The third problem is that this interrupts flow. If Alice requests help, Bob has to explicitly respond with a yes or no. Most people, and all programmers, know how disruptive this can be, and I worry that in this house no one can ‘check out’ or ‘focus in’ fully while this rule is in place. It could also just be seen as costly in terms of the amount of noise it generates. This seems especially annoying if, for example, David is the one closest to the door, and Alice asks someone to let Eli in, and now multiple people have to either explicitly refuse the task or do it even though doing it does not make sense.

The fourth problem is that this implicitly rewards and punishes physical location, and could potentially lead to people avoiding physical proximity or the center of the house. This seems bad.

This means that for classes of help that involve large commitments of time, and/or large variance in people’s suitability for the task, especially variance that is invisible to other people, this norm seems like it will be destructive.

On the other hand, if the request is something that anyone can do (something like “give me a hand with this” or “answer the phone”), especially one that benefits from physical proximity so that the default of ‘nearest person helps’ makes sense, this system seems excellent if combined with some common sense. One obvious extension is that if someone else thinks that they should do the task, they should speak up and do it (or even just start doing it), even if the person requesting didn’t know who to ask. As with many similar things, having semi-formal norms can be quite useful if they are used the right amount, but if abused they get disruptive – the informal systems they are replacing are often quite efficient, and being too explicit lets systems be gamed.

The culture of abundance is the norm that seems at risk of actively backfiring. The comments pointed this out multiple times. The three obvious failure modes are tragedy of the commons (you don’t buy milk because everyone else will drink it), inefficient allocation (you buy milk because you will soon bake a cake, and by the time you go to bake it, the milk is gone), and inability to plan (you buy milk, but you can never count on having any for your breakfast unless you massively oversupply, and you might also not have any cereal).

The result is likely either more and more exceptions, less and less available food, or some combination of the two, potentially leading to much higher total food expenses and more trips to supermarkets and restaurants. The closer the house is to the supermarket, the better, even more so than usual.

Of course, if everyone uses common sense, everyone gets to know their housemates’ preferences, and the food budget is managed reasonably such that buying food doesn’t mean subsidizing everyone else, this can still mostly work out, and certainly some amount of this is good, especially with staple supplies that if managed properly should not come close to running out. However, this is not a norm that is self-sustaining on its own – it requires careful management along multiple fronts if it is to work.

Dragon Army Code of Conduct

While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that.

  1. A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).

  2. A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/​circumstance, if angry or triggered will not blame the other party.

  3. A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.

  4. A Dragon will be candid and proactive, e.g. will give other Dragons a chance to hear about and interact with negative models once they notice them forming, or will not sit on an emotional or interpersonal problem until it festers into something worse.

  5. A Dragon will be fully present and supportive when interacting with other Dragons in formal/​official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit. Another way to state this is that a Dragon will practice compartmentalization—will be able to simultaneously hold “I’m deeply skeptical about this” alongside “but I’m actually giving it an honest try,” and postpone critique/​complaint/​suggestion until predetermined checkpoints. Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.

  6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one’s similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior). Another way to state this is that a Dragon will embrace the maxim “don’t believe everything that you think.”

  7. A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/​maximize total growth and output on long time scales.

  8. A Dragon will not defect on other Dragons.

There will be various operationalizations of the above commitments into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) that will occur once the specific members of the Army have been selected and have individually signed on. Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in informal small group, and will then move to general discussion, and then to the first officer, and then to the commander.

Note that all of the above is deliberately kept somewhat flexible/​vague/​open-ended/​unsettled, because we are trying not to fall prey to GOODHART’S DEMON.

Bonus points for the explicit invocation of Goodhart’s Demon.

That feeling where things start to creep you out and feel scary, where the walls are metaphorically closing in and something seems deeply wrong? Yeah. I didn’t get it earlier (except somewhat when Duncan mentioned he was looking admiringly at the Paper Street Soap Company, but that was more of a what are you thinking moment). I got that here.

So something is wrong. Or at least, something feels wrong, in a deep way. What is it?

First, I observe that it isn’t related at all to #1, #3, #4 or #7. Those seem clearly safe. So that leaves four suspects. On another pass, it’s also not #8, nor is it directly #6, or directly #5. On their own, all of those would seem fine, but once the feeling that something is wrong and potentially out to get you sets in, things that would otherwise be fine stop seeming fine. This is related to the good faith thing – my brain is no longer in good-faith-assuming mode. I’m pretty sure #2 is the problem. So let’s focus in there.

The problem is clearly in the rule that a Dragon will take responsibility for their emotional responses, and not blame the other person. That is what is setting off alarm bells.

Why? Because that rule, in other forms, has a history. The other form, which this implies, is:

Thou shalt have the “correct” emotional reaction, the one I want you to have, and you are blameworthy if you do not.

Here is some of that history.

Then some of the other rules reinforce that feeling of ‘and LIKE IT’ that makes me need to think about needing to control the fist of death. With time to reflect, I realize that this is a lot like reading the right to privacy into the Constitution, in that it isn’t technically there but does get implied if you want the thing to actually function as intended.

These things are tough. I fully endorse taking full responsibility for the results as a principle, from all parties involved, such that the amounts of responsibility often add up to hundreds of percent, but one must note the danger.

Once that is identified and understood, I see that I mostly like this list a lot.
Random Logistics

  1. The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to long-term endeavors. Final decisions will be made by the commander and may be informally questioned/​appealed but not overruled by another power.

  2. Once a final list of participants is created, all participants will sign a “free state” contract of the form “I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement.” At that point, the search for a suitable house will begin, possibly with delegation to participants.

  3. Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund. Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/​month commitment. Similarly, someone hoping for a double should be prepared for ~$700/​month, and someone hoping for a triple should be prepared for ~$500/​month, and someone hoping for a quad should be prepared for ~$350/​month.

  4. The initial phase of the experiment is a six month commitment, but leases are generally one year. Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/​utilities/​house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of “keep paying until you’ve found your replacement.” (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)

  5. Of the ~90hr/​month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work. Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).

  6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.

I’ll leave the local logistics mostly to the locals, but will note that five miles is a long distance to go in an arbitrary direction – if I was considering this, I’d want to know a lot more about the exact locations that would be considered.
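
As a side note, the per-person arithmetic in #3 is easy to reproduce. A minimal sketch (the ~$1100 rent and 10% house-fund figures are from the charter; the utilities estimate is my own assumption, chosen to land near the stated ~$1400 worst case):

```python
# Worst-case monthly cost per room, per the charter's rough figures.
rent = 1100                      # ~average rent per room in the area
house_fund = 0.10 * rent         # 10% contribution to the general house fund
utilities = 190                  # assumed for illustration; not specified
worst_case_room = rent + house_fund + utilities  # ~$1400 per room

for occupants, label in ((1, "single"), (2, "double"), (3, "triple"), (4, "quad")):
    print(f"{label}: ~${worst_case_room / occupants:,.0f}/month per person")
# single: ~$1,400/month per person
# double: ~$700/month per person
# triple: ~$467/month per person
# quad: ~$350/month per person
```

These land close to the charter’s ~$1400/~$700/~$500/~$350 round figures (the charter rounds the triple up to ~$500).
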
The one here that needs discussion is #6. You would think I would strongly endorse this, and you would be wrong. I think that an internal economy based on money is a bad idea, especially considering Duncan explicitly says in the comments that it would apply to push-ups. This completely misunderstands the point of push-ups, and (I think) the type of culture necessary to get a group to bond and become allies. The rich can’t be allowed to buy their way out of chores, and definitely not out of punishments. The activities involved have a bunch of goals: self-improvement and learning good habits, team bonding and building and coordination, and so forth. They are not simply division of labor. People buying gifts for the group builds the group; it is not simply dividing costs.

The whole point of creating a new culture of this type is, in a sense, to create new sacred things. Those things need to remain sacred, and everyone needs to be focused away from money. Thus, an internal economy that has too wide a scope is actively destructive (and distracting) to the project. I would recommend against it.

I feel so weird telling someone else to not create a market. It’s really strange, man.
Predictions

Now that we’ve reached the end (there are many comments, but one must draw the line somewhere), what do I think will actually happen, if the experiment is done? I think it’s likely that Duncan will get to do his experiment, at least for the initial period. I’d divide the results into four rough scenarios.

I think that the chances of success as defined by Duncan’s goals above are not that high, but substantial. Even though none of the goals are fantastical, there are a lot of ways for him to fall short. Doubtless at least one person will fail at least one of the goals, but what’s the chance that most of the people will stay, most who stay will hit most of the goals, and the group will hit its goals as well, such that victory can be declared? I’d say maybe 20%.

The most likely scenario, I think, is a successful failure. The house does not get what it came for, but we get a lot of data on what did and did not work, at least some people feel they got a lot out of it personally, and we can if we want to run another experiment later, or we learned why we should never do this again without any serious damage being done. I’d give things in this range maybe 30%.

The less bad failure mode, in my mind, is petering out. This is where the house more or less looks like a normal house by the end, except with a vague sense of letdown and what might have been. We get there through a combination of people leaving, people ‘leaving’ but staying put, people slowly ignoring the norms more and more, and Duncan not running a tight enough ship. The house keeps some good norms, and nothing too bad happens, but we don’t really know whether the idea would work, so this has to be considered a letdown. Still, it’s not much worse than if there was no house in the first place. I give this about 25%.

The other failure mode is disaster. This is where there are big fights and power struggles, or people end up feeling hurt or abused, and there is Big Drama and lots of blame to go around. Alternatively, the power thing gets out of hand, and outsiders think that this has turned into something dangerous, perhaps working to break it up. A lot of group houses end this way, so I don’t know the base rate, but I’d say with the short time frame and natural end point these together are something like 25%. Breaking that down, I’d say 15% chance of ordinary house drama being the basic story, and 10% chance that scary unique stuff happens that makes us conclude that the experiment was an Arrested Development level huge mistake.
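
Collecting those estimates in one place, as a minimal sketch (the labels are mine; the probabilities are exactly the ones given above, with the disaster bucket combining its 15% and 10% sub-cases):

```python
# The four scenario estimates from the preceding paragraphs.
scenarios = {
    "success (victory declared)": 0.20,
    "successful failure (useful data either way)": 0.30,
    "petering out": 0.25,
    "disaster (15% ordinary drama + 10% scary unique stuff)": 0.25,
}

# The four buckets are meant to be exhaustive, so they should sum to 1.
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

for outcome, p in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{p:.0%}  {outcome}")
```
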
Good luck!