The most common cause of the collapse of high-investment intentional communities is romantic drama.
(Maybe the Dragon Barracks are so obviously a boy thing that you’re taking for granted that there will be no girls in the house, but all the weird non-gendered pronouns like “a Dragon will brush its teeth” imply either an attempt to have a team composed of both men and women, or else a hilarious level of contempt for the agency of your space monkeys. I’m going to assume that you’re imagining mixed-gender living arrangements rather than already starting with verbal de-personalization of presumed uniformly male space monkeys...)
So anyway, assuming men and women in the house at the same time, that’s what usually causes things to collapse in the long run.
The two standard failure modes are Bonobo egalitarianism that collapses due to the accumulation of residual jealousies over time, or else a harem forming around the charismatic cult leader (which isn’t necessarily a failure mode… it is just a sign of a cult leader whose stated community goals are a load of hypocritical baloney compared to the real goal of getting more than his “fair share” of tail—cue the Limp Bizkit song).
There are lots of patches for this sort of thing that have historically worked for various kinds of communities. Requiring celibacy is an obvious one that monasteries often use. Disallowing any romantic statuses except “single” and “closed dyadic marriage” (with a managed “courting” status to mediate the one-way transition) is another standard trick.
Whatever the rule is, the standard enforcement mechanism is “ostracism” because the real problem from a social engineering perspective is the accumulation of complicated feelings that slow and redirect the workings of the social machine away from its stated purposes and towards managing the wreckage of new and old love triangles. If you throw away the cogs that are liable to have “complicated feelings” and replace them with non-complicated cogs… then the machine should continue to run as designed?
(I think maybe the romantic mores that were junked in the US in the 1960s arose in the first place because villages are kinda like autopoietic intentional communities. The pragmatically useful norms of village romance, that kept the village from exploding, could be semi-safely junked because (well, obviously “the pill” but also because) cities are anonymous and moderately well mixed… essentially everyone in a city is already pre-ostracized by everyone else, and we each are desperately struggling to create a synthetic village-like community despite the isolating forces of urban mixing. In an already chaotic urban romantic economy a divorce causing additional minor lesioning of the local social graph is like a dust devil in a hurricane. There might actually be a lot of dust devils caused by hurricane turbulence for all I know, but I’m pretty sure no one cares much because the actual hurricane makes them irrelevant.)
Anyway, for the above reasons, you might want to just say “this is a fraternity and if women want to start a rationalist sorority that can be a separate thing”. Alternatively, think about romantic norms up front.
One idea that is probably necessary but not sufficient is for the Commander (and anyone else with any authority in the house) to have an absolute commitment not to sleep with anyone else in the house.
Edit: with this rule, a different/earlier version of me might have been interested. Without it I would never be.
Anyway, for the above reasons, you might want to just say “this is a fraternity and if women want to start a rationalist sorority that can be a separate thing”.
Possible advantage of this solution: I’ve noticed that male bonding gets a lot easier when a group goes from being “almost all guys” to “all guys”. (I imagine it would get easier still if you are regularly doing testosterone-elevating things that require coordination with your group of guys, the way sports teams, armies, fraternities, and heavy metal bands do. I suspect men have a pack hunting instinct that gets activated in circumstances like these.)
Data point to the contrary: I spent two years in a closed military unit with 44 guys and 5 girls (in Israel). Each of the girls went through at least a couple of in-unit boyfriends during that time, but that wasn’t a major source of drama. It took quite a bit of suffering to forge the unit bonds (a 4-month combat boot camp to start our service), but by the end of it, people cared about “the unit” as a whole more than about personal drama. I certainly can’t imagine that the “bonding” could have been any stronger without the girls there.
And one final point of support for DA: while I was living in a closed barracks, with five girls, a huge workload, strict rules and significant barriers to exit, I read Ender’s Game and thought “this is exactly like my life, and it’s awesome”.
I agree with some of the critics here that Duncan is overconfident in his ability to make this work. I also agree that there’s a limit to how much you can learn from a work of fiction about space monkey superchildren. But a lot of the criticism here is even more overconfident, and it comes from people who never lived in a DA-like situation in their lives, so all the evidence they’re basing their criticism on is fictional.
It’s especially worth noting that the group is highly competent and self-selecting for the environment, too, so we’re likely to respond in the same way you did (i.e. if we want to say that your experience “beat outside view,” then we’re pretty well set up for ours to beat outside view similarly, even if that outside view is somewhat unpromising).
it comes from people who never lived in a DA-like situation in their lives, so all the evidence they’re basing their criticism on is fictional.
I’ve been going off statistics which, AFAIK, aren’t fictional. Am I wrong in my assumption that the military, which seems like a decent comparison point, has an above-average rate of sexual harassment, sexual assault, bloated budgets, and bureaucratic waste? All the statistics and research I’ve read suggest that at least the US military has a lot of problems and should not be used as a role model.
Listening to the ways that various interviewees talked about the military ethos, and used value language to describe their experiences, I found myself thinking, like we Rogerians do, “What is it they are trying to get me to understand? There is something they are trying to be insistent about, but it’s not clear; what is it?”
Eventually, I found something between the lines. It’s hard to express directly; it works best if we start with what I hypothesize is the other side.
The US military, and probably all militaries ever, have a really quite low tolerance for fuckups. When somebody isn’t dependable, when somebody doesn’t exercise adequate restraint in their conduct, they get marginalized so they can’t do too much damage, or are simply gotten rid of.
All these youngsters join up, and have it drummed into them that they have these huge responsibilities to their fellow warriors and their nation, and they must do their jobs right. It’s not just that they have to cover their squad mates in fire-fights, but things like, “If you don’t clean this surface correctly, the guy who is going to try to land a plane on this deck will die and maybe take a bunch of us with him.” And they discover, yes, they have it in them to do their jobs that well, that dependably. They are somebody who pulls their weight and can be counted on.
And furthermore, they discover they are in a whole society of people who are equally determined to be dependable, to pull their weight and be somebody who can be counted on. That can be a downright rapturous experience; I know, because there are other ways to have at least some of that experience, such as through the performing arts, and having tasted it, I can attest it’s positively intoxicating. It’s like falling in love. Or maybe it is falling in love: this probably is more the basis of that intense camaraderie shared by veterans who served together than common adversity or common purpose.
Civilian society, as a whole, is, in contrast, replete with fuckups. People who can’t get out of their own way enough to be depended on, people who don’t take commitments seriously, people who are exploitative, who phone it in, trying to get away with minimal contributions, who don’t care about those who rely on their work, who don’t want to be relied upon, people who don’t want to have self-restraint. We don’t get to throw those people out of society, so there they are, being part of civilian society, fucking up, and their fucking up being tolerated.
People in the military, who subscribe to the discipline of speech and courtesy described above, are way, way, way, way, way too polite to actually come out and say, “We’re different from civilians because we’re not used to putting up with fuckups,” but that is what it sounds like is lurking between the lines. It feels like they’re trying to apologetically and politely say something that more bluntly put might sound like, “See, among us, fucking up is not okay; being a fuck up is not okay. We have these values and stuff which say it’s not okay. And we totally get that that’s okay in civilian life, where if you want to be a fuckup, that’s your free choice. In our culture, the military culture, we see that as not a legitimate choice. We see that as bad – and comport ourselves accordingly.”
If I am correct that this is the subtext, it also explains some of the difficulty that discharged service members can experience reintegrating into civilian society. The go-to explanation for difficulties reintegrating is usually PTSD or other socio/emotional “damage” that prevents reintegration. But that would be how civilian society sees it: “if you can’t join us, it must be because you’re broken.” But what if it’s just straight-up acculturative stress, from (re)joining a society with a very different value system, one which does not support and espouse values that were not merely emotionally important, but that plainly and obviously organized the society one left in ways one prized?
Personally, I don’t think that the military helps. The claim is implausible, as personality traits are pretty stubborn things. Anecdotes are definitely confounded, as militaries these days can be selective (literally administering IQ tests), and young men who enlist will mature as a simple matter of time. Military-style boot camps are one of the juvenile justice interventions we can say don’t work well or maybe at all (“Preventing future offending of delinquents and offenders: what have we learned from experiments and meta-analyses?”, Mackenzie & Farrington 2015), despite being aimed at the ‘youngsters’ who ought to most benefit from not being ‘fuckups’ and being aimed much more explicitly at that goal with a lower bar of success. And the natural experiments I know of, like the Vietnam War draft lottery, show permanent large harms to income from being drafted (most famously, Angrist 1990), which is certainly not what one would expect from a magical organization which turns fuckup civilians into reliable soldiers and explains why super-competent soldiers have such difficulty comporting in & reintegrating into a civilian life of tragic incompetence everywhere.
Some confounds/conflations in the above? Like, I agree with the truth value of the specific examples you’ve cited, but I think I disagree with the implicit claim that they’re necessarily entangled with the thing Kaj is quoting.
e.g. yes, juvenile military institutions don’t prevent people from being delinquent or discourage future criminality, but that’s not to say that they don’t cause those people, while embedded, to be reliable for object-level tasks and deadlines.
Similarly, the absolute horror and chaos that was Vietnam War combat, and the subsequent shredding of the psyches of people who didn’t volunteer to be there, seems fundamentally different from e.g. modern duty on an aircraft carrier or WWII quartermastering. It doesn’t seem incoherent or contradictory to say both [military culture promotes reliability] and also [being drafted in Vietnam screws you up, military schools don’t fix teenage delinquency].
I also note that both examples cited talk about people who don’t self-select in, which—if relevant—wouldn’t surprise me.
I think “implausible because personality traits are pretty stubborn” is an overconfident statement—personality traits are pretty stubborn, but being thoroughly embedded in a culture that forces you to practice certain skills and surrounds you with coherent social pressures is also pretty stubborn. And in point of fact, while within that context, culture clearly dominates over personality traits, whatever else happens afterwards.
If I’ve misunderstood your claims, please forgive and correct—I feel like I might’ve missed your crux.
Duncan’s comment already touched upon this, but just to highlight it: both of your cited studies are about situations where people were literally forced to join against their will; the Vietnam example additionally has those people exposed to the horror that was Vietnam. Being forced to join something against one’s will tends to make people very resistant to the norms advocated there, and even to actively behave in the opposite way as soon as they get out of there. (I’m reminded of all the kids who decided, for many years afterwards, they want to have nothing to do with sports or exercise because they had to suffer through school gym class.) It’s not a condition where you’d even expect to get much of the internalized pride in the group norms, and desire to act accordingly, that was discussed in my quote.
I get that you picked those studies to combat the confounding from selection (both in the military screening its candidates and the candidates themselves self-selecting), but the context of this discussion was “is Dragon Army a good idea”. Dragon Army participants are also going to be both self-selected and heavily screened for suitability, so whether or not this kind of an intervention would work for the population at large isn’t actually the question we’re interested in.
Unfortunately I think at this point the discussion can only go towards a back and forth on what is good and bad about the military, which can’t be very profitable, and this kind of debate has gone on for so long already that it’s embedded into popular culture. It’s also very heavily culture-warish.
Clearly, the military is adapted for one task, which requires an extraordinary amount of dependability and low likelihood of failure. There’s also an extraordinary cost for that low likelihood of failure, which encompasses the things you pointed out. I don’t think any society has survived very long being converted into 100% military culture, nor has it survived getting rid of it completely.
Clearly, the military is adapted for one task, which requires an extraordinary amount of dependability and low likelihood of failure.
Maybe a low likelihood of the kinds of errors for which it optimizes, but not in general. An above-average rate of sexual assault is a sign of failure.
The NSA lost their cyber-weapons (maybe to Russian spies) and now you have civilian targets like hospitals getting attacked because they didn’t do their OPSec properly.
Romantic entanglements and their fallout are not ruled out by all-male environments even if the members do not identify as homosexual. So it’s still important to consider these issues even if there are no women at all.
Can confirm. I was in a fraternity in college with many gay members, some of whom occasionally hooked up and caused manageable levels of drama. This was a relatively recent phenomenon in the history of the fraternity; I think as recently as 10 years before my time nobody was out, and then some people came out after joining.
Currently there are both men and women interested (though many more men than women).
All of your points above seem sound at first glance, and yes, it’s on the docket to be sorted out. I don’t think I want to go full monastery, but there’s a decent chance the house itself will end up being activity-restricted in some way.
I want to add a strong “romantic entanglements are a big risk” voice.
My worst experiences with rationalists (and possibly some of their worst experiences with me) were when romance/sex conflict came up. It turns out people are really bad at being rational when that happens. (This was exacerbated by a lot of people being inexperienced, which may or may not be the case in Dragon Army, but it makes sense that romance and sex drive are the kind of thing that just overwhelms the prefrontal cortex.)
1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that’s more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.
2) I don’t think I’ve met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc. The main reason this makes me uncomfortable is that I don’t see you owning this desire anywhere in your long post. Like, if you had said, just once, “I think I would enjoy being a leader, and I think you might enjoy being led by me,” I would feel calmer. Instead I’m worried that you have convinced yourself that you are grudgingly stepping up as a leader because it’s necessary and no one else will. If you’re not being fully honest about your motivations for nominating yourself to be an authoritarian leader, what else are you hiding?
3) Your post has a very high ratio of detailed proposals to literature review. I would have liked to see you discuss other group houses in more detail, make reference to articles or books or blog posts about the theory of cohousing and of utopian communities more generally, or otherwise demonstrate that you have done your homework to find out what has worked, what has not worked, and why. None of your proposals sound obviously bad to me, and you’ve clearly put some thought and care into articulating them, but it’s not clear whether your proposals are backed up by research, or whether you’re just reasoning from your armchair.
4) Why should anyone follow you on an epic journey to improve their time management skills if you’re sleep-deprived and behind schedule on writing a blog post? Don’t you need to be more or less in control of your own lifestyle before you can lead others to improve theirs?
I don’t think I’ve met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc.
As someone who knows Duncan moderately well in person and has been under his leadership in a few contexts (CFAR instructor training and the recent Dragon Army experiment), I can confirm that this is nowhere close to true. What Duncan is hungry for is for the world to be better, and he thinks as a contingent fact that being the chief of this particular tribe is the best way for him to do that. I agree with Duncan’s assessment of himself that if someone else stepped up to do the thing he would breathe an enormous sigh of relief, rather than be in any way jealous.
Why should anyone follow you on an epic journey to improve their time management skills if you’re sleep-deprived and behind schedule on writing a blog post?
It depends on how urgent you think Duncan thinks having this blog post out sooner rather than later is. If Duncan were optimizing for looking like he has his shit together he could have either just not mentioned that he was sleep-deprived and behind schedule, or he could have gotten more sleep and fallen further behind schedule. Instead he posted the blog post, and went out of his way to mention that he was sleep-deprived and behind schedule, because he is optimizing for something else.
2) Nope, you’re just way off (though I appreciate the candor). I thought about coming up with some sort of epistemically humble “maybe” or “I can see where you got that impression,” but it seems more advisable to simply be direct, and to sound as confident as I am. I’ve been a leader, and I’ve been a follower, and I’ve transitioned in both directions within the same contexts, and there’s no special draw there along any of the lines you laid out. In particular, I think the statement “this needs to happen, and no one else is going to do it” is actually true; if some contender wants to stand up and credibly claim they can pull this off better than me, I will IMMEDIATELY hand them the baton and breathe a sigh of relief—my actual favorite place to be is second or third in command.
Feel free to PM me if you’re actually curious about my history, or to poke around my reputation within the community, or to ask any of the dozen or so people who’ve worked with me for a couple of years, or the twenty people who attended the dry run experiment last week (I can point you in their direction more specifically, also through PM).
(I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won’t make any deliberate effort.)
3) I think you and I might disagree fairly strongly on the importance/value/worth of “the literature” in this arena. Part of the whole point here is that I have a solid inside view, developed from a unique set of experiences, that a lot of other people are doing it wrong. I think there’s some value in literature review (e.g. the sources that Benquo listed up above seem worth at least an afternoon’s perusing), but in three separate fields I’ve found that my idiosyncratic ideas that everyone said contradicted the literature and wouldn’t work did, in fact, work, and produced excellent results; I’m not actually convinced that there’s enough EV to justify more than a quick, 80/20 skim of the available info. I’m currently reasoning from my armchair—that’s a fair point. But also the whole screed is “let’s get down to the business of running experiments and gathering data,” and I note again that we did already do a test weekend that gave promising preliminary support to a lot of my models and claims.
4) Another quite sound/reasonable criticism, taking the outside view with no priors to add detail to your model. In point of fact, though, it’s been a 90th percentile unusual month (I’m the curriculum director in an org that just ran its most ambitious sprint of events to date, including bringing in a round of new employees whose training I was almost entirely responsible for, and then since that ended I’ve been churning hard on this project), and it’s not particularly strong evidence about other months. Also, I think it’s reasonable to posit that one needs to be more or less in control before leading others, but I note it’s not obvious—I can clearly envision (for instance) models in which one person sacrifices themselves to push everyone else forward. That’s not what I plan to do, but the picture isn’t as straightforward as a clever-sounding false equivalency.
Also, lastly, remember the house is supposed to help me, too:
I personally feel that I am operating far below my healthy sustainable maximum capacity, and I’m not alone in that, and something like Dragon Army could help.
I’m not the only one with skills, and a big part of it is creating a construct that I can use to level up and improve. The part where I impose structure is separate from the part where maybe I could leverage social pressure to improve my own workflow.
I think the statement “this needs to happen, and no one else is going to do it” is actually true
Can you point to some reasons why you believe that an authoritarian commune is a good idea (besides “let’s try and see what this button does”)?
in three separate fields I’ve found that my idiosyncratic ideas that everyone said contradicted the literature and wouldn’t work did, in fact, work, and produced excellent results
“Who needs literature, I’m smarter than all of them” is a worrisome attitude. By the way, did you check what the literature actually said? In my experience what “everyone says” literature claims is usually NOT what the literature really claims.
the whole screed is “let’s get down to the business of running experiments and gathering data,”
What is the price for the experiment and who will pay it?
Er … I think the whole post above is all about answering your first question? I’m confused, and feel somewhat strawmanned by the summary “let’s try it and see what this button does.” Because high-commitment, high-structure environments have a long, long history of being actually productive and useful and net-good for a lot of the people that go through them, and ought to be in the toolkit despite their known failure modes, and given the rationalist community’s strong predilections towards individualism, prioritizing flexibility and following short-term motivation, and not committing to things, it seemed naive to expect that a high-commitment, high-structure environment would come into existence via committee. Note that, while not super emphasized in the post above, a major assumption is “if I’m right, I should be able to largely put down the baton six months in when the thing is clearly working,” i.e. it’s more about the structure than the authoritarianism specifically (the authoritarianism being simply a necessary catalyst imo).
The price for the experiment is largely distributed across its members; it’s the money involved in housing and whatever difficulty people suffer from giving up a not-insignificant-but-overall-fairly-small fraction of their agency and self-determination. It’s roughly analogous, I think, to the price one pays to become a black belt, only condensed down into six months rather than spread across several years.
As far as “who needs literature, I’m smarter than all of them” being worrisome—I’m okay with people being worried. Those people are being actively encouraged to influence things here, and also the whole system is based on iteration, and also I object to the strawmanning again (I’ve said more than once that there’s some value to be had there, but am being summed up as rejecting it entirely), and also I am, in fact, smarter than a lot of them. Not all, but a lot, and it’s been proven before in multiple domains, and I’d be an idiot to ignore that.
I’m confused, and feel somewhat strawmanned by the summary “let’s try it and see what this button does.”
That wasn’t a summary of your position, that was a straw counterpoint for you to kick :-)
high-commitment, high-structure environments have a long, long history of being actually productive
Well… it’s complicated. Such environments are good for producing tools for a purpose. Cogs in a machine, maybe, or mass-produced minds from the same mold, or even cannon fodder if you’re unlucky—note that the military is the prototypical “high-commitment, high-structure” institution.
Having tools is certainly productive from the point of the view of the purpose. And it is true that some (maybe many) people feel that being a tool gives you a purposeful life, better than being pointlessly adrift. But, as I said, it’s complicated :-/
it’s more about the structure than the authoritarianism specifically
Structure needs to be enforced—otherwise everyone could easily set up the needed amount of structure in their life themselves. The point of the exercise is, basically, “I will organize your life for you” and that doesn’t work in the no-stick all-carrot setups.
I guess the concept I worry about is responsibility: if you will organize my life for me, you become responsible for it while my responsibility diminishes.
I am, in fact, smarter than a lot of them
That’s a good thing to be, but not necessarily to believe in :-D
In any case, I’m not saying you should do what the literature says, I’m saying you should know what the literature says, and not on the basis of hearsay either.
The price for the experiment is largely distributed across its members
Yes. The price (I’m mostly speaking about things other than money) is uncertain, in statistical terms it’s a random variable with a particular distribution. The question is how far the tail stretches: how bad is the worst-case scenario?
I think the point of the exercise is less “I will organize your life for you,” and more “we will reduce our ability to hide from one another, and therefore all be more likely to conform to our shared sense of that-which-is-endorsed.” The “I will organize” part is more “I will get us all together and turn on some of the relevant and hopefully-appropriate spotlights, and then moderate the discussion about which spotlights should turn back off.”
I have hopes that we can see the worst-case scenarios coming in time to avert them or eject, and that therefore the effective worst-case scenario is basically something like “I had a rough six months and have to find another room to rent again.”
Strong agreement with basically everything you say above.
Can you point to some reasons why you believe that an authoritarian commune is a good idea (besides “let’s try and see what this button does”)?
Because in the real world there are many successful authoritarian organisations? More or less every company you’ve heard of is de facto authoritarian inside (sure, there are exceptions, too).
Because “our kind” seems to have bias against coordination, and an authoritarian leadership is a possible way to solve it?
Because in the real world there are many successful authoritarian organisations?
The issue isn’t so much “authoritarian” as it is the combination of “authoritarian” and “commune”.
Communes tend to be totalitarian and this one is explicitly set up as such (high-commitment, full-immersion, etc.) This makes it a dangerous environment—if people mention noticing the skulls, that’s because there are a LOT of skulls. “Authoritarian” means submission to the authority and in a totalitarian context that means total submission.
Authoritarian organizations like companies merely claim about 40 hours of your time per week plus obedience to a set of mostly external rules. And, of course, they pay you recognizing that their claim is a burden on you :-)
I understand where the impulse comes from: grassroots left is notoriously disorganized with the Occupy movement having been, perhaps, the peak of that—no leadership, no specific demands, lots of talking, zero achieved. But I would be a lot more comfortable with a “normal” goal-directed organization which focuses on external goals and not on molding the minds of its members. I’m very suspicious of mind-molding.
Besides, Duncan’s comments throughout the last week left me with grave doubts about his suitability to lead this kind of project. Low credence, of course, since I’m reacting merely to an internet persona and not to someone I know in real life, but my opinion of that persona took a marked turn for the worse.
an authoritarian leadership is a possible way to solve it?
Sure, it’s a possible way. I’m concerned with the cost / benefit ratio, though. Plus benevolent God Emperors are in short supply.
Not in the sense that the secret police will check your underwear drawer for forbidden literature, but in the sense that they require conforming in more encompassing and more personal ways than the usual institutions of the society (like a workplace or a college, etc.)
Note that things which are basically shared living arrangements on a smaller or larger scale are sometimes called communes even though they don’t require active integration into the life of that mini-society—I don’t have those in mind.
And, of course, this totalitarianism is not a binary variable but an axis with, essentially, a solitary isolated individual at one end and a hive mind on another.
I disagree about 2. After having (a) participated in the weekend experiment and (b) done some “back-channel” references on Duncan, my impression is that he hates the fact that leadership will isolate him from the group he really wants to be a part of. I expect that if the experiment is successful, Duncan will eagerly set aside leadership and integrate himself with the group.
I think the troll obliquely raised one good point with their criticism of the example for Rule 6:
For example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior
Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?
Treating something like your sleep disturbances as your responsibility is fine if e.g. you (like me) have lots of trouble falling asleep and something like people whispering 15 metres from your room is keeping you from falling asleep. In that case, those people are doing everything right and really don’t know that they’re hurting you. It is unreasonable to get angry at them if you haven’t explained to them why their behaviour is bad for you.
Sometimes it’s less clear though. I sometimes use the microwave after midnight. I know that the microwave can be heard in my room and in my room mate’s room. When I use the microwave and think he might be asleep, I stop it before the timer finishes and it beeps loudly. There’s not much excuse to wait for my room mate to specifically request that I do this; I’m more than capable of figuring out a) the microwave beeping at the end is loud and the sort of thing that can disrupt sleep and b) there’s a way I can stop that from happening. It would show some failure of consideration if I were to shrug off the potential inconvenience that the microwave could present to my room mate for the slight benefit of not having to watch the microwave.
This points to one of the failure modes of Tell Culture, where people use it as an excuse to stop doing any thinking about how their actions can affect other people. This actually suggests that one potential house experimental norm could be something like “before taking an action that might affect another Dragon, pause and consider how it might affect them and whether the effect will be a net positive.”
What this all comes down to for me is that it seems unfair to ask people to assume goodwill without also asking them to always attempt to act with goodwill.
I like this comment but I think what this and the original trollpost miss out on is that the LW community in general, due to having a lot of people with autism and sensory issues, has a ton of people who actually do NOT have “reasonable expectations of what other people want to guide their behavior”. The OP quoted here is making a common typical-mind type error. Of COURSE it’s better to live with people who intuit your preferences and act in accordance with them without being told what they are. But it’s obnoxious to shit on attempted solutions to a problem by insisting that morally good people could never have the problem in the first place.
Agreed. I have a bunch of social anxiety and dislike it when a certain degree of social smoothness is treated as necessary to be sorted into the category of “good person”.
My specific criticism is of people (and I don’t just mean other people; I’ve failed here before) who could (with ease, not with Herculean effort) intuit preferences but use Tell Culture or direct communication norms to completely avoid doing so. This is especially maddening if you have social anxiety, because you’re left anxious about bringing the thing up, especially to someone who seems so otherwise socially competent.
Yeah, +1 for not “hiding” behind Tell Culture to save effort.
One of the fixes for the anxiety thing is Circling/Focusing/pair debugging culture, which goes a loooooong way toward both a) building the trust and safety required to bring up such issues with less anxiety and b) actually providing Schelling points for when to say it. We’re also doing a weekly retrospective where it’ll be low-cost and high-support to gently point at such things.
-- A note: I originally sent Duncan criticism privately. I didn’t want to add too much negativity to the discussion. But Duncan asked me to post publicly and I will defer to his judgement. It’s his project and he is a very capable guy. I really hope DA succeeds; the rationalist community could be doing much better on many metrics. In general I find the model of DA very promising. But I have some serious concerns.
-- The ethics code seems extremely strict.
For example this rule strikes me as extraordinarily hard to follow: “A Dragon will assume good faith in all interactions with other Dragons”. As does “A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts”.
Earlier in the document Duncan said “Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings”. This implies to me that Duncan intends to enforce the CoC pretty strictly. Should Duncan be confident it’s reasonable to expect such large deviations from how humans normally operate? I should note that normal bootcamps do not require as much psychologically from their recruits. Even though bootcamps require obedience they don’t normally require recruits to think a certain way.
Duncan explicitly said he was willing to modify norms that members felt were too hard to follow (“Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly.”). But he also said that the CoC was unlikely to change. If I thought the CoC was meant more as a set of guidelines than strict rules I would be less worried. But that is not how I interpreted the post.
-- How many people do we expect to leave or get kicked out?
I have moderated some internet communities (and admin an active one now). Temp bans and warnings can only go so far. At some point you have to be willing to pull the trigger and ban people.
The section on reparations reassured me that Duncan was thinking hard about how to keep people from falling off the path. In addition, unlike most internet communities, the DA recruits will be heavily vetted. But in order to enforce the reparations you either have to appeal to social pressure or the threat of kicking people out. I think the standards are very strict so serious discipline might be needed.
-- Are there practical or ethical problems with this plan?
People who get kicked out of DA are still required to pay rent until they can find a replacement. Assuming they are on the lease, it seems highly unlikely you can kick them out of the house. However, if someone gets kicked out of the house they might be pretty negative towards the rest of the group. It’s probably a bad situation to keep them around, but maybe they can’t easily find a replacement or a new place to live.
Secondly, people who get kicked out might be psychologically unable to remain at the DA barracks. But until they can find someone to replace them they are on the hook for rent. In my personal opinion, joining Dragon Army should be a “good deal” for anyone involved. It’s important that the downside of: “get kicked out” → “lose friends, need to find a replacement despite the fact that you got kicked out and maybe can’t give DA a good review, on the hook for lots of rent” is manageable. I would really hate to see anyone get hurt. I assume Duncan shares my concerns but he didn’t address them in the post.
In addition, has Duncan looked into the legalities surrounding renter’s rights in California (and Berkeley in particular)? This isn’t in the post even if he has done the research.
-- Duncan said the following: “I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won’t make any deliberate effort.”
It’s plausible to me they aren’t much of an outlier. I had the same reaction, as did several people I showed Duncan’s post to (though other people thought Duncan’s post sounded fine). If I didn’t know Duncan was the curriculum director at CFAR I would have thought he was crazy and probably dangerous. Stuff about “living under my thumb”, self-comparisons to Tyler Durden, and the Ender’s Game quote about “quick, decisive obedience” really worried me. Some of the most shocking stuff, from my perspective, was in the pop culture references. But a number of things in the main text gave off an extremely strong cult vibe. Some examples include the “house salute” and the “Various call-and-response patterns surrounding house norms”. I should note I am not accusing Duncan of anything; based on his reputation he seems trustworthy. But his tone definitely set off loud alarm bells for me.
--
Again I am really happy people are considering new rationalist norms. Duncan seems like a very good choice to lead an experimental project. The general strategy of DA seems like a good one. But I wanted to share my concerns.
+1; general appreciation for your willingness to make the commentary public, so that I and others can interact with it openly.
EDIT: I got distracted dealing with the troll. I still hope to return to this comment, but if I fail to, please know that I am definitely mulling it over and taking its content seriously, and that I again thank you for posting.
I have moderated some internet communities (and admin an active one now). Temp bans and warnings can only go so far. At some point you have to be willing to pull the trigger and ban people.
In an internet community, you have fewer tools to change behavior than in personal conversations (and I say that having moderated in a big personal development internet forum for years).
As far as personal development frameworks go, ideas like a “code of perfection” can be found in Landmark (/The Four Agreements). On the other hand, the actual verbal techniques advocated are NVC/Circling/Focusing/Internal Double Crux, which have values of authenticity and accepting the emotions that arise in the moment.
Humans sometimes do have instincts to see other people in bad faith. There are two ways to deal with it.
① Suppress it because you have a codex that doesn’t allow the instinct to be carried out.
② Bring it authentically to the front and be open about it.
Landmarkish thought would advocate ① while Circling leads to ②. Both can work as cultural norms but they are different and if there’s a desire to be in Circling mode, don’t have rules that require the other.
I’m managing/leading an internet gaming community, and the only tools I’ve ever had to use are selection and conversation.
I’ve had one person leave because their goal in joining was to acquire enough information and power to cause harm and they were so unsubtle about it that I was able to identify that and stop them. One additional person left because our norms of ‘don’t cheat’ and ‘be nice to our friends’ were given to him gently by everyone in voice chat every time they were violated.
Oddly enough, both of those people ended up joining a specific competing group that held neither of the norms ‘don’t cheat’ nor ‘don’t make public rape threats towards people who call out your cheating’.
And my selection method? Be public and pushy about what kind of norms you have, and push away people who don’t already have and want to follow those norms.
This post is so thoroughly repulsive and disgusting that I made an account for the sole purpose of pointing out how transparently and obviously perverse this fucked-up proposal is. Naturally I don’t have any actual desire to be critical or rude; it’s just that nobody else is doing it, so because of my infinite kindness and charity (if you have any doubts, rest assured that my closest friends and colleagues will all attest to my beneficent nature), I find myself obligated to step up to the batting plate, so to speak. Ah, if only someone could release me from this great burden. If only.
The author seems to have missed the part of Ender’s Game about the protagonists being children. It’s generally not a good thing for adults to role-play as children (the reasons for which are, I hope, sufficiently obvious to not require elaboration). The dominant impression I get from this is that this resembles the antifa movement and the anti-antifa movement: it’s a bunch of immature adults LARPing but pretending that they aren’t doing so.
Note that despite the author’s insistence on the validity of his experience as a CFAR instructor, he fails to actually point to any concrete benefits that people have derived from that instruction—plausibly because those benefits, when concretely stated without embellishment, are at best underwhelming. Note also that (1) dealing with problems arising from interpersonal romance is not mentioned anywhere in the post and (2) the author’s reply to the comment that does point out the probable future existence of such problems can at best be termed cursory and dismissive.
This suggests that, contrary to the author’s assertion of having amassed a diverse and broad range of skills, and contrary to whatever accolades his colleagues may see fit to place upon him, he hasn’t yet attained the level of social awareness of a typical American high school student. It also suggests that the author’s ability to model himself and to model others has more-or-less not yet attained the level of sophistication required to view people as more than one-dimensional. I.e., the post seems to suggest an attitude of “I, a good person, will find a bunch of good people, and we’ll make these good things happen”. I’m pretty sure I’ve met high school students with a more nuanced (and less optimistic) understanding of human nature.
Naturally, this would be excused if the Berkeley rationalist community were full of people who are actually good people and who tend to get things done. Let’s check: Qiaochu Yuan, one of the most mathematically sophisticated members, has to the best of my knowledge hit a dead end in his PhD, and is becoming a CFAR instructor in Seattle, which makes it seem as though he’s actually concretely worse off compared to the counterfactual in which the rationalist community didn’t exist; Eliezer Yudkowsky has shifted in the direction of posting practically-untrue, self-aggrandizing bullshit on Twitter and Facebook instead of doing anything productive; Arbital is best described as a failure; word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years, leading to severe dissatisfaction among the staff of MIRI; despite the efforts of a very valiant man, people have still not realized that autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren’t actually women; CFAR itself is trending in the direction of adding bureaucracy for bureaucracy’s sake; my own personal experience with people branded as “CFAR instructors” has been extremely negative, with them effectively acting arrogant out of proportion to their competence, not to mention their below-average levels of empathy; there was that bizarre scandal last year in which someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child; etc., etc., etc.
In effect, there seems to be some sort of self-deception around the fact that the Berkeley rationalist community is by almost all reasonable standards severely dysfunctional, with the best people actually being on the periphery of the community. It’s almost as if the author is coming up with the “Dragon Army” in an attempt to help everyone collectively delude themselves into believing they’re much better than they are, because he can’t bear to actually look at the Berkeley rationalist community and see it for what it is: a pile of garbage. Just like how a child from a broken family might imagine that everyone’s getting along. Unfortunately(?), flinching away from the truth doesn’t actually make reality go away.
Amusingly, it actually does seem as though the author partially realizes this. Let’s review the criteria which the author hopes the members of “Dragon Army” will fulfill after a year’s worth of cult membership:
(1) Above-average physical capacity
(2) Above-average introspection
(3) Above-average planning & execution skill
(4) Above-average communication/facilitation skill
(5) Above-average calibration/debiasing/rationality knowledge
(6) Above-average scientific lab skill/ability to theorize and rigorously investigate claims
(7) Average problem-solving/debugging skill
(8) Average public speaking skill
(9) Average leadership/coordination skill
(10) Average teaching and tutoring skill
(11) Fundamentals of first aid & survival
(12) Fundamentals of financial management
(13) At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
(14) At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
“Above-average”? “Average”? Not exactly a high bar. “At least one employable mental skill, and at least one employable trade skill”? Is the correct inference here that the typical participant is actually expected to be not employable at all (i.e., deficient in both categories)? “First aid & survival”—if there was ever any doubt that this is actually just sophisticated childish role-playing… The fact that I (in contrast with the Berkeley rationalist community) have put very little directed effort into the meta-goal of self-improvement and nevertheless plausibly already satisfy 11 of these 14 criteria, with the other 3 not seeming particularly difficult to attain, is not a good sign!
Despite the fixation on “evolving norms” or whatever, the author seems to be particularly blind to what social reality is actually like and what actually makes communities get along. Consider, e.g., the following quote:
for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior
Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?
There are two inferences to be made here:
(1) Members of the Berkeley rationalist community are particularly prone to using bureaucratic rule-setting as a way to compensate for their severely below-average social skills, and
(2) Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don’t actually care whether or not what they’re doing might bother others until they’re told to stop.
In my personal experience, both inferences are correct. Ultimately, what this comes down to is a bunch of socially-inept losers with near-autistic social skills trying to attain the sort of basic social harmony that comes naturally to more competent people via a combination of bizarre mimicry and a mountain of bureaucracy. Naturally, and contrary to the author’s bizarre childish idealism, one can expect a hell of a lot of repressed irritation, interpersonal drama, and general unpleasantness from this experiment.
To top off the turd cake with a cherry, the author’s science fiction writing is trash:
I felt my stomach twist, felt that same odd certainty, this time wrapped in a layer of the coldest, blackest ice. “You came to kill us,” I said. There was a soft rustle as the others straightened, pressure on my shoulders as the space between us closed. “You came to kill us all.”
Anyone who can vomit that out on a page and feel proud of it isn’t fit to lead or teach anything. Period. The world would be concretely better off if the author, and anyone like him, killed themselves.
In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.
I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one’s dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.
So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.
I appreciate Duncan’s attempts to do that conversion and speak to the converted form of the argument.
But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428′s intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.
Ergo, request to all:
Do not feed trolls.
PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.
I’m the person who advocated most strongly for getting the downvote disabled, and I share some of 18239018038528017428′s skepticism about the community in the Bay Area, but I strongly agree with Val’s comment. There are already a ton of case studies on the internet in how fragile good conversational norms are. I’m going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.
(Also ditto everything Val said about not replying to 18239018038528017428)
I’m going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.
Thanks for that; I had already noticed this thread but a policy of reporting things is often helpful. It seemed like Duncan was handling himself well, and that leaving this up was better than censoring it. It seems easier for people to judge the screed fairly with the author’s original tone, and so just editing out the vitriol seems problematic.
With the new site, we expect to have mod tools that will be helpful here, ranging from downvoting making this invisible by default, to IP-banning and other things to make creating a different throwaway account difficult.
For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)
Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter’s opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider—a behavior pattern that doesn’t need more practice, IMHO.
(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)
It’s true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the “hold off on proposing solutions” essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than “hold off on proposing solutions”.) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn’t clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore I would predict that arguments in nasty conversations are less creative and generally just less well thought through.
Here’s another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence “The world would be concretely better off if the author, and anyone like him, killed themselves.” Would you tell them to add it in or not? If not, I suspect there’s status quo bias, or something like it, in operation here.
Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would be awkward to bring up in a serious manner. I think that worked pretty well.
In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider—a behavior pattern that doesn’t need more practice, IMHO.
I just want to point out that Duncan did in fact put a tremendous amount of time in to engaging with this critic (more time than he put in to engaging with any other commenter in this thread, by my estimate).
My other comment should hopefully clarify things, as least with regard to politicization in particular.
To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people’s mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course, never use anything but social cognition in arguments; politics makes even “nerds” or “intellectuals” behave like typical humans.)
It is in fact possible for “heated” or even “nasty” discourse to be very information-rich; this makes sense if you realize that what counts as “nasty” depends on social norms. If you encounter discourse from a different social context (even, for example, simply because the speaker has misunderstood the social context and its norms!) you may read it as “nasty”, despite the fact that the author was specifically intending to communicate content.
Now, of course I don’t consider 18239018038528017428′s comment to be optimally worded—but then, I wouldn’t, because I didn’t write it. This is the important thing to understand: there is value to be had in getting detailed input on the mental states of people unlike oneself.
I agree that Duncan deserves positive reinforcement for engaging with this critic to the extent he did. But I think it was actually good for him epistemically to do so, not just as a demonstration of his willingness-to-bend-over-backwards, and thus, good social nature.
I’m someone who doesn’t live in the Bay Area, has no intention of moving there in the near future, and resents the idea that anyone who wants to be part of what ought to be a worldwide rationality community needs to eventually move to the Bay Area to do so. I’m part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where my civilian identity and a track record of my behaviour are on display. There are closed groups or chats where things are less open, so it’s not as damaging, and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it’s out in the open so I may face the full consequences of my mistakes.
I know lots of people mentioned in ’18239018038528017428′s comment. I either didn’t know those things about them, or I wouldn’t characterize what I did know in such terms. Based on their claims, ’18239018038528017428′ seems to have more intimate knowledge than I do, and I’d guess is also in or around the Bay Area rationality community. Yet they’re on this forum anonymously, framing themselves as some underdog taking down high-status community members, when no criteria for such status have been established other than “works at MIRI/CFAR”, and what they’re doing is just insulting and accusing regular people like the rest of us on the internet. They’re not facing the consequences of their actions.
The information provided isn’t primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is expected as a primary purpose, if not the only one, of discourse here. The primary purposes of ’18239018038528017428′s comment were to express frustration, slander certain individuals, and undermine and discredit Duncan’s project without evidence to back up their claims. These are at cross-purposes with truth-seeking behaviour.
There’s nothing I do which gets policed in terms of tone, on the basis of sensitivity, that ’18239018038528017428′ isn’t also doing. While we’re talking about norms of sensitivity, let’s talk about norms for resolving interpersonal disputes. All the differences between how I and lots of others in the community do it, even if the tone we use isn’t always splendid or sensitive, and how ’18239018038528017428′ does it, are what separate people who have a non-zero respect for norms from those who don’t. This is coming from me, a guy who lots of people think probably already flouts social norms too much.
I am anti-sympathetic to ’18239018038528017428′ and to the question of whether they’re censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong is that most people in online communities don’t like seeing this sort of drama dominate discourse, and in particular there are lots of us who don’t care for ever more drama from one zip code being all anyone pays attention to. That defeats the purpose of this site, and saps the will of people not in the Bay Area to continue to engage in the rationality community. That’s not what anyone needs. Since we’ve established ’18239018038528017428′ seems close enough to probably be part of the Berkeley rationality community already, there are plenty of channels, like private group chats, mailing lists, or other apps, where everyone involved can be connected without user ‘18239018038528017428’ needing to out themselves in front of everyone. They could’ve had a friend do it.
There are plenty of ways they could’ve accomplished everything they would’ve wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces which serve the same purpose, there’s no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton’s fence for discourse is being torn down here, I don’t believe that’s what’s going on, and I think everyone else on LessWrong who isn’t personally involved deserves a say in what they are and aren’t okay with being censored on this site.
You don’t seem to be addressing what I said very much if at all, but rather to mostly be giving your reaction to 18239018038528017428′s comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.
In particular, the speech is not being allowed “to the chagrin of all other users”. I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.
Needless to say, to be allowed is not to be approved.
By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like.
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
I notice you make a number of claims, but that of the ones I disagree with, none of them have “crux nature” for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn’t change my stance.
(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I’ll focus on offering you a pathway by which you could convince me.)
But if I dig a bit, I think I see a hint of a possible double crux. You say:
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information.
I agree with a steelman version of this. (I don’t think it is literally entirely distinct — but I also doubt you do, and I don’t want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply “…and that’s bad.” Whereas I would add instead “…and that’s good.”
In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that’s mostly okay. But when working out social dynamics (like, say, whether a person who’s proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.
At which point I cease caring about “efficient transmission of information”, basically because I think (a) the information being sent is secretly laced with social subtext that’ll affect future transmissions as well as its own perceived truthiness, and (b) the “efficient” transmission is emotionally harder to receive.
So to be succinct, I claim that:
(1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
(2) I am persuadable as per (1). It’s a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn’t preserve civility on Less Wrong.
(3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that’s a point where I am persuadable.
I’m gonna address these thoughts as they apply to this situation. Because you’ve publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won’t tell you you should kill yourself).
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
Did he tell people they should kill themselves?
This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, alternative discourse norms can be valuable, but therefore telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
This can be a valuable skill and it can still be valuable to censor content-free vitriol.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddednees: the attempt to imitate Level 4 results in Level 1.
Yes, it takes a lot of effort to avoid telling people that they should kill themselves… Sorry, but I don’t really mind using the ability to keep that sort of thought to yourself as a filter.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
If we remove Chesterton’s Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.
Maybe it’d be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment’s information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan’s feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don’t defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.
See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/. Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered, but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!
For obvious reasons, it’s much easier to convert a nice website to a nasty one than the other way around. And if you want a rationalist 4chan, we already have that. The potential gains from turning the lesswrong.com domain into another rationalist 4chan seem small, but the potential losses are large.
Because you’ve publicly expressed assent with extreme bluntness
Who said anything about “extreme”?
You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic’s comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the “Berkeley rationalist community”. (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author’s position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.
I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment’s proper place was in “comment score below threshold” minimization-land. But that’s about as far as I think the censorship needs to go.
Not, by the way, that I think it would be catastrophic if the comment were edited—in retrospect, I probably overstated the strength of my preference above—but my preference is, indeed, that it be left for readers to judge the author.
Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update—this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever “ammunition” my comment gave you.
I don’t wish to continue this argument, both because I have other priorities, and also because I don’t wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.
However, there is one further remark I must make:
Your comment makes you come across as someone who has led a very sheltered upper-class existence
You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)
Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models—perhaps even different ontologies—of the situation, informed by different sets of experiences and preoccupations.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
But people don’t choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, “from now on our goal is going to be X,” regardless what X is, unless it is already their goal. Thus a community that says, “our goal is truth,” does not automatically have the goal of truth, unless it is already their goal.
Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is “an internet forum concerned with truth-seeking,” nor is it helpful to talk about what LW is “supposed to be optimizing for.” It is doing what it is actually doing, not necessarily what people say it is doing.
The demand that people be sensitive about tone is made in relation to goals like not being rudely insulted, not in relation to truth. And even John Maxwell’s argument that “Truthseeking tends to arise in violence-free environments” is motivated reasoning; what matters for them is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
Is the implication that they’re not reasonable under the assumption that truth, too, trades off against other values?
What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.
Above, I made it sound like the overshooting of the target was severe; but I now think this was exaggerated. That quantitative aspect of my comment should probably be regarded as heated rhetoric in service of my point. It’s fairly true in my own case, however, which (you’ll hopefully understand) is particularly salient to me. Speaking up about my preoccupations is (I’ve concluded) something I haven’t done nearly enough of. Hence this very discussion.
But people don’t choose goals.
This is obviously false, as a general statement. People choose goals all the time. They don’t, perhaps, choose their ultimate goals, but I’m not saying that truth-seeking is necessarily anybody’s ultimate goal. It’s just a value that has been underserved by a social context that was ostensibly designed specifically to serve it.
Most people certainly care much more about not being attacked physically than discovering truth.
But not infinitely much. That’s why communicational norms differ among contexts; not all contexts are as tightly regulated as politics, diplomacy, and law. What I’m suggesting is that Less Wrong, an internet forum for discovering truth, can afford to occupy a place toward the looser end of the spectrum of communicational norms.
This, indeed, is possible because a lot of other optimization power has already gone into the prevention of violence; the background society does a lot of this work, and the fact that people are confronting each other remotely over the internet does a fair portion of the rest. And contrary to Maxwell’s implication, nobody is talking about removing any Chesterton Fences. Obviously, for example, actual threats of violence are intolerable. (That did not occur here—though again, I’m much less interested in defending the specific comment originally at issue than in discussing the general principles which, to my mind, this conversation implicates.)
The thing is: not all norms are Chesterton Fences! Most norms are flexible, with fuzzy boundaries that can be shifted in one direction or the other. This includes norms whose purpose is to prevent violence. (Not all norms of diplomacy are entirely unambiguous, let alone ordinary rules of “civil discourse”.) The characteristic of fences is that they’re bright lines, clear demarcations, without any ambiguity as to which side you’re on. And just as surely as they should only be removed with great caution, so too should careful consideration guide their erection in the first place. When possible, the work of norms should be done by ordinary norms, which allow themselves to be adjusted in service of goals.
There are other points to consider, as well, that I haven’t even gotten into. For example, it looks conceivable that, in the future, technology, and the way it interacts with society, will make privacy and secrecy less possible; and that social norms predicated upon their possibility will become less effective at their purposes (which may include everything up to the prevention of outright violence). In such a world, it may be important to develop the ability to build trust by disclosing more information, rather than less.
I agree with all of this. (Except “this is obviously false,” but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)
Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like.
Yeah, but exposure therapy doesn’t work like that. If people are too sensitive, you can’t just rub their faces in the thing they’re sensitive about and expect them to change. In fact, what you’d want in order to desensitize people is the exact opposite—really tight conversation norms that still let people push slightly outside their comfort zone.
Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.
What goes through my mind here is, “Trolls spend a lot of time and energy making comments like this one too, and don’t stay silent when they could, so I’m not at all convinced that those points are more consistent with a world where they’re truth-seeking than they are with a world in which they’re just trolling.”
I still think that’s basically true. So to me those points seem irrelevant.
I think what I mean is something more like, “Unless and until I see enough evidence to convince me otherwise….” I’ll go back and edit for that correction.
Strong support for this person’s willingness to contribute the opposite opinion.
Strong support for this person’s willingness to take the time to write things up in detail.
Strong appreciation for the trust implicit in this being posted here (i.e. it’s a compliment along the lines of “I expect not to be punished for speaking the truth as I see it.”)
Some regret/sadness that they’re this triggered and vitriolic, and for the tendency toward choosing the worst or straw-est interpretation at every point rather than taking the time to question their own responses and include nuance, but on the other hand, still appreciation for how this contributes to the overall health of the discussion by opening up new threads for debate and ensuring that there isn’t an echo chamber (i.e. maybe it takes that level of aggression to accomplish the thing, and a gentler critique wouldn’t be taken seriously enough?).
Significant disagreement with the choice to hijack the topic at hand to vent about things that are either mostly or completely unrelated, and make claims that are unsubstantiated or wildly inaccurate, and engage in some specious logic toward the end (e.g. ad hominem fallacy).
Hope to have some time later today to respond to the better points this raises.
The fact that you think it’s “ad hominem” is itself a betrayal of your own inexperience and lack of perception. It’s perhaps one of the most relevant and least fallacious arguments to make: your fiction is a direct expression of your aesthetics, and the inference I draw from your fiction is that you do not have good aesthetics, and therefore should not be trying, or even pretending, to do something that by nature requires very good aesthetic sense.
It also indicates a tremendous amount of immaturity and childishness. I could have written something better in high school. That’s not a good sign. Your ability to write characters and dialogue is directly tied to your ability to model the world accurately and understand the nuances of human behavior. Ergo, clichéd and trite writing is very damning.
Many words. Probably took a while to write. Some unnecessary things, like telling the writer to kill themselves and levelling criticism at attributes of his other writing. That other writing is pretty irrelevant to the qualities of this piece. You may have some points in this dung heap, but you make it hard to find them. Is it even worth engaging you in conversation?
Oh, I see. You’re what the Eternal September phenomenon is all about. You shouldn’t feel ashamed that you aren’t cognitively gifted enough to quickly and rapidly comprehend the salient points I made without substantial expenditure of mental effort, because you were born this way, which also accounts for your overestimation of the amount of time it took for me to write my comments. But please don’t pollute the comment space under my comments with your puerile excretions.
Perhaps your excessive cognition is ironically blinding you to the grandiose mediocrity of your overwrought replies, such as this one here, which sounds like something I would have written in third grade if I wasn’t already too smart to have written it then, which, as a truly capable mind might have already conceived, I was.
Your original comment, though harsh, at least contained some useful insights. Don’t ruin that by posting comments that are nothing more than 6 lines of insults that no one wants to read.
Most of the arguments you set forth are more fallacious and less relevant than not liking all the author’s fiction.
But that’s because most of the arguments you set forth were of the type “Bay Area rationalists have had a lot of problems and therefore this specific plan will have similar problems.”
Oh, I see. This is the part where you’re too attached to your ingroup to realize what a total failure the Berkeley rationalist community is. I bet you also think the Sequences and HPMOR are well-written.
[Note: I’ve typed this comment without refreshing the page, and thus have not seen any of the other responses that may have cropped up in the past few hours, nor taken those responses into account in any way yet. I’m seeing only the original reply, here.]
Part 1 of ?
Repeating my thanks before heading into what will be a mix of concession and disagreement—I have qualms about the way you engaged with this post, but am grateful for the fact that you did engage, at all, rather than just staying quiet, and I want to support the core of that even as I complain about certain aspects of your chosen method.
I think your first paragraph had one clear point: “I, as a smart, perceptive person who sees things others often fail to see, found a lot of this viscerally upsetting, which is probably a sign that there are actual problems.” I liked that you added this point, and I think it would’ve been stronger if you hadn’t been so deliberately assholish with the rest of it. I’m going to take the core point seriously as I read further, and see if I can get a clear sense of what it is you see that I don’t.
The comment about Ender’s Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up—there’s no wargaming in the plan, there’s no battle room, there are no other groups of people playacting as other armies. The aesthetic of Dragon Army was, in short: everyone is expected to keep their eyes open and act independently to do what seems right and sane in the moment. Groups should practice coordinating together to build trust and be capable of action-requiring-more-than-one-individual, but the assumption is that an army run by forty minds will trump an army run by one.
In paragraph 3, you make a valid point about the efficacy and usefulness of CFAR, which is indeed worth questioning, and the side you’re holding down is not obviously wrong. It’s a bit overwrought, given that the phrase “insistence on the validity of his experience as a CFAR instructor” is a clear strawman; I was almost as emphatic about the fact that I’ve written nerdy fanfic, so I think you were just looking for an opportunity to climb up on a soapbox? That being said, your point about interpersonal romance being a relevant and important factor matches my own intuition, and I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.
In paragraph four, you make an entirely unfounded leap that is beneath the quality of what’s expected from a poster on this forum. All of your “this suggests” are false handwaving, and I find the rest of your assertions generally laughable, given that there’s only one person in this thread so far who’s demonstrated deep antisocial behavior, and that you’re hurling these insults from a position of anonymity. However, I’m going to continue to take things one paragraph at a time rather than assuming that I’ve seen your entire position as soon as I’ve got a mockable straw model, so we’ll start fresh with your next point.
Hmmm. In the first sentence of paragraph 5, you and I seem to converge somewhat—we both agree that the Bay Area rationalist community is not living up to its promise, and has too few people doing good and impactful work. I’m glad to share this bit of world-model with you. I note that my idea for what to do about it—try a different sort of house/community—is just one possible strategy among many, and I’m curious if you have other concrete suggestions that you’d be willing to offer. I’m especially curious what you’re actually doing, as you seem to have a sort of … scathing dismissal? … of everyone else, and I’d expect from your tone that you must be engaged in at least one concretely high-promise project (else it all smacks of rank hypocrisy). Would you be willing to detail a) what you’re up to, or b) a few concrete proposals that you suspect are higher promise? At this point, it’d be hard to simply abandon the Dragon Army idea, but if a good enough alternative came along, I would take it. The point is not to be seen to be right, it’s to actually make an impact.
I notice that the rest of that paragraph is basically off-topic. Without contributing to the off-topicness, I want to say that I do, indeed, find at least a couple of worthwhile points of agreement within it, but I think most of it is wrong, in addition to being somewhat morally reprehensible re: vicious attacks, and that you’re overconfident in your assertions. If you’d like to shoot me a private message, I’d be happy to say where I agree and where I disagree.
Oh, interesting—paragraph six also begins with a claim I have a lot of sympathy for/agreement with. I don’t hold it as strongly as you do, but I do think there’s a lot of clear dysfunction and self-deception in the community, and I’d like to take steps to correct it. I don’t know how to evaluate your claim that the best people are on the periphery (as I’m a weird mix of professionally central and socially somewhat distant), but again—if you’d like to make concrete recommendations about who I should talk to, or direct some of the people you hold in high esteem to comment on this thread, I suspect you’re right about there being a lot of untapped value. I do note that Dragon Army is not actually pulling from the central or highest status people, but thus far looks to be made up of a lot of solid, normal, representative rationalists, so I think your claim about trying to delude people is straightforwardly false, as is your assumption that I don’t see or don’t want to see any warts and flaws. (I believe there are lots of people who will back me up on this, including some who will claim that I’ve been too hostile or critical. That’s partially why I sympathize with the strength of your negativity.)
Ah, paragraph seven contains the unword “cult,” which I think you’re using to say something, but I’d rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label. Like, I think if you laid out specific, concrete objections, I and others could benefit from them, but just saying cult is lazy name-calling.
I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn’t expect to be able to figure out the right stuff on the first try, would’ve clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?
As a meta note, I think that people who cower behind anonymity don’t deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I’m treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You’re currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.
Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist’s lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw. I’m curious whether, setting aside your mockery of a subpoint, you agree with that point.
Interestingly enough, I have reasonable credence in your two inferences. In my experience, members of this community do attempt to install norms to compensate for social failings (and do have a somewhat higher-than-average level of social ineptitude). And also, I think many people in this community are low-empathy and embody the bad side of individualism. However, unlike you, I see that a lot of people are trying damn hard to correct this, and I’m curious whether you think they should be written off for not being good enough already, or whether you have specific suggestions that differ from the ones already being tried. I note that a big part of what Dragon Army intends to do is just try a whole bunch of stuff (including stuff already known to work; there’s no premium on novelty), and that I think data will be better than armchair ranting.
I suspect you haven’t done much in the way of looking in the mirror when you type the words “repressed irritation, interpersonal drama, and general unpleasantness.” Certainly you don’t meet any of my standards for “how a decent person behaves.” I’m going to try to avoid the fundamental attribution error here, though, and assume that we’ve hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.
I’m not going to engage with the ad hominem attack at the end, which, in addition to being wrong as a tactic, also fails in specific. I think that if you compare yourself, who is suggesting suicide as a solution, with OSC, who is definitely wrong about a lot of things but has never gone so far as to claim a fellow human would be better off killing themselves, you’ll note that you might be on the wrong side. I’d check my cap for a skull, at least in the context of today’s mood.
For anyone else—I welcome calm, reasoned elaboration on any of the on-topic points this person made. When I went through blow-by-blow, there were fewer than I’d hoped, but there are true and valuable and important criticisms here, and I’m glad they’ve been added to the mix, and I wouldn’t mind further discussion of them.
I liked that you added this point, and I think it would’ve been stronger if you hadn’t been so deliberately assholish with the rest of it.
Sure, but it’s fun to be an asshole. I love knocking people down a peg. Especially in public.
The comment about Ender’s Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up
Asserting that this isn’t elaborate playacting is not very convincing in light of the fact that your first two proposed group norms are (1) a greeting salute and (2) a call-and-response mechanism. I played the beginning of Final Fantasy XIII two nights ago and thought that was the most cringeworthy stuff I’ve seen in months, but you managed to top even that.
I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.
The more important thing here is that you imagine this as a problem that can be solved, when in fact, if the problem did arise, that would itself preclude it from being easily solved. The “solution” is to not select immature people who you can reasonably expect to get into interpersonal drama, which precludes the vast majority of the rationalist community, which is part of the point of my comment.
if you’d like to make concrete recommendations about who I should talk to
I can suggest that you talk to Satvik Beri, and maybe direct him to my comment as well, although I feel slightly bad for potentially causing him to spend time on this.
Ah, paragraph seven contains the unword “cult,” which I think you’re using to say something, but I’d rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label.
I mean that the Berkeley rationalist community is a cult in the full and unqualified sense of the word “cult”. You, as a high priest, naturally disagree.
Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist’s lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw.
This is a good thing practically by construction.
My point is that this is almost completely unnecessary in a world where people begin by defaulting to behavior that is very unlikely to bother others. I am also gesturing at the following:
(1) The rationalist community does not default to such behavior, which is an indication of the conjunction of near-autistic social skills and remarkably low empathy, and
(2) The rationalist community does not default to such behavior, but instead of anyone pointing out that this is a reasonable thing to default to (c.f. Japanese society), people try to patch it up with legalism, bureaucracy, and a laundry list of rules, which in my experience makes it feel like I’m talking to the low-IQ HR department of a large multinational conglomerate.
The fact that the Berkeley rationalist community seems particularly bad at this is a major red flag in almost every conceivable fashion.
However, unlike you, I see that a lot of people are trying damn hard to correct this, and I’m curious whether you think they should be written off for not being good enough already
I think they should be thrown off a bridge, either metaphorically or literally. I find it detestable to have them near me at all.
I suspect you haven’t done much in the way of looking in the mirror when you type the words “repressed irritation, interpersonal drama, and general unpleasantness.” Certainly you don’t meet any of my standards for “how a decent person behaves.” I’m going to try to avoid the fundamental attribution error here, though, and assume that we’ve hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.
Two questions:
Does it look to you like my irritation is “repressed”?
I’m completely anonymous. Exactly what interpersonal drama am I causing here?
I agree that I can be, when I want to be, a very unpleasant person.
I don’t think you actually succeeded in knocking anyone down a peg, though. I’d bet ~$50 that a neutral, outside observer (say, from a different English speaking country) would say that a) you come off far worse than anyone else in the thread and b) they didn’t find your post convincing.
I think our disagreement over the distinction between playacting and not boils down to something like, I believe that the very small nuts-and-bolts of social interaction (jargon, in-jokes, simple trigger-action responses like sneeze “bless you”) are more important than most people give them credit for. In other words, I think the silly theater ends up actually mattering? Or, to be more specific—I think most of it doesn’t matter, but some small bits of it end up being really important, and so it’s an arena I want to do explicit experimentation with. I want to see whether the small salute actually ends up being relevant to bonding and sense-of-purpose, and no, I don’t have a double blind or anything like that, but I will be asking a bunch of fairly introspective people for their thoughts afterward.
I suspect, from your reaction, that you’d basically assert that this premise is false, and that the … skin? … of social interaction is meaningless, at least compared to the actual connections and information conveyed. This seems like a sensible, plausible position to take, but I think your mockery of the alternative hypothesis is unfounded.
I agree that if romance/sex/etc pop up, that would preclude the problem from being easily solved, but where did you get the impression that I was afraid of attempting to solve hard problems? There’s definitely a filter to screen out immature or uncontrolled people; while you yourself might make it through, the persona you’re currently expressing would’ve been rejected by the second paragraph of your original response. We’ve already turned away people for a variety of reasons, and at least one because of exactly this axis.
I appreciate the recommendation that I run things by Satvik. He’s a perceptive thinker and I haven’t run this by him yet. I wish that you’d responded in specific to more of my requests to draw out your suggestions—you’re continuing to clarify your models of the problems, but not offering much in the way of replacements for the things I’m planning to try.
You’re still not saying what you actually mean by the word “cult.” There’s a decent chance I’d agree with you—I’ve described the Bay Area rationalist community as a cult myself, even recently, when talking to friends and family members. But I was careful to disambiguate exactly what I meant by that, and I can’t help but note that your continued refusal to spell it out makes me suspect that you don’t actually have a coherent thing to say, and are just trying to score easy points.
I agree again with 1 (low empathy, etc.) though I think the strength of the effect is smaller than you seem to think it is. I think that you’re still not believing me when I say I agree with 2? Note that I’m calling you out for unacceptable rudeness in this thread, for instance. I also suspect you have a huge typical mind thing going on, and vastly underestimate how easy it is for people to rub each other wrong while acting in complete good faith in a normal society—the bed example was maybe poorly chosen, but I disagree with you that it’s easy to “default to behavior that is very unlikely to bother others.” I’ve been in a wide range of social milieu, and it’s much less about the actual behavior and much more about people’s cough willingness to pick nits and start fights.
I think that you’ve lost all moral authority by doubling down on your “people should die for this” claim, and because of that, I think this’ll be my last attempt to engage with you as an equal (you’re not my equal; at least this facet of your personality is my clear inferior). I will, however, continue to read if you make those concrete suggestions I’m hoping you have somewhere.
In answer to your last two questions: yes, it looks like your irritation is repressed. Not here, because my main hypothesis is that here is where you finally felt safe to vent a ton of irritation that you’ve been repressing in other arenas, for long amounts of time. Just look back at your first post—maybe a quarter of it was in response to me, and the rest is long-simmering, long-festering frustration about a bunch of other things (some of them valid and some of them not). Textbook repress-then-explode. And 2, your claim that posting anonymously equates to not causing interpersonal drama is again so laughable that unless it’s a deliberate joke, you’re revealing this persona to be less socially aware than literally the most awkward and inept rationalist I’ve ever met.
You’re not unpleasant so much as just … not showing yourself to be worth the time. I really hoped I could get more out of you, because I actually know, on a deep level, that I don’t have all the answers and the opposition is the first best place to look. But in terms of useful-criticism-per-word, you’ve been outdone by every other person who’s registered reservation or disagreement here.
I don’t know if I’m neutral (no, because I’ve had an account here for a while now), but I wouldn’t have the confidence to throw out a bet like that the way you do. The post in and of itself is not convincing enough for me to say that your idea won’t work, but it certainly makes me go “hmm, well, he might have a point there”.
Specifically:
“Normal” people don’t need to explicitly write out all the rules for their housing with regards to social rules.
But here there’s a large list of rules and activities and all that with the goal of getting group housing to work properly.
Also, here are some examples of the group of people that you want to source your participants from having low social skills.
By the way, if you set up a ton of rules then it usually won’t work.
Thus, there’s a pretty big chance that the rules will not work out and that the social skills of the participants will be too low to have the group housing work.
I am not convinced that this is the truth.
However, if I read in a year from now that this is what happened, I would not be surprised.
Basically what I’m saying is I can see 1 or 2 people leaving due to drama despite the rules if you try this, with a chance greater than, I dunno, 10%?
You’re looking at content, not status (as implied by ‘knocking someone down a peg’). My immediate reaction to the top-level comment was: “well, they have some good points, but damn are they embarrassing themselves with this language”. Possibly shaped by me being generally sceptical about the ideas in the OP.
As far as the bet is about the form of the post, rather than the content, I think Duncan’s pretty safe.
“Normal” people don’t need to explicitly write out all the rules for their housing with regards to social rules.
I have seen normies having endless fights about trivial things, such as “who should buy toilet paper”, that a simple explicit norm could solve. (For example “people keep buying the paper in turns, when you buy one check this box to keep everyone informed” or “Joe buys the paper, everyone else gives Joe $2 each month” or whatever.)
The best case, of course, would be trying to be nice by default, and solve explicitly the situations where the default behavior fails. But that seems like what would quite likely happen in the Dragon Army anyway… or maybe I am just applying the typical mind fallacy here.
I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn’t expect to be able to figure out the right stuff on the first try, would’ve clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?
I’m not the originator of this thread, but that part did resonate with me. I don’t think there’s anything wrong with those skills, but the combination of choice of skills and the desired level of competency does seem to be decidedly mediocre given the effort and people involved.
1) Above-average physical capacity
What is average? In the US, you could probably be somewhat overweight with no strength, speed, endurance, or agility to speak of and still be “above average.”
(2) Above-average introspection
I would expect almost all of the people who volunteer to be part of a rationalist group house to be there or pretty close to there already.
I think my previous comment applies here as well. Perhaps you have a different conception of “average” than I do, but I think if you’re going to establish a long-term mini-dictatorship of a group house, you should be aiming for quite a bit higher than “above average.”
(6) Above-average scientific lab skill/ability to theorize and rigorously investigate claims
I don’t really understand this one. Is your group house actually going to have the ability to practice conducting laboratory experiments? That’s a very high overhead endeavor.
(7) Average problem-solving/debugging skill (8) Average public speaking skill (9) Average leadership/coordination skill (10) Average teaching and tutoring skill
Average? Your goals are to reach average, after a year of dedicated effort? Getting into the 80th percentile of anything numbered 1-10 on this list should require a minimum of effort on the part of dedicated individuals following strict rules, unless you have some specific medical condition interfering.
(11) Fundamentals of first aid & survival
How fundamental is fundamental? This also shouldn’t take very long if you are willing to put in the effort and practice a bit (2 weeks, at the outside, though you could learn the true basics in a long weekend). I don’t know how it’s related to the rest of the goals, though, or why it’s important enough to be on the list. Also, you should practice many of these skills in the actual wilderness, which means time away from everything else.
(12) Fundamentals of financial management
Again, I’m not sure what’s “fundamental.” You could spend 2 days on this, or the entire year.
(13) At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill) (14) At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Do you have the ability to teach/practice trade skills at the house? I would expect learning any of these things, to an employable level, within a year, would require spending time similar to a full-time job somewhere that has infrastructure, in addition to a significant investment of money (at least a few thousand dollars). (I checked some local welding and plumbing classes at community colleges, which is where I’m getting those numbers.)
Someone who already has one of these skills (I’m guessing you’ll have a few coders at least) is going to be at a tremendous advantage in terms of time and possibly money compared to someone who is not. 13 and 14 are each going to represent a greater time investment than the others combined, unless you already have them.
As a meta note, I think that people who cower behind anonymity don’t deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I’m treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You’re currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.
I don’t know if you care, but I would say I already meet a similar number of these criteria. The only one I definitely don’t meet is 14. I’m willing to tie this account to my real name and explain/prove why I meet them (though some of them would be quite difficult to really prove, I could only argue).
The problem seems to me to be the tradeoff between going deep and going wide, with the added complexity that going deep on the wrong thing seems strictly worse than going wide, and so we’re defaulting to going wide where there’s uncertainty.
Put another way, it’s unlikely that any of those specific skills are going to be particularly important to any of our longest-term goals, but it also seems counterproductive to just sit there thinking about which direction to go in. I’m usually not the biggest expert in the room, but I usually am the most generally competent in terms of being able to fill holes or solve whatever problem crops up, and it’s because I have a habit of just constantly churning and picking up new skills and methods and heuristics wherever I go. I suspect that others would benefit from a similar habit, in particular because once “the right skill” does come along, you have both the affordance to start learning it and a variety of experiences allowing you to learn quickly and efficiently.
That’s a claim. Not necessarily supported, but reasonable, I think, and worth trying out.
I note that I disagree that it’s easy to beat the average in all of these things at once. People who don’t actually check their abilities against a standard tend to be wildly overconfident, and people tend to underestimate how long it will take them to learn X or accomplish Y; these things are solidly documented. And while competence does tend to cluster (e.g. “G”), so the picture’s not quite as bleak as the second half of this sentence implies, once you’ve got a dozen different domains and you’re shooting to be above the 50% mark in all of them, you’re looking at a person who’s approximating one in four thousand, and when you try to get a whole group to hit that mark, the challenge is pretty real. I wouldn’t be surprised if most people find most of this easy, but I think you’re not fully grokking the difficulty of making everybody baseline competent in all of these domains. For instance, you note that many of these skills require only a few weeks, but I don’t know if you added up all of those weeks, compared them to the time commitment, and noted that they’re all being practiced off-hours and people have their own jobs and lives as well.
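A minimal sketch of the arithmetic behind “one in four thousand,” assuming (purely for illustration) that the dozen domains are independent and the bar in each is exactly the 50th percentile:

```python
# Toy illustration only: treat clearing the 50th percentile in each of 12
# independent domains as 12 fair coin flips.
p_single = 0.5            # probability of being above the median in one domain
n_domains = 12            # "a dozen different domains"
p_all = p_single ** n_domains
print(p_all)              # ~0.000244, i.e. roughly 1 in 4096, "one in four thousand"
```

(The reply below disputes the independence assumption; the sketch just shows where the headline number comes from.)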
It’s a floor, though, not a ceiling—we’re aiming at “world class skill,” we’re just not naively expecting that getting there is going to be easy, and initial expectations are meant to be exceeded.
Various additional points …
The trade skill goal got scaled back in response to another comment; it was the hardest/sketchiest one to begin with.
We will have some ability to practice trade skills at the house, and are adopting a norm of going and seeking professional instruction outside from time to time.
I buy that you meet a large number of these criteria; I meet most of them myself. But the ones I don’t have are sticky/tricky.
And while competence does tend to cluster (e.g. “G”), so the picture’s not quite as bleak as the second half of this sentence implies, once you’ve got a dozen different domains and you’re shooting to be above the 50% mark in all of them, you’re looking at a person who’s approximating one in four thousand,
I don’t think these skills are anywhere near independent. It’s also not obvious that they’re normally distributed. And the fact that being above the 50% mark in a dozen skills by coincidence is unlikely does not tell you how hard it is to gain those skills if you put in some deliberate work.
I generally am sympathetic to the argument that stuff can be harder than one assumes, but I also am generally cynical about the “average” level of most of these skills. Most people probably don’t even know what “calibration” means precisely enough to test their own level of calibration. I’m not trying to be arrogant here; I’ve pretty much only heard about the idea of writing down your confidence level for a bunch of predictions and seeing what comes true from the rationalist community and rationalist-adjacent ones.
To avoid this issue, rather than using terms like “above-average,” I would attempt to pin down ahead of time requirements that are as specific as possible, so you can measure progress in each of the areas you care about.
For instance, you note that many of these skills require only a few weeks, but I don’t know if you added up all of those weeks, compared them to the time commitment, and noted that they’re all being practiced off-hours and people have their own jobs and lives as well.
I don’t think it should take a few weeks each to exceed average in most of these skills. I expect it to take a few weeks total (or 1 day a week for a few months).
I’m plausibly interested in betting a few hundred dollars against you, especially if (as seems likely, given your confidence) you were to bet $1000 against my $250 or something like that. If I imagine the hundred closest people I know uttering the above, I think all but one or two of them are wrong/overconfident.
What statement, specifically, would we be betting on? It’s certainly plausible that I’m underestimating the difficulty in getting an entire group to above these standards in comparison to getting one person. Though, I think the main issue may be a difference in what we perceive as average, rather than a model of how hard learning these skills is.
I spent five minutes trying to operationalize, but I couldn’t come up with anything that seemed workable. For now, we’ll just proceed knowing that at least one of us is wrong. =)
Either way is fine with me, but if you can express in any way what you think “average” is for some of these skills, I would like to know because now I’m really curious.
Thanks for taking so much time to keep responding to a fairly random commenter!
As a meta note, I think that people who cower behind anonymity don’t deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I’m treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall).
The number of criteria he hits likely depends on the definition of average. The reference class matters a great deal.
It would be much better if it were less inflammatory. The last sentence, in particular, is reprehensible. But you respond to the substance of the criticism you get, not the criticism you might want or wish to have at a later time. Otherwise you might as well be slashing your own tires. The vast majority of the discussion below is simple tone policing. Someone’s telling you that your house is on fire, and you’re complaining that they’re shouting.
It’s correct that it’s incredibly troubling that the author didn’t even consider romantic drama in designing his bootcamp. It’s correct that these are really not impressive outcomes. They’re moderately-functional outcomes. Shouldn’t there be some sort of control group where people attempt a similar level of life-changing upward momentum on their own and see if it was actually effective to cede their autonomy? It is correct that trying to LARP a bizarre combination of Ender’s Game and Fight Club is perhaps not a sign that this person has any idea how grown-ups work.
And most troubling of all, why weren’t these issues noted by anyone who Duncan ran this idea by first? Why does it take this level of willingness to break with social norms to notice the skulls? And no, intoning “I Have Noticed The Skulls” doesn’t mean you’ve actually addressed the problem unless you actually address it. Twelfth virtue!
In a broader sense, what the hell happened? I read the Sequences roughly when they came out, commented here occasionally, moved over to SSC and, more often, the associated subreddit. I donate effectively and regularly, I do my best to tax people’s bullshit with bets, and I do feats with spaced repetition. Apparently while I was doing that and not being directly involved in the community, it turned into… this. Scott Alexander is getting published in moderately prestigious outlets. AI risk is mainstream. Effective Altruism is considerably more mainstream than it was. But the community at the center of it has, if anything, regressed, from what I’ve seen here.
I am super, super in favor of this experiment, and would have enthusiastically participated fully in it something like 2 years ago, before moving to Terabithia. I think it’s tackling the biggest things missing from the community and am very excited to see what happens.
Well, given the trajectory of your own life, Qiaochu, I think that actually counts as an argument against “Dragon Army”, and really the rationalist community as a whole, being good for the participants. I notice that you’ve shifted from posting insightful, detailed blog posts to impersonally spamming links to rationalist ingroup bullshit on Facebook all the time—in some sense it’s like you’ve been trending in the direction of being less and less of a real person as time goes on. (Which, as a friend of mine pointed out, is actually generically very common, like how a smart and quirky high school student goes to Harvard, starts adopting more and more of a “professional” demeanor, becomes progressively less interesting, and eventually dies a mental death far in advance of their physical expiration...)
Oh, dear. This is terrible, and I wish you hadn’t posted it, because there’s literally no value to be had in delivering this sort of message in this sort of way. Disendorse; I claim this is evidence that most of your arguments about social capability should be somewhat discounted, since they’re coming from someone unskilled.
I honestly think this person has been engaged with enough, at least until they make the kind of concrete claims you’ve been asking for. I think it’s commendable to have responded with the good mix of “look at their plausibly good points while calling them out on their bad points”, but at some point it becomes uncommendable to engage with people who are clearly not arguing in good faith.
Our ability to concretely describe the effects of social groups on people in general is kind of limited, but things like “person X joined social group Y and now they concretely do behavior Z” are available. If you see people join a group and then become concretely worse (in your own assessment), I think it can be valuable to refer to specifics. I think it can be important and virtuous to convey what you think is a pernicious process, and unfortunately naming someone you personally know is a very effective, if cruel, way to do it. Anecdata, and especially anecdata based on the content of someone’s facebook feed, is not a great snapshot of a person at different times, but it’s still a source of information.
I’m not sure what you think a better sort of way to deliver this sort of message is, but to some extent any nicer way to do it would be less effective in conveying how bad you think the situation is.
That seems true and correct to me. I note that my response to this specific comment was … motivationally entangled? … with my responses to this person’s other comments, and that I was adopting a cross-comment strategy of “try to publicly defend certain norms while engaging with everything else that doesn’t violate those norms.”
I think it’s defensible to say that, in so doing, I lost … fine-grained resolution? … on the specific thing being said above, and could’ve teased out the value that you were able to identify above separate from my defense of a) norms and b) Qiaochu.
Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don’t actually care whether or not what they’re doing might bother others until they’re told to stop.
someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child
They were just doing their part against dysgenics and should be commended.
word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years
Sounds interesting, I’d like to hear more about this.
despite the efforts of a very valiant man, people have still not realized that autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren’t actually women
Being only on the periphery of the community, I’m extremely curious who said valiant man is (full disclosure: this is so I can avoid them and/or assess why the community has not yet shunned them, as I would hope they’d shun you).
Being only on the periphery of the community, I’m extremely curious why your instinctual reaction to a very politically incorrect idea is to shun the people supporting it, and why your model of the world bizarrely concludes that (1) people who live 20+ years as men and then decide, because of their autogynephilic fetish and repressed femininity, that they’re better off as women and therefore are women, and (2) people who have severe mental illnesses that cause them to become suicidal upon contemplation of their own bodies are somehow Actually the Opposite Sex in some timeless, eternal manner which becomes true as soon as they realize it’s true.
Being only on the periphery of the community, I’m extremely curious why you imagine people who are objectively a bunch of losers who can’t seem to accomplish anything of value would be the ones shunning me rather than the other way around. If I were a member of the cultlike “community”, sure, social ostracization would be possible. (Thankfully, I’m not.)
I’ve had some thoughts and feelings in this vein; skepticism of trans and so forth. I hold that skepticism with skepticism, though, and I do not reach the point of telling the several extremely smart, perceptive, capable, and empathetic trans humans I know that they’re e.g. dumb or wrong or sick or confused, when I have no inside view, and I think it’s somewhat abhorrent and damaging to the social fabric to start these conversations in any but the most careful and respectful way. That being said, I’d be curious to hear more of the thoughts on the other side of the zeitgeist. If you feel like naming this valiant man in private, I commit to not sharing their name any farther than they themselves say is okay.
If you feel like naming this valiant man in private, I commit to
Hi! 18239018038528017428 is almost certainly referring to me! (I would have predicted that you’d already have known this from Facebook, but apparently that prediction was wrong.)
somewhat abhorrent and damaging to the social fabric to start these conversations in any but the most careful and respectful way.
I tried that first. It turns out that it doesn’t work: any substantive, clearly-worded claims just get adversarially defined as insufficiently respectful. I still had something incredibly important to protect (there is a word for the beautiful feeling at the center of my life, and the word is not woman; I want the right to use my word, and I want the right to do psychology in public and get the right answer), so I started trying other things.
Zack, I think the problem (from my perspective) is that you tried being respectful in private, and by the time you started talking about this publicly, you were already being really harsh and difficult to talk to. I never got to interact with careful/respectful you on this topic.
(I understand this may have been emotionally necessary/unavoidable for you. But still, from my perspective there was a missing step in your escalation process. Though I should acknowledge that you spurred me to do some reading & writing I would not otherwise have done, and it’s not impossible that your harshness jolted me into feeling the need to do that.)
Thanks. I don’t think that would be good for me, at least right now, but thanks for the offer.
My thoughts on the matter are mostly in my ITT entry on Ozy’s blog and then also in the most recent thread on this topic on their blog. I guess I’d be somewhat curious about your responses to those thoughts.
any substantive, clearly-worded claims just get adversarially defined as insufficiently respectful
I agree. E.g. Scott Alexander has said he will ban people from his blog if they do not speak as if the trans theories were true, even if they believe them to be false. But that doesn’t mean it is a good option to be as rude as possible, like 18239018038528017428 above. (Obviously I am not saying that you have adopted this approach either.)
I do not reach the point of telling the...humans I know that they’re e.g. dumb or wrong or sick or confused
If you’ll allow me, I would like to raise a red-flag alert at this sentence. It seems poorly worded at best, and in worse scenarios indicative of some potentially-bad patterns of thought.
Presumably, as a member of a community of aspiring rationalists, not to mention the staff of CFAR, telling the people you know when (you think) they’re wrong or confused is, or should be...your daily bread. (It goes without saying that this extends to noticing your own confusion or wrongness, and encouraging others to notice it for you when you don’t; the norm, as I understand it, is a cooperative one).
Telling people when they might be sick is (if you’ll forgive me) hardly something to sneeze at, either. They might want to visit a doctor. Health is, for understandable reasons, generally considered important. (This includes mental health.)
As for dumb, well, I simply doubt that comes up often enough to make the statement meaningful. Whatever may be said about the rationalist community, it does not appear to draw its membership disproportionately from those of specifically low intelligence. Your acquaintances—whatever their other characteristics—probably aren’t “dumb”, so to tell them they are would simply be to assert a falsehood.
So: may I be so bold as to suggest either a reformulation of the thought you were trying to express, or even a reconsideration of the impulse behind it, in the event that the impulse in question wasn’t actually a good one?
This is a fair point. I absolutely do hold as my “daily bread” letting people know when my sense is that they’re wrong or confused, but it becomes trickier when you’re talking about very LARGE topics that represent a large portion of someone’s identity, and I proceed more carefully because of both a) politeness/kindness and b) a greater sense that the other person has probably thought things through.
I don’t have the spoons to reformulate the thought right now, but I think your call-out was correct, and if you take it on yourself to moderately steelman the thing I might have been saying, that’ll be closer to what I was struggling to express. The impulse behind making the statement in the first place was to try to highlight a valuable distinction between pumping against the zeitgeist/having idiosyncratic thoughts, and just being a total jerk. You can and should try to do the former, and you can and should try to avoid the latter. That was my main point.
Here’s what it looks like to me, after a bit of reflection: you’re in a state where you think a certain proposition P has a chance of being true, which it is considered a violation of social norms to assert (a situation that comes up more often than we would like).
In this sort of situation, I don’t think it’s necessarily correct to go around loudly asserting, or even mentioning, P. However, I do think it’s probably correct to avoid taking it upon oneself to enforce the (epistemically-deleterious) social norm upon those weird contrarians who, for whatever reason, do go around proclaiming P. At least leave that to the people who are confident that P is false. Otherwise, you are doing epistemic anti-work, by systematically un-correlating normative group beliefs from reality.
My sense was that you were sort of doing that above: you were seeking to reproach someone for being loudly contrarian in a direction that, from your perspective (according to what you say), may well be the right one. This is against your and your friends’ epistemic interests.
(A friendly reminder, finally, that talk of “being a total jerk” and similar is simply talk about social norms and their enforcement.)
I was not aiming to do “that above.” To the extent that I was/came across that way, I disendorse, and appreciate you providing me the chance to clarify. Your models here sound correct to me in general.
Your comment was perfectly fine, and you don’t need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there’s a strong chance I’ll be without internet for several days and likely won’t be able to further engage with this topic.
Duncan’s original wording here was fine. The phrase “telling the humans I know that they’re dumb or wrong or sick or confused” is meant in the sense of “socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect”.
To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that’s a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip in propositions about values into their claims.
I’m frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a large number of more subtle negative implications about what they’ve written are all ways of socially discouraging someone from doing something. I think that Duncan’s comment was fine, I certainly think that he didn’t need to apologize for it, and I’m fucking appalled that this conversation as a whole has managed to simultaneously promote slipping value propositions into factual claims, and promote indirectly encouraging social rudeness, and then successfully assert in social reality that a certain type of overtly abrasive value-loaded proposition making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition making, all without anyone actually saying something about this.
“socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect”
Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term. Moreover, the cost is not the same for everyone: for some people “diplomatic” communication comes much more naturally than for others; as I indicate in another comment, this often has to do with their status: the higher it is, the less necessary directness is, because the more people are already preoccupied with mentally modeling them.
I’m frustrated by your comment, komponisto
If we’re engaging in disclosures of this sort, I have felt similarly about many a comment of yours, not least the one to which I am replying. In your second paragraph, for example, you engage in passive aggression by deceptively failing to acknowledge that the people you are criticizing would accuse you of the exact same sin you accuse them of (namely, equating “trans people disproportionately have certain traits” and “boo trans people”). That’s not a debate I consider myself to be involved in, but I do, increasingly, feel myself to be involved in a meta-dispute about the relative importance of communicative clarity and so-called “niceness”, and in that dispute, come down firmly on the side of communicative clarity—at least as it pertains to this sort of social context.
I read your comment as a tribal cheer for the other, “niceness”, side, disingenuously phrased as if I were expected to agree with your underlying assumptions, despite the fact that my comments have strongly implied (and now explicitly state) that I don’t.
Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term.
As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than one where they aren’t. Look at what happened to LW.
Moreover, the cost is not the same for everyone
It’s fairly common for this cost to go down with practice. Moreover, it seems like there’s an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.
I’m not necessarily claiming that you or any specific person is acting this way; I’m just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it.
communicative clarity and so-called “niceness”
That’s a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (when suggesting that some EA organizations are less effective than previously thought), and sometimes that involves, say, penalizing people for acting on claims they’ve made to others’ emotional resources (reprimanding someone for being rude when that rudeness could have reasonably been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, and we tend to get them both mostly wrong.
communities where conversations are abrasive attract a lower caliber of person than one where they aren’t. Look at what happened to LW.
To whatever extent this is accurate and not just a correlation-causation conversion, this very dynamic is the kind of thing that LW exists (existed) to correct. To yield to it is essentially to give up the entire game.
What it looks like to me is that LW and its associated “institutions” and subcultures are in the process of dissolving and being absorbed into various parts of general society. You are basically endorsing this process, specifically the aspect wherein unique subcultural norms are being overwritten by general societal norms.
The way this comes about is that the high-status members of the subculture eventually become tempted by the prospect of high status in general society, and so in effect “sell out”. Unless previously-lower-status members “step up” to take their place (by becoming as interesting as the original leaders were), the subculture dies, either collapsing due to a power vacuum, or simply by being memetically eaten by the general culture as members continue to follow the old leaders into (what looks like) the promised land.
Moreover, it seems like there’s an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.
I agree that the incentives you describe exist, but the analysis cuts both ways: the more someone claims to have been harmed by allegedly-nasty speech, the more the balance of discussion will reward them by letting them restrict speech while reaping the rewards of getting to achieve their political and interpersonal goals with those speech restrictions.
Interpersonal utility aggregation might not be the right way to think of these kinds of situations. If Alice says a thing even though Bob has told her that the thing is nasty and that Alice is causing immense harm by saying it, Alice’s true rejection of Bob’s complaint probably isn’t, “Yes, I’m inflicting c units of objective emotional harm on others, but modifying my speech at all would entail c+1 units of objective emotional harm to me, therefore the global utilitarian calculus favors my speech.” It’s probably: “I’m not a utilitarian and I reject your standard of decency.”
If you don’t have any specific tools, I would advocate a mix of asking questions to help the other person clarify their thinking and providing information.
“Did you know symptoms X and Y are signs of clinical mental illness Z?” is likely more effective than telling the person “You have mental illness Z.”
If the other person doesn’t feel judged but can explore the issue in a safe space where they are comfortable of working through an ugh-field, it’s more likely that they will end up doing what’s right afterwards.
I don’t think “Did you know symptoms X and Y are signs of clinical mental illness Z?” is appreciably different from “You very possibly have mental illness Z”, which is the practical way that “You have mental illness Z” would actually be phrased in most contexts where this would be likely to come up.
Nevertheless, your first and third paragraphs seem right.
In a conversation, you get a different reaction if you ask a question that indirectly implies that the other person has a mental illness than if you state it directly.
The phrasing of information matters.
I have not disputed “autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren’t actually women”, though neither have I affirmed it.
Regardless, I still would not want you, personally, in any community I’m part of, because your behavior is bad. I’m not interested in debating this; obviously we disagree on what acceptable behavior looks like. Whatever; different strokes for different folks—clearly this community is not for you, but also you seem to still be here, for some reason.
And I would still want to know who’s going around trying to convince people of that statement, so that I could avoid them (for their proselytizing, not for their beliefs) and/or assess why the community has not yet shunned them. (Obviously you can shun the community while it simultaneously shuns you. These are not mutually exclusive.)
So, again, I still want to know who you’re talking about. Who are you talking about?
Hi! 18239018038528017428 is almost certainly talking about me! My detailed views are probably more nuanced and less objectionable than you might infer from the discussion in this thread? But to help you assess for yourself why “the community” (whatever that is) has not yet shunned me, maybe start with this comment (which also contains links to my new gender blog).
Ah, thanks. Turns out I do know who you are and have already thought about the question of why (and to what extent) the community continues to interact with you to my satisfaction. (And yes, the throwaway’s description of you is somewhat misleading, though mostly that’s because, from their behavior, I would expect anyone they praise to be terrible without redeeming features).
have already thought about the question of why (and to what extent) the community continues to interact with you to my satisfaction.
For obvious reasons, I’m extremely curious to hear your analysis if you’re willing to share. (Feel free to PM me.)
from their behavior, I would expect anyone they praise to be terrible without redeeming features
I don’t think that’s a good inference! (See the anti-halo effect and “Are Your Enemies Innately Evil?”) Even if you think the throwaway’s rudeness and hostility makes them terrible, does it really make sense for guilt-by-association to propagate to anyone the throwaway approves of for any reason?
(from the great-grandparent)
This is about behavior, not belief. [...] (for their proselytizing, not for their beliefs)
I think it would be less cruel and more honest to just advocate for punishing people who believe a claim, rather than to advocate for punishing people who argue for the claim while simultaneously insisting that this isn’t a punishment for the belief. What would be the point of restricting speech if the goal isn’t to restrict thought?
For obvious reasons, I’m extremely curious to hear your analysis if you’re willing to share. (Feel free to PM me.)
Probably this is going to be too blunt, but it’s honest, and I’m assuming you’d prefer that:
Basically, because you are psychotic, not an asshole (or at least, afaict, only an asshole as a consequence). And dealing with people who are behaving poorly because of mental issues is a hard problem, especially in a community where so many people have mental issues of one sort or another.
Again, this doesn’t mean I disagree with you (and again neither have I claimed to agree). The fact of your psychosis is not obviously prior to your beliefs. But it is very obviously prior to how you have acted on those beliefs. Or at least it is obvious to me, having spent a great deal of time with friends who behave like you’ve behaved (in public, at any rate; of course you should discount this evidence given that I haven’t interacted with you in person, or at least not much).
Even if you think the throwaway’s rudeness and hostility makes them terrible, does it really make sense for guilt-by-association to propagate to anyone the throwaway approves of for any reason?
It’s evidence, yes.
I think it would be less cruel and more honest to just advocate for punishing people who believe a claim, rather than to advocate for punishing people who argue for the claim while simultaneously insisting that this isn’t a punishment for the belief. What would be the point of restricting speech if the goal isn’t to restrict thought?
… This is a much larger conversation for another time. If you have not already internalized “just because I believe something is true does not make it socially acceptable for me to go around trying to convince everyone else that it’s true”, I don’t know that I will be able to briefly explain to you why that is the case.
but it’s honest, and I’m assuming you’d prefer that
Yes, thank you!
Basically, because you are psychotic
I definitely went through some psychosis states back in February and April, but I seem to be pretty stably back to my old self now. (For whatever that might be worth!) I have a lot of regrets about this period, but I don’t regret most of my public comments.
If you have not already internalized “just because I believe something is true does not make it socially acceptable for me to go around trying to convince everyone else that it’s true”, I don’t know that I will be able to briefly explain to you why that is the case.
Oh, I think I understand why; I’m not that socially retarded. Even so—if there’s going to be one goddamned place in the entire goddamned world where people put relatively more emphasis on “arguing for true propositions about human psychology because they’re true” and relatively less emphasis on social acceptability, shouldn’t it be us? I could believe that there are such things as information hazards—I wouldn’t publicize instructions on how to cheaply build a suitcase nuke—but this isn’t one of them.
if there’s going to be one goddamned place in the entire goddamned world where people put relatively more emphasis on “arguing for true propositions about human psychology because they’re true” and relatively less emphasis on social acceptability, shouldn’t it be us?
Sure. And we do put relatively more emphasis. But we have not completely and totally thrown away all social convention. Nor should we: much of it exists for good reason.
That seems so obviously true that the idea of shunning someone for fighting against people arguing the opposite seems crazy to me. I thought we just used “she” to be polite, not that we believed them to be women in any meaningful sense.
I cannot imagine participating in this community for any length of time and sincerely concluding that the mental state you’ve described is actually universal.
Hi! I believe I’m the only person to try shunning them, which happened on Facebook a month ago (since Zack named himself in the comments, see here, and here). The effort more or less blew up in my face: it got a few people to publicly say they were going to exclude me, or try to get others to exclude me, from future community events, and was also a large (but not the only) factor in getting me to step down from a leadership position in a project I’m spending about half of my time on. To be fair, there are a couple of places where Zack is less welcome now also (I don’t think either of us have been successfully excluded from anything other than privately hosted events we weren’t likely to go to anyways), and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance. So, I guess we’re in a stalemate-like de facto ceasefire, though I’d be happy to pick up the issue again.
I still stand by my response to Zack. It would have been better if I’d been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself; that’s an area where I’m still trying to grow. I think that collaborative truthseeking is aided rather than hindered by shunning people who call others “delusional perverts” because of their gender. This is, at least in part, because keeping discussions focused on truthseeking, impact, etc. is easier when there are social incentives (i.e. small social nudges that can later escalate to shunning) in place that disincentivize people from acting in ways that predictably push others into a state where they’re hurt enough that they’re unable to collaborate with you, such as by calling them delusional perverts. I know that the process of applying said social incentives (i.e. shunning) doesn’t look like truthseeking, but it’s instrumental to truthseeking (when done with specificity and sensitivity/by people with a well-calibrated set of certain common social skills).
a large (but not the only) factor in getting me to step down from a leadership position in a project I’m spending about half of my time on. [...] and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance.
I wasn’t aware of this, but it seems unfortunate. If successfully ostracizing me isn’t going to happen anyway, “both of you step down from something that you previously wanted to do” seems like a worse outcome than “neither of you step down.”
(For my own part, while I wouldn’t invite you to any parties I host at my house, I have no interest in trying to get other people to exclude you from their events. I consider my goal in this whole affair as simply to make it clear that I don’t intend to let social pressure influence my writing—a goal at which I think I’ve succeeded.)
shunning people who call others “delusional perverts” because of their gender
I hadn’t bothered addressing this earlier, because I wanted to emphasize that my true rejection was “I don’t negotiate with emotional blackmailers; I’m happy to listen and update on substantive criticism of my writing, but appeal to consequences is not a substantive criticism”, but since it is relevant, I really think you’ve misunderstood the point of that post: try reading the second and third paragraphs again.
What I’m trying to do there is highlight my disapproval of the phenomenon where the perceived emotional valence of language overshadows its literal content. I understand very well that the phrase “delusional pervert” constitutes fighting words in a way that “paraphilic with mistaken views” doesn’t, but I’m interested in developing the skill of being able to simultaneously contemplate framings with different ideological/emotional charges, especially including framings that make me and my friends look bad (precisely because those are the ones it’s most emotionally tempting to overlook). People who aren’t interested in this skill probably shouldn’t read my blog, as the trigger warning page explains.
(Seriously, why isn’t the trigger warning page good enough for you? It’s one thing to say my writing to should have a label to protect the sensitive, but it’s another thing to say that you don’t want my thoughts to exist!)
It would have been better if I’d been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself
Not all goals are achievable by sufficiently-skilled gentle social manipulation. If you can show me an argument that can persuade me to change my behavior given _my_ values, then I’ll do so. If no such argument exists, then your skill and gentleness don’t matter. (At least, I hope I’m not that hackable!)
I appreciate your offer to talk things out together! To the extent that I’m feeling bad and would feel better after talking things out, I’m inclined to say that my current feelings are serving a purpose, i.e. to encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed, though that wouldn’t have been at all true of the old version of myself. This algorithm is a bit new to me, and I’m not sure if it’ll stick.
Overall, I’m not aware that I’ve caused the balance of the discussion (i.e. pro immediate abrasive truthseeking vs. pro incentives that encourage later collaborative truthseeking & prosociality) to shift noticeably in either way, though I might have made it sound like I made less progress than I did, since I was sort of ranting/acting like I was looking for support above.
encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed
Is this really a winning move for you? I’m not budging. It doesn’t look like you have a coalition that can deny me anything I care about. From my perspective, any activity spreading the message “Zack M. Davis should be shunned because of his writing at http://unremediatedgender.space/” is just free marketing.
This seems similar to Leverage in a lot of ways. It seems like it would be really instructive to contrast your plan with Leverage’s plan—as initially intended, and as executed—to see what you plan to invest in that they aren’t, what you’re not doing that they are, and costs and benefits of those differences.
Other contrasting case studies might also add clarity:
Esalen
kibbutzim
the old Singularity Institute house
residential colleges
fraternities
Buddhist monasteries
Christian monasteries
actual armies
actual paramilitary organizations / militias
Sea Org
It probably makes sense to 64/4 these with rough sketches from memory/stereotypes/Wikipedia-ing before bothering to do any time-intensive research.
Yep. I don’t have strong ties to Leverage, but I’m talking with a couple of the people and have friends involved who have better models than me. +1 to this point.
Esalen is worth noting because it’s a place that’s extremely intellectually productive. There are many different paradigms of bodywork that come out of Esalen.
Esalen is central for the history of Feldenkrais, Rolfing and a bunch of other paradigms.
If you could build a community that succeeds to do for rationality what Esalen did for bodywork that would be a huge success.
In his Cargo Cult speech, Feynman describes the place by saying:
Most people believe so many wonderful things that I decided to investigate why they did. And what has been referred to as my curiosity for investigation has landed me in a difficulty where I found so much junk to talk about that I can’t do it in this talk. I’m overwhelmed. First I started out by investigating various ideas of mysticism, and mystic experiences. I went into isolation tanks (they’re dark and quiet and you float in Epsom salts) and got many hours of hallucinations, so I know something about that. Then I went to Esalen, which is a hotbed of this kind of thought (it’s a wonderful place; you should go visit there). Then I became overwhelmed. I didn’t realize how much there was.
I was sitting, for example, in a hot bath and there’s another guy and a girl in the bath. He says to the girl, “I’m learning massage and I wonder if I could practice on you?” She says OK, so she gets up on a table and he starts off on her foot—working on her big toe and pushing it around. Then he turns to what is apparently his instructor, and says, “I feel a kind of dent. Is that the pituitary?” And she says, “No, that’s not the way it feels.” I say, “You’re a hell of a long way from the pituitary, man.” And they both looked at me—I had blown my cover, you see—and she said, “It’s reflexology.” So I closed my eyes and appeared to be meditating.
The Pareto Principle says that you can 80:20 many things, i.e. get 80% of the value from 20% of the work. If you 80:20 the 20%, you end up with 64% of the value for 4% of the work.
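Spelling out that arithmetic (nothing here beyond the two multiplications already implied above):

```python
# The 80:20-of-the-20% arithmetic behind "64% of the value for 4% of the work".
value_share = 0.8 * 0.8   # 80% of the 80% of the value -> 0.64
work_share = 0.2 * 0.2    # 20% of the 20% of the work  -> 0.04
print(value_share, work_share)   # 0.64 0.04
```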
For the next three months, I will embark on my own experiment of living in a high-standards high-group-activity environment. Specifically, a Buddhist temple.
The temple has an even tighter schedule. All residents wake up together at 5 am and go to sleep together at 10 pm. The rest is meditation, study and work, with 4 hours of free time. The weekends are free, so it adds up to being told what to do for 85 hours per week.
Over the years, I have stayed there six times for a week. The first days are usually a fight to adjust to the lower standards of living (the unpleasant valley). As the days go by, I become increasingly energized and sharp. When I leave, I’m in the best state I can be. Not even a CFAR workshop measures up to how much I upgrade in such a short time.
And it’s not the meditation. I’ve gone for days without really meditating and I would still upgrade.
This has led me to believe that something about our individualist style of living is profoundly wrong, at least for some people. It seems like a solution to many of our problems lies in collectivism. Think mental health, akrasia, Hufflepuff virtue, etc.
I am really interested in how this is going to fly. Please do post updates. I would also love to share my perspective. I think I’ll have some interesting data.
If you’re willing, sharing your perspective in more detail here is welcome (so that all the models are in one place). Else, you’re welcome to PM or email me.
In the spirit of Murphyjitsu, the most obvious failure mode that you didn’t mention is that I expect you to burn out dramatically after a few weeks, from exhaustion or the psychological strain of trying to optimize the experiences of N people. The bootcamp phase is not analogous to anything I’ve heard of you doing sustainably for an extended period of time.
So, do you expect Dragon Army Barracks to work if Eli has to take over for you in Week Four?
Hmm, interesting. My self-model is somewhat incapable of burning out during this, due to an ability to run forever on spite (that’s only somewhat tongue-in-cheek).
It’s a solid point, though. If I condition on burnout, I think that Eli manages or not based on the level of specificity and concreteness that we managed to get in place in the first few weeks. Like, I don’t think Eli is competent (yet) to create the thing, but I do think he’s competent to oversee its maintenance and preservation. So that seems to put a somewhat higher priority on early systemization and scaffold-building than might have otherwise been in my plan.
Good question.
Edit: also, probably the closest analogue to this in my past is being the sole functioning RA on a dorm hall of ~30 high schoolers in a high-stress school environment. That was probably within the same order of magnitude of juggling, once you account for the fact that my increase in skill since then is balanced by the increase in complexity/responsibility. I did a lot to try to manage the experience of those thirty people.
FWIW, my model of Duncan agrees with his model of himself here. I don’t expect him to burn out doing this.
…and even if he does, I expect that the combo of Eli plus the sort of people I imagine being part of Dragon Army would pull it through. Not guaranteed, but with a strong enough chance that I’m basically not worried about a failure mode along the lines of “Flops due to Duncan burnout and subsequent systems failures.”
I would like to say that I share your strong preference for being second in command over first, and to add a datapoint: I find being first in command to be really stressful in a way that doesn’t hit me or mess with my decision making until after I relinquish the role, at which point it hits hard. I’m curious if that happens or has happened to you. (Examples: being the first responder in a medical emergency and keeping everything going right up until the victim had arrived at the E.R., and then throwing up and shaking for the rest of the night; leading a major college class project for a semester that went really well, and then essentially shutting down and hiding in my room for a week.)
If I were trying to do what you seem to be trying to do, I would be setting myself up for a major crash once I’d brought the experiment to a close or handed off the baton. Obviously our minds are different in many ways, but I figured it was worth checking to see if you had that issue and found a solution that might be stealable.
Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.
Not a full solution, but gesturing in a direction that you might find useful: build the system in such a way that gaming it is encouraged and useful, and that the punishments are somehow self-balancing.
E.g. if the punishment is “do some chores”, somebody who figures out that doing the chores is easier than their other obligations is at least clearing the list of all the chores that need to be done. If they run out of chores to do, new tasks can be added to the list, and they can choose whether doing them is still worth it.
I’m here kinda reminded of the evolution of pen’n’paper RPGs, which originally had disadvantages you could buy during character creation that made you more powerful in exchange; of course people would munchkin by “forgetting” the disadvantages during play. Newer games got past that by making disadvantages give you zero points during character creation (or even cost points!), and instead had them award benefits if you roleplayed them during actual play. In general, games have gotten better the more they have built “trying to munchkin the rules automatically leads you to play the game more like it was designed to be played” in as a fundamental game design principle.
Not sure how to do the “self-balancing costs” thing, but I am reminded of the bidding systems some houses have for chores, where you offer money for doing some task; if someone else finds the offered amount of money more valuable than the pain of doing the chore, they do it; otherwise you do it yourself.
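A minimal sketch of that bidding mechanic, with every name and dollar figure made up for illustration (one possible reading of the norm, not a description of any particular house’s system):

```python
# Hypothetical sketch of a chore-bidding norm: whoever wants a chore done
# posts a bounty; any housemate who values the money more than they mind
# the chore can take it; otherwise the poster does the chore themselves.

def resolve_chore(poster, bounty, disutilities):
    """disutilities: how much each housemate 'minds' this chore, in dollars (made-up numbers)."""
    takers = [(cost, name) for name, cost in disutilities.items()
              if name != poster and cost < bounty]
    if takers:
        cost, name = min(takers)   # whoever minds it least claims the bounty
        return f"{name} does the chore and collects ${bounty} from {poster}"
    return f"nobody bites; {poster} does the chore themselves"

# toy example
print(resolve_chore("Alice", bounty=5,
                    disutilities={"Alice": 12, "Bob": 3, "Carol": 8}))
```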
My strongest recommendation is to beware of internal power struggles. Even if you are fully understood to be in charge, if everyone under you is in a state of emotional mutiny, you WILL become compromised, and you WILL make mistakes, and those mistakes WILL be used to justify further emotional mutiny. This will spiral until you lose everything.
Moreso, some percentage of your trusted minions WILL undergo emotional mutiny. They will discover that they’d rather be somewhere else, doing something else. They’ll discover that there are people other than you they’d like in charge of their lives. They will discover that they don’t trust you as much as they thought they did. Even if you pick the best people—hell, ESPECIALLY if you pick the best people, because the best people will have other people vying for their attention, seeking to undermine you from without.
Chiming in because the problem of helping people level up is close to my heart.
Putting the social dynamics of the experiment aside (since there are plenty of people discussing that aspect), I’d like to offer some good-natured skepticism about the overall approach. (Good-natured meaning, I hope you actually do pursue this because I’m genuinely curious about how this will play out—assuming the safety concerns others have raised are handled well, of course).
My skepticism is: this is too meta and too complicated to lead to actual progress.
I spent a few years at company that tried to inculcate a deliberate process for getting to the right answer, including a culture of radical honesty and formal procedures for making decisions and learning from mistakes. This was a major priority at the company for a long period of time (last I checked, it’s still going on), with backing from the entire senior management team, and was enforced by firing people who couldn’t or wouldn’t skillfully participate. I.e., they took it really seriously and put a lot of effort into it. The people who conceived and implemented it were in my opinion extremely smart and competent.
That said, in my opinion the effort spent on this program did more harm than good to the functioning of the company. The values and culture became ends in themselves, as opposed to a means of helping achieve goals, and endless amounts of time and energy were spent debating, elucidating, learning, and critiquing the system. Competent professionals ended up becoming ineffectual because they gave up (or were forced out of) their unreflective expertise and got stuck in endless cycles of second-guessing. Some of that self-reflection may have given rise to new levels of skill (in my case, I did in fact feel like I benefited from my time there, although I think that was largely because it was my first job out of college, so I didn’t have that much to un-learn), but generally people felt disempowered by the initiative rather than improved.
In contrast, for the last few years, I’ve been running a tiny company where we have very little meta discussion and mostly just do object-level work. I feel 1000x more productive now than I did at my prior job.
My takeaway from this is that the optimal ratio of meta-level tuning to object-level practice is [small number] : [large number]. Meta-level thinking is extremely valuable and important, but I view it as the rudder on a boat: you need to be constantly making adjustments to keep pointing in the right direction, but 99% of the power goes into the main engine driving the boat forward.
If I had to generate a hypothesis as to why the concrete achievements of the rationalist community are less than might be desired, it would be that the community spends way too much of its energy on meta topics instead of on object-level progress. This is understandable, since a) meta-level discussion of rationality is what created the community in the first place, and b) object-level discussion can often be very boring compared to meta-level discussion. (I miss the intellectual stimulation of my previous job, even as I see it as basically a waste of time in terms of actually building a successful company.) While understandable, I think it leads to predictable outcomes: a lot of talk happens but not much gets accomplished.
Looking at the proposed charter, I suspect there will be a very high amount of meta-level discussion, probably significantly more so than at my prior job that I thought was way too meta. That’s because a) it’s built in to the daily schedule, b) it’s built into the mission, which is expected to evolve over time with the participants, and c) it’s built into the community that the participants will be drawn from.
In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me. At the company I worked for, the set of norms and practices was set in stone by executive fiat, recruits were presented with them before accepting jobs, and adherence to them was a major part of performance evaluation; there was still a very high employee churn rate and general agreement that the norms and practices as specified weren’t consistently well-practiced throughout the company. The Dragon charter is for a smaller group of people, which makes things easier, but the norms and practices are expected to be a moving target, which makes things harder.
In my personal experiments with self-improvement, I’ve had the most success with extremely simple plans. My most successful self-intervention to date has been to download a simple habit tracker on my phone, and add a new daily habit, moving on to the next only after successful completion of the prior one for 30 days. When I first started trying to learn new habits, I would add a bunch of new habits at once, and I would always fail. It took me a very long time to get patient enough to only try to change one thing at a time (which requires accepting that I’m going to have habits I don’t like in the interim that I don’t try to do anything about).
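To make the scheme concrete, here is a minimal sketch of that one-habit-at-a-time rule (the habit names and the 30-day threshold are just illustrative, not a description of any particular app):

```python
# Sketch of the "one habit at a time" rule described above: a new habit only
# unlocks after the current one has been completed 30 days in a row.
# Habit names are made up for illustration.
from datetime import date, timedelta

HABITS = ["floss", "ten pushups", "read twenty minutes"]  # order of adoption
STREAK_REQUIRED = 30

def current_habit(completions: dict, today: date) -> str:
    """completions maps habit name -> set of dates it was completed.
    Returns the habit currently being worked on."""
    for habit in HABITS:
        done = completions.get(habit, set())
        streak, day = 0, today
        while day in done:              # count the streak ending today
            streak += 1
            day -= timedelta(days=1)
        if streak < STREAK_REQUIRED:
            return habit                # still working on this one
    return "all habits locked in"
```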
Similarly, I’ve been successful growing my current company by having an extremely boring strategy of: ship code, talk to customers, ship code, talk to customers.
Simplicity does not come naturally to me; I like my ideas and strategies to be convoluted, complicated, ambitious, and interesting—I get very bored with simple, straightforward approaches. So I’m a big believer in simplicity, because I’ve learned the hard way, against all my natural inclinations, that—unlike my natural inclinations—it actually works.
So if I were trying to design a charter, I would pick one or two things that I think would be most likely to have a game-changing impact, and just focus on those things until they worked (or didn’t). In contrast, the charter as it exists now feels to me like it has way too many moving pieces. That’s just my intuition, of course, but I hope I’ve given a feel for where that intuition comes from.
Anyway, I admire the ambition in doing a project like this, so I hope my criticism is constructive and useful.
Thanks for the long and detailed response. I enjoyed reading it.
It’s interesting that you highlight meta as being a dangerous failure mode—I actually strongly agree, which is why the aesthetic is tuned toward stuff like “just exercise” and “housemates should produce visible work.” My sense is that a strategy of “just do stuff” outstrips, in practice, a strategy of “think really hard until you find the ideal move,” especially when you take into account how many iterations you can get in if you’re churning hard.
Hilariously, though, I’m further inside the rationalist bubble than I thought, because I accept your overall summation even though the intent was to be THE OBJECT LEVEL HOUSE (or at least, the house that does stuff even if it goes meta on norms). I still think we’re set up to be relatively ahead, but as you point out, that’s not necessarily a sufficient bar.
However, I’m much more concerned with:
In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me.
That rings very true to me, and has been an active concern of mine for the past couple of weeks. It seems like there are something like a hundred activities/experiments/norms/projects that are worthy of including in this, and something like 1.3 slots per week (and thus not even room for half), and I’m not at all certain how to best pick and choose and prioritize and optimize for success. In part, I’m hoping that if we just throw ourselves in and iterate (see above) we’ll do better than if we agonize, but yeah, there are a lot of moving parts, and I wouldn’t be surprised if we ended up trying to drastically simplify in like our fifth week house meeting.
If I had to really zero in on basics, I think they are:
Never give up on an experiment until its predetermined end date
Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)
… those, I think, are the iron core of the project.
Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)
I’m curious why this is so important to you, unless it’s just something to try out. I currently live alone and I like it that way, and I see no reason why spending more time with other people would be such a great thing.
You seem really rigid about excuses, though. I think the tendency will be that people will come up with an excuse that is unpleasant or difficult to dispute. For example, when I was in the data science bootcamp in Berkeley, people would very frequently say, “I’m sick and I will be working from home today.” Now, a lot of people were in fact sick, precisely because of so much physical proximity. But it was very obvious in many cases that the basic reason they were staying home was that they were tired of all the company and felt the need to get away. They did not, however, feel comfortable saying, “I just feel the need to get away.”
The same thing was true when I lived in a monastery. You could not say “I just feel like sleeping in this morning,” so people said “I didn’t come this morning because I didn’t feel well.” We all knew that this simply meant they were tired and felt like sleeping in. But no one is comfortable confronting someone with the fact that they’re not really sick if they say they are.
The focus on physical presence is a combination of research showing that it matters (there’s some stuff I’ve collected from Dunbar, for example) and strong personal intuition from past experience. In many ways, it’s the core of the thing being tested out, but I have a lot of weight on “it turns out to matter more than just about anything else.”
re: excuses, the intention of the house is Not To Do The Stupid Thing.
Clearly, “mental health” days are a real phenomenon—I’ve taken some myself. And on a larger scale, psych blockers/motivational issues are also real. So it’d be stupid to a) pretend they don’t happen, or b) push directly against them all the time and never look at undercutting them or working around them. This plan pushes directly against them some, with commitments to just show up anyway, but that’s not the only tool—one of the things I hope to do is increase the candor of all housemates, at least within the context of the house. This will take some practice and reinforcement, but I much prefer a norm of “Huh. I notice I just really didn’t want to show up today” --> figure out what’s going on and address it systematically, to a norm of “little white lie that nobody calls out.”
It’s also worth noting that the house has a pretty high introvert quotient, so there will be a lot of us (myself included) who are motivated to safeguard systems giving one the ability to get away from people for a while.
Thank you for writing that! It’s great to see the “too meta” problem spelled out so clearly. It’s similar to the situation in programming that has long puzzled me. Many people and companies have accumulated processes that they swear by (code review, type systems, continuous integration, agile and whatnot) but at the same time lots of people do amazing work with very little process.
It seems like meta stuff has a way of self-justifying and growing, like a bureaucracy. It’s useful if you’re stuck and nothing works, but if you’re making any progress at all, it’s better to steer with the engine so to speak. Radical meta proposals sound attractive to people who have fought their minds to a standstill, but even for such people I think a better idea is starting one small object-level thing on a strict schedule (gym is a good choice), making the mind more mobile for other things in turn.
Are there people external to the project who are going to keep an eye on this? I think it would be sensible for each participant to have a buddy outside the house who checks in with them regularly. And for each buddy to know who the other buddies are.
I’ve come around somewhat to the outside buddy idea below; I dunno about the buddies knowing each other. That seems to introduce a whole new layer of difficulty, unless you’re just talking about, like, an email list.
Cool. Yes, a mailing list sounds even better than the low-tech solution I had in mind, which was “every buddy learns 80% of the names of the other buddies through the grapevine, and they happen to be one or two hops away on the social network”.
This seems extreme. Do you not expect that each participant will already have at least one friend outside the house they can talk to about the house if things go poorly, without this needing to be an explicit policy? Or do you worry that things will go so poorly that this won’t work for some reason? If so, can you share a more detailed model?
I think there’s a difference between a friend that one could talk to (if they decide to), and a friend tasked with the specific responsibility of checking in and intervening if things seem to be going badly.
Parts of the house setup pattern-match to a cult; cult members aren’t good at realizing when they need to leave, but their friends can probably tell much more easily.
(I don’t mean the above as negatively as it sounds connotatively, but it’s the most straightforward way to say what I think is the reason to want external people. I also think this reasoning degrades gracefully with the amount of cultishness.)
Yep, this is why I’m in favor of the “outside friend” norm. In particular, despite not planning to make a bad cult, if I accidentally do, I’m in favor of it being noticed as soon as possible, so it can either be fixed or dismantled.
I’m not proposing a house policy here. I’m suggesting that a Dragon would do well to have regular followups with someone outside the house, and I’m proposing that some members of the wider community offer to be those someones.
In the past I’ve had regular video calls with a couple people who were doing long-term experiments with their lifestyle; I think it was helpful. I believe such an arrangement was part of the Leverage polyphasic sleep experiment.
Jacob is right: There’s a difference between a friend one can reach out to if one needs to, and a friend one is scheduled to talk to once a week. Personally, I struggle to keep up with friends without scheduled meetings, and it sounds like the Dragon Army will be very busy.
Also, there is a difference between reaching out to a friend when things have gone very wrong and one needs to get out; and bringing up a less drastic problem during a weekly check-in. In the first case, you need a couch to crash on and maybe a lawyer. In the second case, you need someone who will listen to you and bring an outside perspective, and maybe refer you to other resources.
Partially, I’m afraid that if this doesn’t go well, our community will lose a cohort of promising people. It would be a shame if that happened because we failed to pay attention to how they were doing.
But also, if the experiment goes very well, this arrangement would be a means by which the wider community can learn from what went right.
Partially, I’m afraid that if this doesn’t go well, our community will lose a cohort of promising people.
I really don’t know what you mean by “lose” here (and I’m worried that others will have varying interpretations as well). Do you mean they’ll become less promising? Not promising? Leave the community? Go crazy? Die?
Anyway, this seems sensible, but I still want to nudge you and everyone else in the direction of sharing more explicit models of what you think could actually go wrong.
Sorry, I was imagining a scenario where a person has an unpleasant experience and then leaves the community because for the last several months all their close contacts in the community were in the context of an unpleasant living situation. That’s bad for the person, and unfortunate for the community as well.
I see a possible failure mode where a member of a participant’s family not into any rationalist community sees the Dragon Army rules and pattern-matches the rules and behavior into ‘cult’ (not arguing whether that pattern match is correct here, just saying that it might happen).
A family member concerned that their loved one might be involved in a dangerous cult might take extraordinary measures to remove that person from the situation, which might get very ugly.
I’m not sure that a nonparticipating buddy is sufficient to mitigate the risk of ‘rescue’.
I expect it to fail. And I kind of wish you wouldn’t try: I give maybe a 1⁄4 chance this fails sufficiently dramatically and publicly that I become less willing to be associated with the community because people start associating it with that failure.
In particular, here is what I expect to happen (~60% confidence it goes down something like this):
Someone will start regularly defecting within the first three months. Maybe they don’t keep up with their chores, maybe they skip meetings, maybe they fail to get along with someone and they fight, maybe they persist in doing something they’ve been asked repeatedly not to do, maybe they chafe under your leadership and start practicing malicious compliance. I don’t expect intentional defection so much as executive dysfunction, to be clear, but it has the same effect either way.
You, personally, will lack the force of character or charisma to fix it. (I haven’t met you in person, so this might be way off; I’m just going off your writing and those of your pictures on Facebook I can see. But it takes an extraordinarily good manager to deal with this problem, and there’s nothing in your bio which implies you are one.) You also, not being legally their military superior, won’t have any actually worthwhile carrots or sticks to offer—this is the core problem, as I see it, that you lack the legal authority to properly enforce anything. Also, rationalists are weird, and often don’t respond that well to the usual incentives.
The rest of the house will lose confidence in your leadership as a consequence.
Bad things. I don’t actually know what happens at this step—people move out, or just stop playing by your rules and it reverts to a standard if unusually dysfunctional group house, or what.
Unfortunately I don’t have fixes to offer you here, other than “try to figure out an enforcement mechanism which will work even on rationalists and which you can legally carry out”. I can’t think of such an enforcement mechanism, but haven’t even put a full five minutes into it. Maybe you already have one in mind and I’ve missed it. To be clear, I don’t think “ostracism” will be remotely sufficient, because of the aforementioned weirdness and the fact that people will have other friends to fall back on. (I guess you could only invite people without other friends, or require them to cut off contact with said friends, but that is a terrible idea.) I also want to say that I’ve seen a number of other communities either fail or struggle due to lack of an explicitly specified and actually effective enforcement mechanism for their rules.
Tiny side note: I think it’s very important that members have regular one-on-one meetings with someone other than you, in case their problems are problems with you which they aren’t willing to bring up to your face.
Thanks for this detailed model. I had a sense of this as a failure mode, but I like the specific way you’ve expressed it.
I do actually have a fair bit of managerial skill. I dunno if it’s better than 1⁄100, but it’s at least in that range. I also completely agree about regular one-on-one meetings with other people; in part, that’s what the “pair debugging/rapport building” time commitment is. I wonder if you think it’s important that they be with a specific other person, or if you think just fostering lots of one-on-one communication hits the thing you’re gesturing toward?
A specific other person intuitively sounds better to me, but that might just be because that’s how it has been done in organizations I’ve been in. (Though it sounds hard to schedule if it’s not a specific person, and it’s important that this be a regular thing with the specific topic of “talk about how things are going,” not just general time spent together.) Maybe your second in command, maybe a different person from the command structure—I assume there’s going to be people other than you with roles like “general household management” (I am thinking of office managers, if you’re familiar).
I don’t think the pair time accomplishes quite this. Having a specific time set aside for one-on-one meetings specifically as the regular opportunity to bring up issues means issues which might otherwise have stayed at the back of the mind get brought up more. Generic time spent together does not accomplish this. It’s approximately the same reason you want scheduled one-on-one meetings with everyone in the house despite presumably spending a lot of time with the people in the house in other contexts.
Hmmm. It might be good to install as a house norm that everyone has an outside advisor that they commit to checking in with, either once a week or biweekly. Like, someone not directly affiliated with Dragon Army in any way.
That’s only useful if the outside advisor has some level of veto power. I’d suggest something like allowing them to trigger a discussion meeting /outside of Dragon Army Territory/ with the advised, optionally including the Commander and/or other members, and also at the option of the advisor including legal counsel or a medical practitioner.
Not because I expect anyone to need the safeguards involved, but because making those explicitly part of the Expectations makes it harder to coerce somebody into not getting help. Making coercion of the type “You’re fine, no need to waste time and leave your ingroup to try to explain to some /outsider/ what’s going on, they won’t understand anyway” ring loud alarm bells is a feature.
Can I get contact info from you? I already have Malcolm’s; if there’s an email address you can use to send a message to TK17Studios at gmail dot com, I can then offer that address to anyone without an obvious check-in.
Praise: The focus on actually doing a thing is great.
Criticism: Most of this post was about methods the house will have, why these are OK, etc. Comparatively little was about what the house is going to be used to accomplish outside itself. This seems worth putting much more up-front thought into, given how much of the point is to make a house that can actually do a thing. Probably your methods and selection criteria are not very well-calibrated for whatever project will turn out to be best—human coordination is much easier when you’re coordinating about something in particular.
Obviously you will not know everything perfectly in advance no matter how much planning you do—but planning to accomplish a particular thing is very qualitatively different from planning to accomplish things in general.
Praise: A lot of the details on how to live together well (group exercise, food, time explicitly set aside for checking in) seem really good. If step 1 is just “learn to live well together,” that is itself a respectable project, and one most of the Rationalists have failed at. Probably most attempts at this fail; we only observe the old communes that didn’t fall apart.
I like both your praise and your criticism. re: the criticism, one of the reasons I’ve held off a bit is a suspicion that I can’t actually well-model the sorts of things the house will accomplish once fully formed (that it will be stranger/more surprising than I think). I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum, etc. but they were all over the map.
I can’t actually well-model the sorts of things the house will accomplish once fully formed
My best guess is that having a highly specific plan that includes steering/replanning capacity and then totally abandoning it when the wheels hit the road because it turns out to be the wrong thing is way better than having a generic plan.
I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum
I’d love to see how you’d design a house specifically for any one of these goals. Robot car is the one that I think would give you the most feedback from your internal models during the planning stage, followed by publishing a book or movie. “Create an org” is a bit recursive, and a talk series is probably either too easy or too vague. Not sure what you mean by develop Veritaserum but it seems to strongly overlap with some of Leverage’s most plausibly successful research.
I claim with moderate confidence that simply walking through how the house as currently planned might go about building a robot car would substantially improve not just your plans for particular object-level capacity, but general capacity. “How will this organization change its mind?” might be a lot harder to cash out usefully than “How will this organization change its mind about valve design for the fuel injector?”.
re: your best guess, that makes sense. It’s possible I should just choose one of those plans above (many of which actually have lots of fairly detailed planning behind them already) and run with it for now.
Eli Tyre strongly agrees with your last paragraph, and is (correctly, and appreciated-ly) pushing for the first large-scale project to be determined sooner rather than later.
Thing that sticks out to me: you mentioned the value of doing something as a house as opposed to as a company. Some of these seem like the sorts of things one does at-a-company-in-particular (and seem like they’d require the amount of time commitment that a job requires). Is there something that distinguishes doing this as a house vs. doing this as a particularly intensive company?
Note that those are deliberately not in the charter itself, because I doubt they’re sufficient.
Two things distinguish it—one, starting a company is harder than starting a house, and two, a major part of this is to bind people in a society, and everyone around me already seems to have separate buckets for “my job” and “my life.” I think it’s important to start leveling up people and getting people moving in the “my life” bucket, and that the “my job” bucket already has plenty of forward momentum and pressure.
To Duncan: I am not going to say you are trying to start a cult group, like some other folks did in this thread. However, I am going to suggest some background readings on cults if you are interested. Cults are a hobby of mine. My favorite cults are Scientology, unofficial Scientology derivatives who kept most parts of the belief system (yes they exist), and the Fellowship of Friends and other Gurdjieff-offshoot cults. Also Carlos Castaneda’s group is a fun one. Those are the fun ones to read about.
To people Duncan is talking to: you are a human being, not a space monkey. The space monkey road is not a good road, I speak from personal painful experience. The space monkey road is going to abstract personal growth issues in a way that will be counterproductive for you in the long run, imo.
Ilya: if you recommend your top 2-5 sources, I’ll commit to reading at least 30,000 words in the next two weeks. (I ask for more than one source in case you propose things I’ve already read.)
Live stuff on Robert Burton’s Fellowship of Friends: http://robertearlburton.blogspot.com/. Also some exposes are googleable. Also some stuff on wikileaks. I have personal second hand info on this cult (was never in it, but know people who were). The Fellowship of Friends has their main base (Apollo, in Yuba County) in California and preys on educated, high salary types.
There are a ton of Gurdjieff offshoots in various states of virulence/danger. One thing I learned about the concept “cult” is it’s a fairly fuzzy concept and sort of dissipates around the edges into fairly benign reading groups/clubs and so on. Probably has to do with how charismatic the main person (almost always male) is. So discussions of whether something is “culty” or not are, to me, kind of silly. If the question is raised at all, probably yes a bit culty.
I like reading lots of heterogenous sources and personal accounts to try to piece together what’s happening in places like that, rather than books.
My favorite cult to read about is Rajneeshism. It’s very recent, the head guy was almost supernaturally charismatic by all accounts, and the story is hilarious! From the collection of 93 Rolls-Royces to a bioterror attack by poisoning salad bars in an Oregon town with salmonella (yes).
BTW, Scott of slatestarcodex has also chimed in against the OP’s proposal:
On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up sinking the project.
Slatestar: “Also, Duncan’s taking the wrong strategy by denying it’s a cult. His pitch should be “Hey, cults seem pretty good at controlling their members, let’s get together a bunch of people who are interested in using cult techniques to become the best people they can be by their own values, and see if we can make it work.””
I agree with Scott on this. When proposing that we should return to well-explored territory found to be dangerous (which is what I claim cults are), we should at least be honest about the fact that we’re returning to old territory, and perhaps argue that it was in fact not as well-explored as we thought and there might be good things to be found there.
But instead, Duncan appears to be arguing that, according to the Pendulum model, we have moved so far past the “old way of doing things” that we skipped over the optimum and are now in another poor solution. He suggests his proposal is a gentle nudge toward the optimum, but this doesn’t seem to square with the fact that the “cult” model is the “old way of doing things” that we were previously stuck in. So to me it seems more like “swing even harder in the opposite direction!” when the pendulum should actually be slowing down, moving toward the optimum with less momentum than it had previously.
I agree that “cult” is a loaded and derogatory word and probably should be abandoned in favor of more information-carrying terminology. It might be better described as the centralized authority model. I stand by my claim that the centralized authority model is a return to old territory, though, and this meshes well with Scott’s model of the formation of the bi-modal distribution of peoples’ priors about this (marginalized groups have probably been exposed more to the centralized authority model than privileged Westerners).
“cult” … might be better described as the centralized authority model.
I don’t know about that. There are a lot of organizations with highly centralized authority which are not cults (by any definition). For example, the military.
I would probably define “cult” as an entity which, when faced with the question “Who are you going to believe, me or your lying eyes?” strongly encourages the answer “You, of course you!” In more abstract terms, a cult depends on controlling the information flow to its members, both through isolation and through inculcating high trust for “internal” claims and low trust for all “external” claims.
Cults are not good at getting members to fulfill their own values. Consider the amount of cults that valued sexual purity and ended up with a whole lot of rape and child molestation.
BTW, Scott of slatestarcodex has updated his post with an “on fourth thought” (in addition to his excellent theory on the dynamic motivating disagreement) that states he’s moving away from concern (though not necessarily all the way to “unconcerned”). I’m hoping you would’ve posted this yourself—having sort of implicitly committed to using Scott’s opinion as an advisory authority—if I hadn’t done so myself first. Not just trusting him when he’s on your side, and so forth.
I’m encouraged by this both because they seem like good ideas and because they sound like he’s thought this through more fully than I originally thought.
Also, if we are going to keep bringing in questionable outside blogging as source material, there’s this, which I feel fairly treated by and comes from an author with actual relevant life experience.
I think most people can do well by joining the kinds of relationships that are time-tested (marriage, friendship, work, school, gym, army, church...) From how much trouble it took society to get these halfway working and find decent boundaries, you should be skeptical of inventing new ones that will work in your lifetime. Especially if they look suspiciously similar to cults which we already know don’t work.
And I’m not even sure why you need to invent new relationships! You might feel like you have huge problems that require one huge hammer to solve, but that feeling is deceptive. Mitigating the problems one by one, with boring well-known fixes, is easier and works better. If you want to get fit, join a gym. If you want to learn something, go to school. These will give you the right amount of structure and your daily dose of socialization, without regimenting your life like a boot camp, and you’ll be guided by competent people instead of fumbling your way as a crowd of amateurs.
I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. “cults don’t work.”
This is largely not a new invention, and is instead largely a return to structures and values that have been known to work in the past, and have been loosened/undermined in the past few decades.
I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. “cults don’t work.”
My opinion of CFAR just fell from “neutral” to “mildly harmful” because they hired someone who’s willing to say the above. On old LW (where Eliezer wrote a sequence on avoiding cults and I was contributing decision theory math) this would’ve been unbelievable. Or maybe I’ve been missing the signs, not being in the Bay Area.
You’re not thinking or arguing clearly, and are instead leaping to conclusions and pulling from stereotypes.
If you lose respect for CFAR over that, it’s the result of your own confusion, and the loss of your endorsement is not one I’d lose sleep over.
One can say “guns are indeed effective” and not be advocating for wanton gun violence. It’s a statement about objective reality—guns do things—not a statement about normative values. Similarly, I can argue with your claim “cults don’t work” (which is clearly, demonstrably false on at least some axes; cults were in fact successful enough to cause large damage to a lot of people’s lives, at the very least) without saying “HECK YEAH, GO CULTS.”
I’ll continue to engage, or not, based on whether or not you respond reasonably to the above. Sorry for the impatience, but I’ve written thousands upon thousands of words in this thread by now, and I’m not at all in the mood to let people strawman me at this point (even if they want to try to pull a sneaky status move by claiming seniority-on-the-forum and trying to shame a certain kind of statement without any model behind the shaming).
(I also note that you didn’t bother to respond AT ALL to my claim that you’re making unfounded leaps, nor to my claim that this is in fact a return to previous proven systems rather than an attempt to invent a new one, which makes me think that in addition to smushing together unrelated things in your arguments, you’re not actually here to discuss, i.e. swap statements back and forth on a topic and in fact interact with what the other person is saying, and are instead here to just score points or confirm (rather than falsify) your own models.)
If you took my original comment to mean that cults are harmless, that’s a bit bizarre.
As for previous proven systems, I’m not sure which ones you mean. The closest analogue is religious or socialist communes, which turn bad too often for my taste. The happiest exception is kibbutzim which weren’t nearly as authoritarian as your idea. Then you have the army, which exists today just fine and we know what it’s good for, not sure why we need another one. Then there are boarding schools, sport camps etc. but these are based on learning from professionals which you don’t have.
I took your original comment to be saying “cults don’t work.”
Then, when I said “they do, though,” I took your second comment to be pearl-clutching and saying “well, now I think CFAR must be (slightly) evil or stupid for hiring someone who is willing to say out loud that cults work (gasp).”
You cannot possibly have drawn out of my statements above “Duncan thinks cousin_it thinks cults are harmless.”
I’m going to disengage because it’s not easy to have discourse with you (say things clearly, stick to a topic, expose reasoning, actually make progress toward truth or convergence). I don’t understand how your reasoning process works. I’m finding this subthread frustrating and low-value, and thus far the specific points I have been able to tease out of what you’re saying, I generally disagree with (and trust my domain knowledge and expertise more than I trust your skepticism-without-any-concrete-evidence-backing-it-up-from-someone-who’s-already-demonstrated-willingness-to-make-unfounded-leaps).
Militaries have a pretty big stick. You can go to prison for insubordination or disobeying orders; in wartime you might well just be shot for that. The Dragon Army… will give you a stern talking-to?
The only person I heard of going to the brig was one who broke into barracks and stole personal property. Falsifying official records or running a side job as a real estate broker was more of a “30 days restriction, 30 days extra duty, reduction in rate to the next inferior rate, forfeiture of 1⁄2 month’s base pay for 2 months” thing.
Actually I agree. It feels weird to see that one person upvoted my comment without knowing how many would have downvoted it. The same might apply to Duncan’s post, from the comments it seems like it was really polarizing, but the score only shows the 28 upvotes. If I may be allowed another reference to old LW, Eliezer used to advocate that people downvote more, ideally without replying. I think he saw it as a defense against noise and then left when the noise became too much.
You can get a clearer-if-still-imperfect sense from contrasting upvotes on parallel, opposing comments, e.g. it has 28 upvotes and 1029823904812309481320948blargltroll has 10. I highly doubt this would have ever received sufficient mass of downvotes to become invisible.
You can get a clearer-if-still-imperfect sense from contrasting upvotes on parallel,
I’m fairly certain that P(disagrees with blargtroll | disagrees with your proposal) >> P(agrees with blargtroll | disagrees with your proposal), simply because blargtroll’s counterargument is weak and its followups reveal some anger management issues.
For example, I would downvote both your proposal and blargtroll’s counterargument if I could—and by the Typical Mind heuristic so would everyone else :)
That said, I think you’re right in that this would not have received sufficiently many downvotes to become invisible.
It’s an appeal to authority and someone shitting on an organization based on one line of a lesswrong comment by one member of that organization, with no request for clarification or depth.
I don’t think there is a good argument that a Western Church works that much better than a Yoga Ashram, and the setup of the Dragon Army is relatively similar to a Yoga Ashram.
If you want to learn something, go to school. These will give you the right amount of structure and your daily dose of socialization, without regimenting your life like a boot camp, and you’ll be guided by competent people instead of fumbling your way as a crowd of amateurs.
When comparing kids with decent parents, schooled children don’t do much better than unschooled children.
When I was at university learning computer programming I quite often used the not-time-tested StackOverflow over the time-tested method of asking the tutor.
Churches don’t have cohabitation, they’re more like clubs, so the risk is lower. And in an ashram you hopefully get taught by a yoga professional, not just bossed around. I don’t see the value of OP’s proposal compared to either.
I thought homeschooled kids were usually taught by parents? Though I agree that you can learn stuff on your own. The problem is learning a broad range of ideas in a manageable time, not just picking a narrow path through topics that catch your interest, and for that many adults find universities useful. Not to mention you meet many smart people and pick up good memes without realizing it. Somewhere on Tumblr I saw a proposal for improving MIRI’s research that said simply “everyone gets a PhD”, I thought there was a lot of truth to that.
I note that you continue to assume/argue as if there will be zero relevant professional expertise, despite the fact that the professional expertise CITED IN THE MAIN POST is far from the only professional expertise that will be brought to bear during the experiment. In our very first dry run outing, we hired professional instruction to learn a new skill—you are factually incorrect in your assertions.
You are doing your level best to make sure to interpret everything here in the strawest, most negative possible light. “just bossed around.” I’m starting to assume you literally haven’t read the post, because it’s rapidly becoming the only possible explanation for your conclusions.
You’re not setting up a school with yourself as teacher, though. You’re setting up a commune with yourself as boss, with rules above and beyond what schools usually require, and which would lead to student rebellion if they were imposed in a university. So if learning is the point, I’d like to understand how your thing is better than a school.
Also, you miiiiiiiiiight try not leaning so hard on the typical mind fallacy. West Point is a university that people self-select into with far, far, far stricter rules than this, and they don’t rebel. Ditto many sorts of monasteries and temples and martial arts dojos and retreat centers (including many that are non-religious), ditto all sorts of invasive practices by organizations attempting to turn around lives-gone-astray or turbocharge the already-successful (e.g. regimens put into place by life coaches or intensive business training groups).
You’re confusing “I don’t like this” with “this is objectively bad” (or “I would rebel against this” with “no sane person would fail to rebel against this”), which—to quote you—on Old LW would have been unbelievable.
Once you make even a single good faith attempt to pass my ideological Turing test (my attempt to pass yours is the multithousand word post above), I’ll start taking your criticisms as seriously as I’ve taken everyone else’s.
“Life coaches”, bullshido dojos and religious brainwashing houses aren’t a good group to be in. It seems to me that such places are fine at teaching authority, but not the best way to teach anything else. I wouldn’t go to West Point to learn math or even history, I’d go somewhere that focuses on math or history instead. And even for fitness, dojos lose to compartmentalized workouts like lifting or BJJ.
Maybe my mistake is misunderstanding the rationalist community. I know that they are a slightly weird bunch, but it’d take a lot to convince me that a boot camp environment would suit them. In the Russian Army such folks tended to be miserable, whereas in relaxed civilian jobs they thrived. That’s part of why I’m replying to you: I feel that nerdy types are vulnerable to proposals like yours but ultimately don’t benefit from them. They already have a lot of tension and random windmills going in their minds; putting them in a pressure container makes it worse, compared to doing casual normal stuff.
Your mistake isn’t misunderstanding the rationalist community, it’s strawmanning and stereotyping and typical minding. If you stopped for one second to think, huh, maybe somebody who’s clearly not an idiot and whose models so strongly disagree with mine might see something I don’t, and approached this thing with curiosity instead of blunt assertions about how it’s terrible and you know better, you could’ve, I dunno, asked questions about places where you’re confused, and I would have answered them, and like many, many other places in this thread, there would’ve been a process of mutual updating and convergence as I showed you cool conclusions from thinking I’d already done, and you helped me find holes and flaws and make fixes, and both of us came out of the interaction with a clearer view of reality and a stronger ability to do good in the world.
Even now, like a dozen comments in, you refuse to stop it—putting scare quotes around life coaches and attaching the word bullshit to the word dojos and adding brainwashing to the phrase “religious houses.” You are not here in good faith; you’ve got a negative model you’re in love with and you’re confirmation biasing all over the place. You’re every bit as much a troll as the anonymous person was—you’re just more subtle about it.
Besides the fact that this is pure ad hominem, you seem to understand “good faith” as trying to help you. Let me point out that no one has any obligations to help you or to cooperate with you—refusal to do so is not bad faith. Pointing out that your endeavour is misguided and doomed to failure (assuming that’s a point of view honestly held) is not in bad faith either, even if you do not accept the arguments made.
You are perfectly free to not cooperate with people who won’t cooperate with you, but that lack of cooperation on their part is neither malice nor trolling.
You got a lot more defensive over the past few days.
I disagree that the summary is ad hominem—I think it is a concrete description of my highest-probability model explanation of cousin_it.
I don’t interpret good faith as trying to help me. I do interpret it as trying to help us, where I define “us” as “all of the people on LW and in the rationalist community” specifically, and more broadly as “all humans.”
I don’t see cousin_it as doing any kind of truth-seeking or curious investigation, nor do I see them as taking a principled stance against something that is actively dangerous (the way the troll did). Instead, they’re just throwing out straw criticisms without actually bothering to put in the work to engage with the actual topic at hand. It smacks of either careless antagonism or an attempt to score cheap points, whereas many of the people who are openly and unrepentantly opposed to this project still seem, to me, to be acting in good faith.
I disagree that the summary is ad hominem—I think it is a concrete description of my highest-probability model explanation of cousin_it.
Buzzword compliance aside, this is precisely what ad hominem is: “a … description of … ”. The subject is your proposal for a commune—not your beliefs about cousin_it.
I don’t interpret good faith as trying to help me. I do interpret it as trying to help us, where I define “us” as “all of the people on LW and in the rationalist community” specifically, and more broadly as “all humans.”
That sounds to me like pious crap. I don’t see you as different from the 99.9+% of people who are not qualified to judge who is trying to help “all humans” and who is not—and that’s even besides the oft-made observation that the road to hell is never in need of repair.
Let me remind you again—we are discussing your proposal for a commune, not whose intentions are pure.
As I said, you are free to cooperate or not, but focusing on what you see as personal shortcomings of people who disagree with you seems like a road that leads to bad places. Especially given that you put forward yourself as the Dear Leader of this potential commune.
Right. The problem is, only some of us are actually discussing.
In point of fact, most of us are actually discussing, but threeish people have just dropped in to lecture with no even hypothetical willingness to change their minds (or at least none credibly demonstrated, as I claim I’ve credibly demonstrated mine).
EDIT: Also, on reflection, I still think you’re either misusing the term ad hominem or mischaracterizing the critique I’m making of cousin_it. I’m not trying to make claims about them as a whole person (e.g. they’re bad in general or they lack the ability to engage in good faith in general), which is I think what is required for it to be ad hominem—I have to be making some fundamental attribution, and I’m not. I’m saying that the words they’ve typed in this thread are inconsistent with someone acting in good faith, which is a claim about observations and causality, and not about character.
I assume you have noted, because you’re perceptive, but just to say here—I have repeatedly expressed credible gratitude for the presence of countervailing models and criticisms and so forth, and done at least some significant updating in plain sight. I don’t think it would be fair for people to round me off to “was looking for a hive mind.”
The point here is merely to what degree LW is special and what can you expect from it. I neither said nor implied that you went looking for a hive mind.
Yeah, I want to similarly underscore/perhaps redundantly state that you have demonstrated extremely high and consistent credibility when it comes to productively engaging in discourse. With the comment above, I was underscoring a thing that plausibly could’ve just gone unstated.
I agree I got a lot more defensive over the past 36 hours, but you’ll note it’s confined almost entirely to two specific cases where I feel people are approaching with unjustified confidence in extremely uncharitable models, after all of the other discussion that’s gone on (which I feel should’ve earned me some credibility).
unjustified confidence in extremely uncharitable models
From your point of view, maybe—but it’s not the only one.
You seem to be welcoming comments about which parts of your plan to slightly bend, adjust, and repaint, but you are visibly hostile to the idea that your proposal is flawed at its core and cannot be saved regardless of tinkering with its details.
Yes—that’s because the proposal is not flawed at its core, and I’m not going to pretend that it is to satisfy pearl-clutchers and lecturers. (More accurately: I have greater than 95% confidence that this experiment, conditioned on it meeting existing criteria for launch, does not cause great harm to people on the scale of six months.)
I note that I am willing to engage with my real, extant uncertainty with people who don’t approach from a holier-than-thou, know-it-all, condescending, lecturing position. For instance, it’s not even clear that the house will actually happen, because it’s not clear that there will be enough people who think that it’s a good idea. I’m not trying to convince any of the potential members—instead, I’m simply revealing, revealing, revealing the models, shining as much light on them as possible, so people can neutrally evaluate, and I still have ~33% credence on “there won’t be enough justified faith to do it.”
If someone were to say “Hmmm. I’m reasonably confident that this proposal is flawed at its core and can’t work; here are my objections and here are my questions,” I’d engage with them (and this is a credible claim if you look back through this thread). What I won’t engage with is people who don’t even know me who are trying to pull status moves to put themselves above me (and therefore in a position to judge) from the get-go.
As another way to state my point, I’m credibly offering good faith and charity to the vast majority of critics (all but the bottom 3%). But the people who are coming in with deontologically hostile models are not offering me any good faith and charity in return. And you’re right that no one owes me that, but similarly I don’t owe them any response other than “yeah, screw you, too.”
that’s because the proposal is not flawed at its core
And how do you know that?
Or, let’s put it this way: which evidence short of actually attempting to implement this would persuade you that the proposal is flawed?
who are trying to pull status moves
So, how much do you care about status? Why is it a big deal?
similarly I don’t owe them any response
True. But you are offering them a response. This response illustrates how you react to what you believe is unjustified criticism—and it is not “I disagree. Tap.”
The confidence regarding it not being flawed at its core comes from related past experience, confidence in the individuals involved, the direct evidence of the positive value of norms stolen from Dreamship and Event Horizon, faith in the safety valves of Circling, pair debugging, internal and external check-ins, and commitment to iteration, and the results of having run a trial version that went quite well.
There was evidence I could have gathered from the experimental weekend that would have persuaded me the proposal was flawed, and there were similarly potentially unknown arguments that people here on LW might have offered up that would have been persuasive, too, but at this point, I can’t outline concrete predictable evidence that would cause me to not run this (not actually all that ambitious) experiment. It’s like the pants ending up in Washington DC—there probably exists evidence that would convince me, but I can’t reasonably guess what it might be.
In response to both the status question and the owed-response question, I do believe that people need to adopt a policy of loudly objecting to moves they want to be considered outside the Overton window, especially if those people have some social capital to spend (because they’re doing it not only for themselves but also on behalf of the disenfranchised who can’t afford to push back). In other words, in part, I’m fighting the two people I think are Doing It Wrong because I want to be publicly seen fighting on behalf of not that. I think that it overall increases rather than decreases my credibility on axes that I think are relevant.
I do believe that people need to adopt a policy of loudly objecting to moves they want to be considered outside the Overton window
You are either grandstanding or misusing terms. People’s objections to your proposal (including both form and content) are firmly within the Overton Window and are nowhere near its boundaries. I have trouble believing that you actually want as tiny an Overton Window as you imply.
If I may make a suggestion? Stop digging. The narrower you make the range of acceptable thought/speech, the less adequate you look. The more you attack and denigrate people who fundamentally disagree with you, the less credibility you have as a leader.
Note again that we are on Less Wrong and within the rationalist community, both of which are very much built around norms of reasoning and discourse; I’m not suggesting a tiny Overton window for the world at large or even one that’s this constricted on all axes.
But yes—I think both Less Wrong and the rationalist community would be far, far closer to the ideal versions of themselves if they doubled or tripled their callouts-of and refusal-to-engage-with sloppy and biased and inappropriate discourse. Overton window being “things a politician can say on TV”—I want “styles of discourse that a high-status rationality community member can publicly endorse” to not include the stuff cousin_it and handoflixue were doing. My concerns are almost entirely about form, because I think correct form leads to improved content. I could take any of the objections that cousin_it or handoflixue or 128bargl had and recast them into (e.g.) “the sort of sentences Julia Galef or Rob Bensinger would say,” and they’d be worth fully engaging with, but in their current form, I claim there’s more long-term civilizational value to rejecting them.
I’m entirely okay with losing credibility with people who don’t value the above. Those people shouldn’t hold me in high esteem—we have at least partially opposing goalsets, and will at least occasionally be actual antagonists relative to one another; I’m actually taking some mild encouragement from how violently people I fundamentally disagree with are disagreeing with this project, because it’s weak circumstantial evidence that I’m moving in the correct direction. (I.e., how adequate I look to you is not necessarily an appropriate measure; Sanders and Clinton both frequently made moves that made them look less adequate to some people.)
And I again disagree with your characterization that I’m attacking and denigrating people who fundamentally disagree with me, and I’m surprised that you’re rounding things off that carelessly. If you want to see personal attacks and denigration, look at (e.g.) the blog post that cousin_it cited to Kaj. Nothing I’ve done here comes anywhere close to that—I’m attacking and denigrating specific forms of argument, and specific modes of reasoning. For example, if you look at the time where handoflixue asked a clear and cogent question without any unfounded critical leaps, I gave a multiparagraph answer with lots of concrete detail. I grumbled at them a bit for their other interactions with me, but I didn’t treat their point or question any differently because they’d bugged me elsewhere. I have no problem with specific people; it’s just that at some point my prior on the VOI of engaging with them drops too low. It’s Bayes—one of my fundamental moral principles is that you should trust in revealed preferences, and barring credible reasons to believe someone’s made a major personality shift, you should evaluate them as the sum of their actions.
(Also, I think it’s not grandstanding if I’m literally practicing what I’m preaching in real time? Like, I’m doing exactly what I claim a person ought to do, not just moralizing with no action behind it.)
would be far, far closer to the ideal versions of themselves if they doubled or tripled their callouts-of and refusal-to-engage-with sloppy and biased and inappropriate discourse
I don’t think so. I think they would be dead or sufficiently engrossed in navel-gazing to be functionally dead.
I claim there’s more long-term civilizational value
So, grandstanding.
Those people shouldn’t hold me in high esteem—we … will at least occasionally be actual antagonists
It’s perfectly reasonable to hold one’s enemies in high esteem and in fact one of the traditional measures of success is the caliber of enemies you’ve acquired along the way. For non-fatal competitions you actually want the best, highest-esteem enemies you could find—they will push you to become better (as opposed to nuisance pests who will only encourage you to stay irritated and smug).
I’m actually taking some mild encouragement
That’s the classic “reverse stupidity” argument.
Nothing I’ve done here comes anywhere close to that
As Alicorn pointed out, the situation is not symmetric. Writing a Tumblr rant is a very different thing from asking multiple people to surrender not insignificant amounts of autonomy to you, as well as become emotionally and financially entangled in a project of yours.
I’m attacking and denigrating specific forms of argument, and specific modes of reasoning
No, you don’t. You actually tend to oscillate between ad hominem attacks and replying to specific criticisms.
Or maybe you don’t think of the “you think wrong thoughts expressed in the wrong way and you should be ashamed of yourself” as an attack? Let me assure you that it is.
at some point my prior on the VOI of engaging with them drops too low
If that were so, you would stop engaging with them. But you don’t.
ETA
I think it’s not grandstanding if I’m literally practicing what I’m preaching in real time?
That’s not how it works. If you loudly proclaim that, say, the use of mis-gendered pronouns is a major human rights violation akin to torture (or that letting trans people use the bathrooms they want is the end of Western civilization), you are grandstanding even if you literally throw a temper tantrum in real life.
I’m now feeling deliberately misunderstood, and if you’re doing that on purpose, I ask you to stop.
We disagree about Overton windows; that’s good, and cruxy.
According to the definition of grandstanding that Google throws up when you type in the word, you’re misusing it (particularly, the word requires you to make claims about my internal state and purpose, i.e. what I’m doing X for, and your best source of data there is my self-report). It’s not grandstanding, and I note it’s far easier for you to name-call than to actually make a specific critique stick.
It’s perfectly reasonable to hold some of your enemies in high esteem—for instance, I note we’re disagreeing pretty heavily here, and I have a great deal of respect for you. But it’s unfounded to jump from some to all. Many of the people opposed to this idea are not high-caliber thinkers and reasoners, whatever other value they have as human beings.
reversed stupidity
I was extremely careful to say what I actually meant, and then you were extremely careful to strawman me by quoting only part of my words, as if I didn’t say “weak circumstantial” right in the same sentence.
Operationalize your claims that I’m making ad hominem attacks, and I’ll address them one by one. I predict you’ll definitely be able to find 1-3 examples of me sticking a foot across the line, and that they’ll be outweighed by a factor of at least five by me doing the thing I claimed I was doing. I predict you will find no examples that are anywhere near as gross as the ones put forth by cousin_it and handoflixue. I’d be willing to monetize this as a bet.
I’ve stopped engaging with them for their own sake. I have previously explained to you that I think it’s important to be seen openly defending good norms, and thus continue to engage with them for myself and everyone else. I think it was pretty lame of you to just … pretend I hadn’t said that, and again strawman by criticizing me for the thing I’m not really doing.
I am losing respect for you in this subthread, but right now it’s something like “I had you at 957 points, and I’m worried you’re going to drop to 949.” Hopefully this is just some combination of a little bit of triggering and the fact that both of us care about getting this right, and not that you endorse overall the tack you’re taking any more than I’d endorse the worst 10% of my own reactions on this post.
My working definition of grandstanding is basically “declaring that one’s words or actions have outstanding significance or impact”. Case in point: you being concerned with “long-term civilizational value”. I strongly suspect that your cluefulness about long-term civilizational values is… limited.
as if I didn’t say “weak circumstantial”
It doesn’t help you. Weak circumstantial evidence is still evidence and under reverse stupidity you just don’t have any.
Operationalize your claims that I’m making ad hominem attacks, and I’ll address them one by one.
I have no interest in fisking your comments. I offered you an outside view—if you think it’s wrong, there is no reason for me to try to convince you.
I’ve stopped engaging with them … and thus continue to engage with them
Pick one, will ya? X-)
I think it’s important to be seen openly defending good norms
Maybe, but when you say stuff like “I deny your right to judge and interrogate me” you sound like an idiot. The fact that you were capable of typing that sentence and pressing “Send” is not a good sign.
I am losing respect for you in this subthread
I appreciate your concern, but I think I’ll be fine. Really, I will :-P
I’m glad, because you just lost a lot more. I do, indeed, think your outside view is deeply flawed, and I’ve just lost an illusion about how you in particular are likely to go about engaging in discourse. As an example, you just pulled a fifth-grader-bully trick in the quote
I’ve stopped engaging with them … and thus continue to engage with them
that was purposefully thickheaded in ignoring the whole point of that paragraph.
I didn’t think you would troll/deliberately mischaracterize, endorsedly, when not triggered-in-the-moment. That was firmly outside of my model of you. Now I know something new about you, and it will be useful to me in the future.
A funny thing about you: the more you talk, the worse you look. You started by presenting a very reasonable image—you listened and you expressed willingness to take into account people’s concerns. A bit more than a week passed and you’re already screaming at people IN ALL CAPS, calling them “a jerk” and dropping dark hints about knowledge that “will be useful to [you] in the future”. How is your stress tolerance? You are not performing well when people disagree with you.
You also try to be manipulative—not very successfully, mind you—by dispensing praise and criticism in order to gain the results you want. Since we’re being all frank’n’all, my opinion of your adequacy as a leader went down a lot during this week—mostly because you wouldn’t shut up. I sincerely reiterate my advice to stop digging.
I don’t mind this whole “the more you talk, the worse you look” thing, because a) it’s symmetrical, and b) I’m entirely comfortable being seen for having exactly the preferences and principles I do have.
I’ve responded sharply, at this point, to exactly four people: a universally acknowledged troll, two people who started out clearly strawmanning me and being heavily anchored on negative opinions without justification, and now you, as you abandon standards in pursuit of scoring points.
I have not willfully misrepresented people, or immediately leapt to unfounded conclusions about their deep character, or engaged in cheap-trick point-scoring tactics against people who didn’t shoot first (with one exception that Alicorn called me out on, and I edited), or any of the other behaviors that I don’t reflectively endorse. I have certainly pulled none of the subpar junk that you’ve pulled in this subthread, and I’m proud to have opposed you as you’ve done it.
As I’ve noted elsewhere—I don’t much care about irrelevant opinions, and as people have demonstrated themselves to be below the bar of what I expect from a LWer and a rationalist, I correspondingly cease to mind what their overall judgment of me is. I generally try to judge how likely a person’s opinion is to closely correlate with truth and useful perspective, and while I hold my disregard with skepticism on the meta level, so as to not unfairly write people off, ultimately evidence is evidence. There are some people who simply demonstrate, fairly conclusively, that they aren’t going to play fair, think straight, update on evidence, etc., and are literally not worth listening to, in a VOI sense (though they may still be worth opposing in public).
I state again that something like 97% of the participants in this thread do seem like their opinions are likely to closely correlate with truth and provide useful perspective, and I’m grateful for the hours that total strangers have poured into helping me dodge mistakes. This project is something like 50% less likely to fail and 30% more likely to be really successful (relative to where it was a week ago) thanks to those contributions.
And sure—probably most of the neutral parties are shaking their heads somewhat—thinking things like “Duncan’s being too aggressive here” or “Duncan’s fighting fights not worth fighting” or “I wish he hadn’t posted X.” But that’s coin I’m spending deliberately, in open defense of things I think are worth defending. There’s no point in social capital if all you do is hoard it—at some point, people who’ve accrued it ought to take risks holding lines that others can’t afford to defend. If I lose 5% of the respect that I’ve gained, but also meaningfully embolden others who were too hesitant to defend themselves against bullies by giving them the sense they’re not the only ones bothered by poor discourse, that’s a purchase I endorse. Freedom from trolls isn’t free—turns out even Lumifer will occasionally use Trump-style tactics, if they dislike you enough.
LOL. You smell SJW-ish. A white knight selflessly spending his social capital to defend the weak against the bullies. Against “Trump-style tactics” even! And, of course, you will not be denied for your cause is just.
You are clearly incapable of shutting up so this will be amusing.
So tell me more about things you think are worth defending—especially from the likes of me. Are we still talking about the mere forms of expression which you disapprove of, or is there some deeper ideology involved? Do you see me as lacking honor, or empathy, or proper morals, or the desire to remake the world, or something else?
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve at least temporarily ceased replying to Lumifer and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
It’s been a while since the last time I was officially added to the list of the Enemies of the People and… ritually cast out, I guess? This time there is even a list of high crimes I’m guilty of—“reasons surrounding norms”. Woe is me!
I was hoping you’d show how your community will be better than current authoritarian communities, which I deeply dislike. Instead you insist that current authoritarian communities are fine and we need more of them. Hopefully you see why that’s unlikely to change my mind, imperfect as it is. Heck, my dislike for cults was clear from the first comment, which makes your jumping onto it even more weird. A master of soft skills would’ve chosen literally anything else as an opening. Even now in the middlegame you can still turn it around, though I can understand if you’re frustrated and don’t want to. My own goal in this conversation is mostly achieved, I can go on or not, up to you.
I’ve read your post, it’s nothing but red flags. You’re literally proposing that DA members greet each other with a salute and trust you more than themselves. The few upsides you mention (participants are smart, time is limited, etc) come across as excuses why you should get power now. Check out nostalgebraist’s tumblr for more folks who got the same vibe. Your comments make things worse, you clearly like authoritarian communities in your heart rather than consider them a necessary evil.
I thought homeschooled kids were usually taught by parents?
I use the phrase unschooling and not homeschooling, but even if a child gets taught by their parents, that still suggests that the average teacher is not skilled enough to provide their students with value that allows them to outperform students taught by lay-people.
The problem is learning a broad range of ideas in a manageable time, not just picking a narrow path through topics that catch your interest, and for that many adults find universities useful. Not to mention you meet many smart people and pick up good memes without realizing it.
The same arguments could be made for why the Dragon Army is a good idea.
Let’s take a random skill from the proposed curriculum, like welding. You could try externally motivated self-study at OP’s group house, or you could go to a community college and ask how long they’ll take to make you a certified welder. It seems to me that even without the authoritarian LARPing, the first option is a weird hybrid, like a flying submarine. It’s more costly than either full self-study (if you can do it) or fully spoon-fed learning at a traditional place for a set term.
The OP’s proposal is to dial motivation to 11 and hope that it leads to effective learning. Even if that doesn’t backfire, at most it lets you see the next bottleneck, and you don’t know how many there are. Traditional schools have solved all of them, and can teach people predictably without requiring much motivation (except for showing up). For well understood skills, I think they are better than rationalist groups in every way.
Traditional schools have solved all of them, and can teach people predictably without requiring much motivation
Traditional schools know how to teach welding, but when it comes to teaching introspection, or teaching the skill of teaching and tutoring itself, it’s less clear.
Teachers who have a master’s degree aren’t better than their colleagues. As far as we know, those two years spent at a university learning to teach better are worthless for teaching skill.
I would also doubt that it’s easier to learn programming via a community college course than by living together with people who can program well and who are willing to tutor you a bit.
I’m sorry to say but teaching introspection, rationality or other skills we don’t have reliable tests for is a scam. The fact that more than half of the OP’s curriculum consists of such skills is a big red flag. And learning programming doesn’t require any measures described in the OP, I know it, you know it.
And learning programming doesn’t require any measures described in the OP, I know it, you know it.
Yes, but you make the argument that traditional institutions of learning are superior. For programming, I don’t think that’s the case.
I’m sorry to say but teaching introspection, rationality or other skills we don’t have reliable tests for is a scam.
Do you believe that liberal arts colleges that claim to teach critical thinking are also scams? From my perspective, they are a lot more scammy, because they actually have the money and time to research whether their claims are true, and don’t.
I think a person who tries a new project where they have a goal that they can’t measure well is a lot less scammy than big institutions like a liberal arts college.
I don’t think the goal of OPs proposal is to learn any particular skill. To me it mostly looks like trying to build a tightly-knit group so that each member can use the others as external motivators and close friends to discuss life plans and ideas in detail not really possible between modern colleagues and friends. I.e. the goal is not learning a skill, it’s building a mutual support group that actually works.
I couldn’t comment on the linked Medium article, so I’d like to say that, for many students, particularly middle and high school students, it is simply not true that they are in class voluntarily. I was routinely threatened with dire consequences if I didn’t go to school, and attempts to remain at home and refuse to go were met with physical force—I was literally pulled out of my bed and taken to the car or bus. School is about as voluntary as the military draft.
Edit: my original response was unnecessarily brusque and rude, and I apologize. I can elaborate further, but in the meantime, you might squint at the doc again, because it was a particular message about agency aimed at people in exactly your kind of situation.
The end result of my experiment in school refusal was being put on psychiatric medication. (Which actually did help, if you consider changing my preferences to something more socially acceptable to be helping.)
In hindsight, my best strategy might have been seeking a diagnosis of delayed sleep phase syndrome and requesting accommodations under the Americans with Disabilities Act. (The trigger for all this was that the school changed its starting time from 8:10 AM to 7:40 AM and I was not willing to deal with getting up any earlier.)
I was in a special education school from third to seventh grade, and I was absolutely forced to be physically present at that school as much as any prison inmate was forced to be physically present in prison. They couldn’t force me to do schoolwork, and there were times I accepted a loss of privileges as the consequence for not participating, but any attempt to leave would be met by physical force. (The school even had a “time-out room” in which a student that became violent—a not uncommon occurrence—could be locked inside until he or she had calmed down.)
Participation was indeed a choice. Being physically present was not.
Going to class was not voluntary for me either. The consequences of not going to class included: parents screaming at me, parents kicking my ass (tiger parent style; we didn’t do “grounding” in my household), truancies going onto my “permanent record”, a full day of detention on a Saturday, etc. Things that people call “voluntary” don’t usually result in physical and emotional damage if you don’t do them.
Nonetheless, I skipped class a few times in middle school, and I suffered the consequences as a result. Were the consequences worth the glorious days of freedom that I spent skateboarding near the beach, sitting in a local comic book store marathoning manga, etc.? Maybe; maybe not.
But whether I go to class is a choice that I alone have the freedom to make. My parents and the school can set the consequences, and they can apply a lot of pressure to make particular options more or less appealing, but they can never take away my ability to choose.
On the positive side, I think an experiment in a more centrally managed model makes sense, and group activity that has become integrated into routine is an incredibly good commitment device for getting the activity done: the kind of social technology used in workplaces everywhere that people struggle to apply to their other projects and self-improvement efforts. Collaborative self-improvement is good; it was a big part of what I was interested in for the Accelerator Project before that became defunct.
On the skulls side, though, I think the big risk factor that comes to mind for me for any authoritarian project wasn’t addressed directly. You’ve done a lot of review of failed projects, and successful projects, but I don’t get an impression you’ve done much of a review of abusive projects. The big common element I’ve seen in abusive projects is that unreasonable demands were made that any sensible person should have ‘defected’ on: they were asked things, or placed under demands, which (viewed from the outside and in retrospect) were in no way worth meeting for the sake of staying in the group, and people didn’t defect. They stayed in the abusive situation.
A lot of abusive relationships involve people trading off their work performance and prospects, and their outside relationship prospects, in order to live up to commitments made within those relationships, when they should have walked. They concede arguments when they can’t find a reason that will be accepted, because the other person rejects everything they say, rather than deciding to defect on the personhood norm of using reasons. I see people who have been in abusive relationships in the past anxiously worrying about how they will find a way to justify themselves in circumstances where I would have been willing to bite the bullet and say “No, I’m afraid not, I have reasons but I can’t really talk about them,” because the option of simply putting their foot down without reasons (a costly last resort, but an option) is mentally unavailable to them.
What I draw from the case studies of abusive situations I’ve encountered is that humans have false negatives as well as false positives about ‘defection’; that is, people maintain commitments when they should have defected as well as defecting when they should have maintained commitments. Some of us are more prone to the former, and others are more prone to the latter. The people prone to the former are often impressively bad at boundaries, at knowing when to say no, at making a continually updated cost/benefit analysis of their continued presence in an environment, at protecting themselves. Making self-protection a mantra indicates that you’ve kind of seen a part of it, but an overall model of “humans defect on commitments too much,” rather than “humans are lousy at knowing when to commit and when not to,” seems likely to miss how various ideas will play out for the false-negative cases.
The rationalist community as a whole probably is mostly people with relatively few false negatives and mostly false positives. Most of us know when to walk and are independent enough to be keeping an eye on the door when things get worrying, and have no trouble saying “you seem to be under the mistaken impression I need to give you a reason” if people try to reject our reasons. So I can understand failures the other way not being the most salient thing. But the rationalist community as a whole is mostly people who won’t be part of this project.
When you select out the minority who are interested in this project, I think you will get a considerably higher rate of people who fail in the direction of backing down if they can’t find a reason that (they think) others will accept, in the direction of not having good boundaries, and more generally in the direction of not ‘defecting’ enough to protect themselves. And I’ve met enough of them in rationalist-adjacent spaces that I know they’re nearby, they’re smart, they’re helpful, some are reliable, and they’re kind of vulnerable.
I think as leader you need to do more than say “protect yourself”. I think you need to expect that some people you are leading will /not/ say no when they should, and you won’t successfully filter all of them out before starting any more than you’ll filter out all the people who will fail in any other way. And you need to take responsibility for protecting them, rather than delegating it exclusively for them to handle. To be a bit rough, “protect yourself” seems like trying to avoid part of the leadership role that isn’t actually optional: if you fail in the wrong way you will hurt people, and you as leader are responsible for not failing in that way, and 95% isn’t good enough. The drill instructor persona, with its unidirectional emphasis on committing more, does not come off as the sort of person who would do that, and I think that is part of why people who don’t know you personally find it kinda alarming in this context.
(The military, of course, from which the stereotype originates, deals with this by simply not giving two shits about causing psychological harm, and is fine either severely hurting people to turn them into what it needs or severely hurting them before spitting them out if they are people who are harmed by what it does.)
On the somewhat more object level, the exit plan discussed seems wildly inadequate, and very likely to be a strong barrier that keeps anyone who isn’t one of our exceptional libertines from leaving when they should. This isn’t a normal house share, and it is significantly more important than in a regular house share that people are not prevented from leaving by financial constraints or inability to find a replacement who’s interested. The harsh terms typical of an SF house share are not suitable, I think.
The finding-a-replacement-person part seems especially impractical, given that most people trend towards an average of their friends, so if their friends on one side are DA people, and they’re unsuited to DA, their other friends are probably even more unsuited to DA on average. I would strongly suggest that, if someone leaves and a replacement is not secured, you take only financial recompense, capped at a limited number of months of rent, and that you either permit that recompense to be paid back at a later date after immediate departure, or require it as an upfront deposit, to guarantee safety of exit.
If there are financial costs involved with ensuring exit is readily available, there are enough people who think that this is valuable that it should be possible to secure capital for use in that scenario.
Strong approval of all of this. The short answer is, I’ve spent tens of hours working more closely with the people who will actually be involved, looking at all of the issues you raise here. We’re all aware of things like the potential for emotional abuse and financial entrapment, and are putting possible solutions into place, and I simply didn’t feel the need to lengthen the post by another third to include stuff that’s only half-in-progress and also largely too detailed/irrelevant to outsiders.
(As a single bite-sized example: the “protect yourself” mantra is there to lay the baseline, but thus far we’re also including a) explicit “non-conformity” training in bowing out of activities, coupled with strong norms of socially supporting people who “rule #1” themselves out, and clear ways to resolve anxiety or embarrassment and save face, b) weekly open-ended retrospectives that include room for anonymous feedback as well as public, c) two one-on-ones per week with me in which the number one focus is “how are you, can you be supported in any way,” d) outside check-ins with someone completely unrelated to the house, to provide a fresh perspective and safe outlet, and e) regular Circling and pair debugging so that everyone knows “where everyone is” and has a cheap Schelling point for “I need help with X.”)
This is tangentially related at best, but if you have some high quality non-conformity training I would love to borrow it for my local purposes. I’ve got some, but still feel like it’s the largest weakness in the rationality training I’ve been doing.
Because basically every cult has a 30 second boilerplate that looks exactly like that?
When I say “discuss safety”, I’m looking for a standard of discussion that is above that provided by actual, known-dangerous cults. Cults routinely use exactly the “check-ins” you’re describing, as a way to emotionally manipulate members. And the “group” check-ins turn in to peer pressure. So the only actual safety valve ANYWHERE in there is (D).
You’re proposing starting something that looks like a cult. I’m asking you for evidence that you are not, in fact, a cult leader. Thus far, almost all evidence you’ve provided has been perfectly in line with “you are a cult leader”.
If you feel this is an unfair standard of discussion, then this is probably not the correct community for you.
Also, this is very important: You’re asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you’ve refused to address it.
I’m not interested in entering into a discussion where the standard is “Duncan must overcome an assumption that he’s a cult leader, and bears all the burden of proof.” That’s deeply fucked up, and inappropriate given that I willingly created a multithousand word explanation for transparency and critique, and have positively engaged with all but the bottom 3% of commentary (of which I claim you are firmly a part).
I think you’re flat-out wrong in claiming that “almost all evidence you’ve provided has been perfectly in line with ‘you are a cult leader.’” The whole original post provided all kinds of models and caveats that distinguish it from the (correctly feared and fought-against) standard cult model. You are engaged in confirmation bias and motivated cognition and stereotyping and strawmanning, and you are the one who is failing to rise to the standard of discussion of this community, and I will not back off from saying it however much people might glare at me for it.
I’m not interested in entering into a discussion where the standard is “Duncan must overcome an assumption that he’s a cult leader, and bears all the burden of proof.”
While I agree that a lot of the criticism towards you has been hostile or at least pretty uncharitable, I would only point out that I suspect the default tendency most people have is to automatically reject anything that shows even the most minor outward signs of cultishness, and that these heavy prior beliefs will be difficult to overcome. So, it seems more likely that the standard is “outward signs of cultishness indicate a cult, and cults are really bad” rather than “Duncan is a cult leader.” (This is sort of similar to the criticisms of the rationality community in general).
I think there are a lot of reasons why people have such heavy priors here, and that they aren’t completely unjustified. I myself have them, because I feel that in most cases where I have observed outward signs of cultishness, it turned out these signals were correct in indicating an unhealthy or dangerous situation. I don’t think it’s necessary to go into detail about them because it would take a huge amount of space and we could potentially get into an endless debate about whether these details bear any similarity to the set-up you are proposing.
So it generally seems that your responses to the people who have these very heavy priors against what you are doing are along the lines of “You can’t just come in here with your heavy priors and expect that they alone constitute valid evidence that my proposal is a bad idea”, and in that regard your rebuttal is valid. However, I do personally feel that, when someone does show up in an argument with a very confident prior belief in something, the charitable principle is to assume at least initially that they have a possibly valid chain of evidence and reasoning that led them to that belief.
It could be that there is some social collective knowledge (like a history of shared experiences and reasoning) that led up to this belief, and therefore it is generally expected that we shouldn’t have to back-track through that reasoning chain (therefore allowing us to make confident statements in arguments without producing the evidence). I think that “cults” are a fairly good example of this kind of knowledge—things people almost universally consider bad, except for cult members themselves, so much so that saying otherwise could be considered taboo.
And this is definitely not to claim that every taboo is a justified taboo. It’s also not to say that you haven’t argued well or presented your arguments well. I’m only arguing that it’s going to be an uphill battle against the naysayers, and that to convince them they are wrong would probably require back-tracking through their chain of reasoning that led to their prior belief. In addition, if you find yourself becoming frustrated with them, just keep the above in mind.
For essentially the above reasons, my model predicts that most of the people who decide to participate in this endeavor will be those who trust you and know you very well, and possibly people who know and trust people who know and trust you very well. Secondly, my model also predicts that most of the participants will have done something similar to this already (the military, bootcamps, martial arts dojos, etc.) and successfully made it through them without burning out or getting distressed about the situation. Thus it predicts that people who don’t know you very well or who have never done anything similar to this before are unlikely to participate and are also unlikely to be swayed by the arguments given in favor of it. And even more unfortunately, due to the predicted composition of the participants, we may not be able to learn much about how successful the project will be for people who wouldn’t normally be inclined to participate, and so even if the outcome on the first run is successful, it will still be unlikely to sway those people.
I don’t place much weight on this model right now and I currently expect something like a 30% chance I will need to update it drastically. For example, you might already be receiving a ton of support from people who have never tried this and who don’t know you very well, and that would force me to update right away.
Also, even though I don’t know you personally, I generally feel positively towards the rationality community and feel safe in the knowledge that this whole thing is happening within it, because it means that this project is not too disconnected from the wider community and that you have sufficient dis-incentives from actually becoming a cult-leader.
In short: Don’t let the negativity you are facing become too much of a burden, just keep in mind that it’s possible that many of the most negative critics (besides obvious trolls) are not acting in bad faith, and that it could require more work than is feasible to engage with all of it sufficiently.
Also, this is very important: You’re asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you’ve refused to address it.
I would be vastly reassured if you could stop dodging that one single point. I think it is a very valid point, no matter how unfair the rest of my approach may or may not be.
This post puts me maybe 50% the way to thinking this is a good idea from my previous position.
My largest qualm about this is well-represented by a pattern you seem to show, which starts with saying “Taking care of yourself always comes first, respect yourself”, then getting people to actually act on that in simple, low-risk low-involvement contexts, and assuming that means they’ll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully implying that they’ll use it when push comes to shove. Think about how people act when actual conflicts with large fight/flight/freeze responses interact with self-care norms. I suspect some typical-mind, as my model of you is better at that than most people. I think it depends on what “running on spite” cashes out to. This is kind of a known skull, but I think the proposed solution of check-ins is probably insufficient.
My other big concern is what comments like your reply to Peter here imply about your models and implicit relationship to the project. In this comment, you say you’ll revise something, but I pretty strongly anticipate you still wanting people to do the thing the original wording implied. This seems to defuse criticism in dangerous ways, by giving other people the impression that you’re updating not just the charter, but your aesthetics. Frankly, you don’t seem at all likely to revise your aesthetics. And those, ultimately, determine the true rules.
To summarize the nature of my issues here in a few words: aesthetic intuitions have huge amounts of inertia and can’t be treated like normal policy positions, and people’s self-care abilities (and stress-noticing abilities) cannot be trusted in high-stress environments, even under light to moderate testing.
I’m unlikely to revise the aesthetics, but a) the particular operationalization/expression of those aesthetics, and b) the boundary/balance between both the aesthetics and other people’s agency are fully open to debate, iteration, and consensus.
The whole point is to test out the aesthetic as it exists, to see whether it produces a better life for people, so it’s important not to compromise it until some actual testing has taken place. But imagine e.g. a constructed social norm is approved of, proves to be problematic twice, and has one week left before its originally established “re-evaluate” point—I posit you get much better data out of seeing what happens if you keep the norm firmly in place, see the fallout for a week, watch people grumble and adjust, and then re-evaluate on schedule, than if you just constantly say “NOPE, DIDN’T WORK, SCREW THAT.”
I think there’s a strong instinct to buck norms and update in the moment, and that this is a pendulum swing thing—it’s good that we do this a lot more than we did two decades ago, but it’s bad that we do it as much as we do. There’s value in learning to live with rules that don’t change, or rules that are slightly stupid, and by setting rules firmly in place for e.g. three weeks at a time, I think you capture some of that value, at a low price in terms of loss of the flexibility thing.
Does that seem coherent/a valid response to your qualm?
Another way to say this is that I think the bar for “discard this norm” should be raised one notch higher from (straw description) “it bothered one of us once” to “it bothered several of us several times.” If you keep it past the former, I think you see interesting effects in how people shape themselves around one another, and I think there’s some valuable effect from transferring some sovereignty back from the individual to the social fabric (i.e. everybody’s not just quittable at all times).
Evaluating whether to change a thing at the moment when it is maximally annoying (as would be the case in ad-hoc votes) will have different results from evaluating it at a predetermined time.
I’d suggest evaluating the policy of ‘an approved norm must stay in place until the scheduled vote’ at the first scheduled vote after any scheduled vote in which a norm was dropped that people had wanted to drop mid-cycle but couldn’t because of the policy.
Your suggestion makes sense for an experiment, but misses the whole point of this experiment. This, to me, seems like exactly the unpleasant valley dynamic. “We tried holding ourselves to a standard of ‘we finish the experiments that we start,’ but we got a couple of experiments in and we didn’t like it. Let’s stop.”
“Last fortnight, we canceled [Idea which appeared to be horrible seconds after implementing it], which we continued for an entire fortnight because of our policy. Today we look at all available evidence and must decide if the meta-experiment generates benefits greater than the costs.”
If you have no norm for evaluating that rule explicitly, it doesn’t mean that you won’t evaluate it. Maybe evaluating it every time it applies is excessive, but pretending that you won’t quickly learn to put exit clauses in experiments that are likely to need them ‘notwithstanding any other provision’ is failing to accurately predict.
I think you miss the point that Duncan wants to train the ability to stay outside one’s comfort zone by following through on goals that are set.
A norm being very annoying wouldn’t be a reason to drop it before the scheduled vote. The norm would have to actually create substantial harm.
I read that “this is causing substantial harm” would be insufficient to cancel a norm, but expect that “this is creating a physical hazard” would be enough to reject the norm mid-cycle. The problem is that every edge has edge cases, and if there’s a false negative in a midterm evaluation of danger...
Maybe I’m concluding that the paramilitary aesthetic will be more /thing/ than others are concluding. In my observation, authoritarian, paramilitary-styled groups are much more /thing/ than other people expect them to be. (My own expectations, OTOH, are expected to be accurate because subjectivity.)
Duncan’s rule one is “A Dragon will protect itself”.
I don’t think whether something is physical would be the prime distinction but whether the harm is substantial. If following a norm would likely result in someone losing his job, that isn’t physical harm but substantial harm that likely warrants violating the norm.
“roughly 90 hours a month (~1.5hr/day plus occasional weekend activities)”
My math says that those “occasional” weekend activities would have to total as much as all the daily 1.5-hour blocks combined: roughly 10 additional hours every weekend.
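A minimal sanity check of that arithmetic, assuming a ~30-day month containing roughly 4.3 weekends (these are my own back-of-the-envelope figures, not numbers from the original post):

```latex
\begin{align*}
1.5~\text{hr/day} \times 30~\text{days} &\approx 45~\text{hr/month from the daily blocks}\\
90~\text{hr/month} - 45~\text{hr/month} &= 45~\text{hr/month left for weekend activities}\\
45~\text{hr/month} \div 4.3~\text{weekends/month} &\approx 10.5~\text{hr per weekend}
\end{align*}
```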
“Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of “keep paying until you’ve found your replacement.” ”
It seems counterproductive to have people who have left the experiment living in the same house until they are replaced. Exit terms such as ‘two months notice, or less if a suitable replacement can be found or otherwise agreed’ are less coercive.
21 hours most weeks is 3 hours per day, or 2 hours during each weekday and ~10 for the weekend.
Just making sure that your daily and weekly estimates don’t contain math errors, not saying anything about the sufficiency of those numbers.
Oh, goodness, you’re actually completely right. I just dumbbrained. The goal is 21 hours per week, on average, but with most weeks having more like 12 hours and some having more like 40.
The numbers are somewhat higher in the beginning both a) because it’s easier to relax expectations than to tighten them, and b) I do suspect we want to frontload the togetherness and do more individual stuff after norming and bonding.
I’m curious whether the not for me is “there are different kinds of people and different kinds of brains and different kinds of personalities, and they actually sometimes need different nutrients, and this one is bad for Lumifers,” or whether it’s “there’s something fundamentally broken here that I’m particularly sensitive to but others are more able to tolerate.”
If the latter, I’d love it if you ever manage to put it into words. The goal is to avoid as many of the Stupid Things as possible.
So, I have actually lived in a semi-authoritarian culture, and have a sort of unique experience of seeing high rates of autism function under that culture (and let’s not deny the high rates of autism in this subculture). While this doesn’t sound like “cult” to me, I can think of a couple ways gratuitous harm could occur even if everyone is operating in good faith.
1. Person A harms Person B. Person B realizes that their violation threshold is much lower than they thought when they signed on, and they want to bring it up for discussion, but you and Person A have a much better rapport than you and Person B. And Person B was uniquely attracted to this because they need their self-care to largely be outsourced to a group structure. So they don’t actually have the skills they need to be agenty outside of group expectations, and simply continue to be harmed while being unable to bring it to anyone’s attention until it’s much too late to repair the relationships. I’d like to present myself as someone who has gotten feedback along the lines of “you’re competent and mature” and who still does this sort of thing. It’s not something that’s easily predicted by the person or by people observing them.
2. As mentioned in (1), simply outsourcing functionality to a group structure can leave people helpless when they have to act against the group or act without the group. I don’t see much thought put towards transition plans for people when they leave DAB. Relating back to the childhood and adolescent experiences I claimed gave me insight into this, I have seen a lot of people flail once their version of the role you’re taking here is gone. And they get hurt. This applies even more to people who’ve required extra structure to function, as in the case of autism (and I am one of those autistic kids that flailed). You might say that people are accepting that they will get no transition help once they leave the immersive, structured environment you’re creating, but it seems naive to not at least prep them for the struggles they might have.
2a. Transition is even more important given that this is a necessarily isolating endeavor. The things you’re proposing take a ton of time! People will be making a lot of interpersonal sacrifices to participate, and that will degrade whatever safety net they think they’ll have if they leave.
Personally, I’m trying really really hard to separate criticisms from an aesthetic distaste and the fact that this looks like things I have been actively harmed by, when the people in charge were people who loved me and had my best interests at heart. So, apologies, because this comment is definitely biased by that.
As far as “there are different kinds of people and this is bad for helldalgos” goes, this is bad because I would do something like this if I tried to participate: outsource most of my functionality to group norms, overstate my ability to be transparent enough to function in a high trust environment like this, end up hiding rule violations, feel guilty, become dishonest, and have periodic emotional and mental breakdowns where I burn all of my relationships in the house to the ground. The fact that I behave like this under authoritarian structures might be a problem, but it’s not one that’s fixed all at once by starting an immersive group project where someone is in charge of me. I said a few hours ago to someone else that I would definitely participate if I didn’t have so many roots where I live now and if I could actually stand living in the Bay, but upon reflection, I think not.
This is outstanding, and I appreciate you taking the time to write it up.
I think 1) is an interesting and important dynamic that I had not previously thought about, and I’m curious if you have concrete thoughts as to how to repair it. I think that simply acknowledging it, and committing to accede to opinions-other-than-my-own in evaluating whether it’s going on, is an important first step but only gets like 15% of the way there. Similarly, I think norms of regular retrospectives and Circling-type environments will make it marginally more likely that people can bring this stuff forward and get it addressed, but not entirely because anxiety, status, etc.
My first brainstorm there produces things like anonymous feedback structures, “interrupting” norms where people are free to call things to a halt, requests-to-speak-privately and requests-for-third-party-mediation as strong guaranteed “yesses,” and maybe something like a norm that people can call for discussion or mediation to halt until their ideological Turing test has been passed? e.g. I can’t just brush past your claim of harm; you have an absolute right to stop things from moving forward until you are satisfied that I at least understand the magnitude of your internal experience, even if I disagree with your description of what happened externally.
As for 2), it’s an ongoing conversation; back-and-forth in these comments has already produced a lot of clarity on both non-defecty, structured ways of leaving, and also urgent, ejector-seat methods. (I’ve been a little slow to post concrete details because I claim the person clamoring for them loudest isn’t engaging in good faith, but I’d be happy to PM). My current sense, though, is that these structures, while they should be put in place as soon as possible, should also be discussed with the group, rather than emerging entirely under my models.
Thanks again, particularly for your separating criticisms from aesthetic distaste—I feel you absolutely succeeded at that goal, and I felt that your comment was both a) actually valuable and b) entirely constructive.
I’m not sure how to solve it except to avoid authoritarian structures, which is obviously counterproductive for your project. I would recommend taking any opportunity you have to exhibit through actions that fairness can be expected despite your existing rapport with someone. The things you suggested are helpful but not comprehensive. You could screen for anxiety, but this behavior can be found in people who wouldn’t otherwise consider themselves anxious. And it’s not entirely fueled by anxiety, either.
I like the “interrupting” norm idea; I can see it becoming prone to a weaponized-victimhood sort of abuse but that would be easier to see and stop from the outside than the dynamic it could solve. And if someone is constantly claiming that they’ve been harmed, that’s probably a good sign that DAB isn’t a healthy environment for them anyways.
I would be louder about insisting on plans for various types of leaving if I had more of a stake in this project. If I were planning to participate or someone I cared about was, I would be insisting on it with a similar degree of intensity as the other comments you’re referencing. That’s a major part of what will keep it from being what some people are calling abusive, but that I think belongs under the wider umbrella of “dysfunctional.” You’re right that it should be collaborative, and I don’t expect graceful exit plans to leap fully formed from your skull, but yeah. I endorse that level of intensity in terms of expressing just how important exit plans are.
I should admit that as of a few hours ago I have an ongoing bet about the year-long success of this project, and I took the pessimistic view (for reasons other than abuse or harm). I was also incredibly worried about people getting hurt when I read through the document and some other comments. But, having talked to some other people that know you and reading other things you’ve said, I am definitely less worried about harm through intent or negligence than I was. I am still pretty concerned about harm through ignorance or dynamics inherent in the interaction between the people and the system.
Also, excellent post from Slatestarscratchpad that sums up (I think) something like 85% of the fundamental disagreement:
One thing that’s seemed striking to me in this Dragon Army discussion is the priors on different people’s threat assessments.
I remember when I was younger, I used to want to meet my friends from the Internet, and my parents were horrified, and had all of these objections like “What if they’re pedophiles who befriended you so they could molest you?” or “What if they’re kidnappers who befriended you so they could kidnap you?”, or less lurid possibilities like “What if they’re creepy drug people and they insist on bringing you along to their creepy drug abuse sessions and won’t let you say no?”
And I never developed a good plan that countered their concerns, like “I will bring pepper spray so I can defend myself”. It was more about rolling my eyes and telling them that never happened in real life. I’ve now met hundreds of Internet friends, and I was absolutely right—it’s never happened, and any effort I put into developing a plan would have been effort wasted.
I’m not claiming there are no Internet pedophiles or kidnappers. I’m saying that based on my own Internet communities, and my threat-detection abilities, and the base rate, I was pretty sure it was more in the realm of terrorism (the kind of stuff you hear about on the news) than the realm of car accidents (the stuff that happens to real people and that you must be guarding yourself against at every moment).
This is also how I think of people turning out to be abusers. It’s possible that anyone I date could turn out to be an abuser, just like it’s possible I could be killed by a terrorist, but it’s not something likely enough that I’m going to take strong precautions against it. This is obviously a function of my personal situations, but it’s a real function of my personal situation, which like my Internet-friend-meeting has consistently been confirmed over a bunch of different situations.
(Please don’t give me the “that’s just male privilege!” speech; men and women get abused at roughly similar rates. I do think that probably women are socialized to fear abuse much more, and that’s a big part of this, and probably other axes of marginalization contribute more)
One interesting thing about Tumblr and the SJ-sphere in particular is that because it comes disproportionately from marginalized communities, it has this sort of natural prior of “people often turn out to be abusers, every situation has to be made abuser-proof or else it will be a catastrophe”. I once dated someone I knew on Tumblr who did a weird test on me where (sorry, won’t give more details) they deliberately put me in a situation where I could have abused them to see what I would do. When they told me about this months later, I was pretty offended—did I really seem so potentially-abusive that I had to be specifically cleared by some procedure? And people explained to me that there’s this whole other culture where somebody being an abuser is, if not the norm, at least high enough to worry about with everyone.
I’m not sure what percent of the population is more like me vs. more like my date. But I think there’s a failure mode where someone from a high-trust culture starts what they think is a perfectly reasonable institution, and someone from a low-trust culture says “that’s awful, you didn’t make any effort to guard against abusers!”.
And then the person from the high-trust culture gets angry, because they’re being accused of being a potential abuser, which to them sounds as silly as being accused of being a potential terrorist. If you told your Muslim friend you wouldn’t hang out with him without some safeguards in case he turned out to be a terrorist, my guess is he’d get pretty upset. At the very least it would engender the “stop wasting my time” reaction I had when my parents made me develop anti-pedophile plans before meeting my Internet friends.
And then the person from the low-trust culture gets angry, because the person has just dismissed out of hand (or even gotten angry about) a common-sense attempt to avoid abuse, and who but an abuser would do something like that?
I think it’s interesting that the Dragon Army idea received more positive feedback or constructive criticism on LW (where it was pitched to, and which is probably culturally more similar to me) and more strongly negative feedback on Tumblr (which is more full of marginalized people and SJ-aligned people, and also maybe more full of abusers as judged by the number who get called out all the time).
Yeah, I saw that earlier. In my case, I’m not panicked (or at least, I quickly became not panicked) about rampant abuse, and I also have not been directly exposed to a lot of abuse. My concerns are more about ways I’ve been directly exposed to harm by authoritarianism with good intentions. It’s no coincidence that that is what I was inclined to bring up. Since I’m probably not unique, there’s probably something worth taking seriously in every complaint. But everyone is probably weighting their own concerns the most. So that summarizes to something like:
-abuse is often perpetrated in structures that share significant characteristics with DAB, and you should think about specific plans to avoid abusing people
-there are unique systemic issues with authoritarian structures that facilitate unsustainable dysfunction even when no individual person is deviating much from normal behavior
-sex and romance will cause problems and it might be worth restricting behavior inside the house
+1 to all this. In particular, if my pendulum swing model is correct, the new position of the pendulum (extreme aversion to the risk of abuse) is a result of the pendulum’s previous stuck point being “a lot of people suffering abuse in these kinds of environments.”
I’m proposing swinging back toward the old norm and trying not to cross the ideal point, and I agree it’s a hard problem. Posts like yours are excellent for improving models and reducing risk as a result.
I think it’s okay for people to bet against it; we’re going to have a large betting norm within the house. If nobody bet against, I wouldn’t have anybody to bet with!
Exit plans are now #1 on the “to finalize” list, and have had multiple sets of eyes on them. I strongly endorse the way that LW has focused me toward that part of things, which I was underweighting. However, I also note that some people SHOULD ultimately still be dissatisfied with the exit norms, and therefore choose not to participate. Like, that’s part of the definition of high-stakes, high-commitment—it’s not for everybody, and in fact if everybody were on board with the exit norms being sufficient it … wouldn’t be much of anything?
The key, in my opinion, is being clear clear clear clear clear, and that particular part of it was not clear enough to the potential participants, and it will be now.
Thanks again for your willingness to write things up.
Mostly the former. I am an individualist and dislike collectivism. As befits a proper individualist :-) I also recognize that people are different and what’s good for me is not necessarily good for thee. I can survive and function in collectivist environments like you propose, but I don’t like them and don’t see a good reason for me to be there.
As to the latter, it’s hard to do a pre-mortem on something that’s still in flux. Communes of different kinds—from monasteries to kibbutzim and hippies—have been around for many centuries and clearly some people like them and find them useful. There’s enough history (which I’m not all that familiar with) to learn where the common pitfalls lie and what major trade-offs you would be facing. I can’t recommend a book, but I’m sure there are a few.
Generally speaking, I would expect the most likely mode of failure to be the way power dynamics develop. Authority and power are complicated and deadly—tightly-knit communities can go very bad quickly this way (consult your favourite cult group horror story). Adding sex to the mix generally makes things… more volatile. The rationalist community doesn’t strike me as being particularly capable of managing power issues.
Emotionally, the whole proposal strikes me as cultlike in a bad way. I can’t defend that as a factual claim since I only skimmed the post (precisely because it is not relevant to me), but I am pretty sure that living in such a situation even for a short while would make me feel very, very bad.
Same question posed to you—to the best of your ability to tell, is this a bug in the system, a bug in you personally, or a simple instance of diff’rent strokes for diff’rent folks? And if a bug in the system, can you point straight at it?
Speaking entirely for myself: You are proposing a dangerous venture. The path is littered with skulls. Despite this, you have not provided any concrete discussion of safety. When people have brought the subject up, you’ve deflected.
I suspect you haven’t actually poked around in all of the comments—I can point to multiple places where I’ve provided concrete discussion of safety, if you spend five minutes looking and can’t find it.
The biggest concern/red flag for me is one aspect of the authoritarian nature of the project. I would be perfectly fine with fully outsourcing decisions (giving higher intellectual status) but not with being a subordinate in full generality. What I’m trying to point at is the difference between “What should I do? He said to do ‘x’ and I trust his expertise, so this is my best option and I’m going to make myself do it even if it’s unpleasant” and someone forcing me to do the thing.
Which of the two would be my intuitive reaction depends mostly on your character/attitude, and this is something that is completely missing from the discussion so far. Hopefully that is because people know you and are sure it wouldn’t be a problem, but your comments here only show competence; they don’t exclude arrogance, or enjoying power too much and beginning to boss people around. I found the comparisons to military bootcamps and the talk of tyrants concerning, as this somewhat paints the image of “someone shouting at people to do stuff,” which I expect to have severe negative effects and build up resentment quickly. In other words, it seems to me that constraining your image strictly to that of the one who decides what is to be done, as opposed to someone who also enforces the execution, would reduce the risk of the experiment failing. Enforcing by regulating incentives should be fine, as it won’t speak to System 1 and provoke the low-level “Who are you to tell me what to do” reaction.
Maybe this is an obvious point, that having a nice and respectful leader is better than a powerful tyrant, but I’m not sure how far I can generalize from my own preferences, so I decided to share anyway. Apologies if this doesn’t make sense or wastes your time; I’m new to posting here.
This is a clear and cogent point, and thanks for posting it.
I suspect the authoritarian stuff is a necessary catalyst, to get the group cohered together and working, and after an initial period it becomes less and less useful. For instance, I think a major part of the thing is getting everyone to be in the same room at the same times, and that happens fastest and becomes ingrained easiest if someone’s just dictating the time (after reasonably accounting for everyone’s constraints and preferences).
But once everyone’s all in the same room, I don’t think it makes too much sense for an authoritarian to dictate what happens. Like, I think the useful thing is something along the lines of “well, if you all can’t decide where we’re going to eat, then we’re getting pizza”—my plan is to set a minimum bar of “this is a useful thing to be doing,” and to demand that we do at least that, but to in no way restrict people from coming up with something better/more effective/more worthwhile.
So, we start off by having morning exercise and weekly dinner, and then over time, people who are chafing at the morning exercise get to say, “Hey, you know what would be a better use of this slot of togetherness that is taken as a given? Doing X or Y or Z.” The authoritarianism is there to support the scaffold, but is not there to say what grows on it, except in the most general sense of “let’s try to improve” and “let’s lean toward important stuff rather than trivial.”
I also note that I’m somewhat overemphasizing the authoritarian bit, because I expect it’s the most difficult piece to swallow, and I want to really really really really really make sure that I don’t undersell how strict things will end up being. It seems way worse to lose a couple of people who would’ve liked it because I made it sound too restrictive than to include people who are going to be trapped and unhappy because I didn’t give them enough warning.
I would like everyone posting criticism, especially heated criticism, to keep very firmly in mind that Duncan did not have to write this. Whatever your opinion of him, at least make sure you’ve factored in the evidence that he wrote this whole, weird thing, complete with references to Ender’s Game, Fight Club, etc. instead of writing either 1) nothing or 2) something much more reassuring.
There are critics who think Duncan is incompetent and overconfident, and about this hypothesis I can say at least that it is consistent with Duncan having written this post. Then there are critics who think Duncan is, I dunno, evil or power-hungry or something, and I think those people are mostly failing to see what is in front of them.
The whole point of him posting this was to acknowledge that he is doing something dangerous, and that we have a responsibility to speak up. To quote him exactly: “good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked”.
His refusal to address basic safety concerns simply because he was put off by my tone is very strong evidence to me that people are indeed being hoodwinked. I don’t care if the danger to them is because he’s incompetent, overconfident, evil, or power-hungry. I care that people might get hurt.
(I would actually favor the hypothesis that he is incompetent/overconfident. Evil people have more sensible targets to go after.)
I think you’re confusing “refusal to address basic safety concerns to handoflixue directly” with “refusal to address basic safety concerns at all.” I deny your right to judge and interrogate me, because of your failure to exhibit clear thinking and good discourse. I’ve engaged with those very same points in many other comment threads, though—there are literally only three people in this entire thread for whom I’ve determined that the EV of digging into their perspective is not worth it.
I note that there’s a bet waiting in the wings to lend your harsh words credibility. You could charitably offer to donate your winnings to salving the pain of the people you claim to care about.
I think you’re dramatically underestimating how your responses are being read by third parties. Your style of response to handoflixue specifically has made at least one person I’ve spoken to decide to avoid giving you well thought out criticism out of fear of you yelling at them and being very confrontational.
If you stumble upon a schoolyard fight, and immediately assume that the person you see punching is fundamentally violent and has high odds of attacking you, I think you’re skipping an important step of checking to see whether they’re the bully or whether they’re defending themselves. Most of us have had the experience (either direct or vicarious) of being absolutely infuriated by the people who try to pretend like there’s a perfect symmetry between the punch thrown by the aggressor and the punch thrown by the defender—it’s not hypocritical to both support “not starting fights” and “being willing to end them.”
I am aware of the risk of losing people around the edges, yeah. But I can’t do anything except point to the scores and scores of other responses (it might be over a hundred by now) in which I’ve thanked people for critique, responded in depth, updated visibly in real time, etc.
People get anxious, and maybe they disengage. But anyone who’s not going to be openly and unjustifiably uncharitable has nothing to fear from me in particular. I’m not going to not stand up for myself against bullies and trolls, even if it costs me some quiet whispers that would’ve contained good content.
Everything is tradeoffs. To put it another way: The person who’s refusing to give me their well-thought-out criticism is either a) unable because of costs/time constraints to look further and see that my claim they have nothing to fear is credible, or b) themselves jumping to unfounded conclusions based on less data than they have available to them.
If a), then fair play—this is nobody’s first priority except mine, and I don’t feel entitled to everyone’s opinions; it’s perfectly reasonable to have a policy of not spending a lot of time if your first impression is strongly negative.
If b), and they have time to look but are choosing not to and running with a strawman without questioning their own conclusions, then … well … it probably wouldn’t have gone well anyway.
If c), they’ve followed the whole chain in chronological order and they still think I’m at fault, then that just means we have strongly differing priors on right and wrong/acceptable and unacceptable, and once you get down to values on that level, I don’t know how well we’d be able to pass one another’s ITTs anyway.
To the best of my ability to judge, handoflixue’s earlier comments (e.g. above and below this comment) were absolutely dripping with assumption-of-evil-intent, outright insults, unfounded leaps to Harsh Judgments of my fundamental character, poor logic, fallacious smears, and so on and so forth. They dropped into the thread after there were already over a hundred comments, including many where I’d demonstrated credible evidence of good faith and willingness to change my mind, which they completely ignored. They continued to ask loaded, unfair questions and set up strawmen over and over and over, with at least a dozen posts containing both deontological hostility and bad epistemics. They then offered a single token apology conditional on “if” their tone had been too harsh (rather than just saying sorry, I crossed the line, as I myself have done in these comments at least twice), and dropped the overtly hostile tone while continuing to subtly insinuate that I’m a bad actor in every post.
Given that my stated role model is Ender Wiggin, if somebody thinks handoflixue’s approach is okay, or thinks that I shouldn’t have defended myself, then it shouldn’t be surprising that I claim, as my personal opinion, that their moral compass is drastically askew. There’s a different question about whether I’ve marginally erred, e.g. by being 15% too defensive, but that shouldn’t trigger someone who’s not going to be hostile in the first place to be afraid.
handoflixue’s earlier comments were absolutely dripping with assumption-of-evil-intent, outright insults, unfounded leaps to harsh judgments of my fundamental character, poor logic, fallacious smears, and so on and so forth. They dropped into the thread after there were already over a hundred comments, including many where I’d demonstrated credible evidence of good faith and willingness to change my mind, which they completely ignored. They continued to ask loaded, unfair questions and set up strawmen over and over and over, with at least a dozen posts containing both deontological hostility and bad epistemics.
Fine. Reply to my OP with links to where you addressed other people with those concerns. Stop wasting time blustering and insulting me—either you’re willing to commit publicly to safety protocols, or you’re a danger to the community.
If nothing else, the precedent of letting anyone recruit for their cult as long as they write a couple thousand words and paint it up in geek aesthetics is one I think actively harms the community.
But, you know what? I’m not the only one shouting “THIS IS DANGEROUS. PLEASE FOR THE LOVE OF GOD RECONSIDER WHAT YOU’RE DOING.” Go find one of them, and actually hold a conversation with someone who thinks this is a bad idea.
I just desperately want you to pause and seriously consider that you might be wrong. I don’t give a shit if you engage with me.
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up sinking the project.
One particularly dangerous failure mode is that people may lose the capacity to recognize when the situation is toxic, unhealthy or counter-productive. The sunk cost fallacy is a powerful thing, as are the effects of strong emotional attachment. You may want to consider having a mandatory short vacation period from the house. This will allow people to take some space to get perspective on the house.
You also may want to mandate external social supports such as therapy, external friend groups, etc.
I find this project very interesting! I can imagine an alternate-universe version of me being super excited to join it. I think it’s even possible that the this-universe version of me could benefit a lot from joining it. (I would see most of the benefit for myself in solving Problem 2, I think.)
But… I think there is not more than an 80% chance I would make it 6 months in such an environment without hitting the eject button to preserve my own sense of (physical or psychological) safety. (That is, a chance of at least 20% that I would hit the eject button.) I do think it’s great that Code of Conduct rule #1 encourages people to protect their own safety even at the cost of leaving the project. (Although for people of limited economic means this might be hard to execute, given the need to find a replacement, so probably “has the means to deal with needing to leave if the project doesn’t work out” is a screening factor.)
It’s possible this is just a fact about me, more than about the project. But I don’t have the sense that a lot of other members of the rationalosphere would well tolerate, say, an actual military boot camp environment, which feels a lot like the direction this is aimed. It’s possible I’m misunderstanding the degree of control you / the project expects to exert over the lives of the participants. But I know that I got happier when I adopted the rule that adulthood means never letting anybody force me to do anything that feels unsafe, even if refusing has significant costs. (For comparison, my largest concern about going to a CFAR workshop was that being subjected to a “comfort zone expansion” exercise, while in remote woods, with complete strangers, on a sunk cost of thousands of dollars, would be a high-stakes problem if I didn’t like how it went. Pete Michaud correctly disabused me of this concern during the interview.) Again, perhaps this just means that Dragon Army is not for me. But I’m curious what you think about it. It seems hard to imagine I could go 6 months of committing to try to perfectly execute all the stated rules plus one experimental norm per week without ending up in at least one situation where following the rules felt unsafe.
Separately, I’m interested in whether you think Problem 4 could be tackled separately from an all-consuming project like Dragon Army. I feel like I have seen the “desperately hoping nobody will bail after the third meeting” thing a lot before, but usually the context is “a bunch of people vaguely want to get a thing done but nobody has really committed to it”, in which context bailing after the third meeting is not violating any norms or agreements. Without making any new norms, one already has the option of actually asking for explicit commitments, rather than just seeing who shows up, and I think this option is not used often enough. I guess the failure mode of trying to solve Problem 4 alone is, if you ask for explicit commitments, you discover that people just won’t give them in the first place. Dragon Army seems like a big hammer to solve this but maybe it’s the only way?
I think the main issue here is culture. Like, I agree with you that I think most members of the rationalsphere wouldn’t do well in a military bootcamp, and I think this suggests a failing of the rationalist community—a pendulum that swung too far, and has weakened people in a way that’s probably better than the previous/alternative weakness, but still isn’t great and shouldn’t be lauded. I, at least, would do fine in a military bootcamp. So, I suspect, would the rationalists I actually admire (Nate S, Anna S, Eli T, Alex R, etc). I suspect Eliezer wouldn’t join a military bootcamp, but conditional on him having chosen to do so, I suspect he’d do quite well, also. There’s something in there about being able to draw on a bank of strength/go negative temporarily/have meta-level trust that you can pull through/not confuse pain with damage/not be cut off from the whole hemisphere of strategies that require some amount of battering.
It makes sense to me that our community’s allergic to it—many people entered into such contexts before they were ready, or with too little information, or under circumstances where the damage was real and extreme. But I think “AVOID AT ALL COSTS! RED FLAG! DEONTOLOGICAL REJECTION!” is the wrong lesson to take from it, and I think our community is closer to that than it is to a healthy, carefully considered balance.
Similarly, I think the people-being-unreliable thing is a bullshit side effect/artifact of people correctly identifying flexibility and sensitivity-to-fluctuating-motivation as things worth prioritizing, but incorrectly weighting the actual costs of making them the TOP priorities. I think the current state of the rationalist community is one that fetishizes freedom of movement and sacrifices all sorts of long-term, increasing-marginal-returns sorts of gains, and that a few years from now, the pendulum will swing again and people will be doing it less wrong and will be slightly embarrassed about this phase.
(I’m quite emphatic about this one. Of all the things rationalists do, this one smacks the most of a sort of self-serving, short-sighted immaturity, the exact reason why we have the phrase “letting the perfect be the enemy of the good.”)
I do think Problem 4 can probably be solved incrementally/with a smaller intervention, but when I was considering founding a house, one of my thoughts was “Okay, good—in addition to all the other reasons to do this, it’ll give me a context to really turn a bazooka on that one pet peeve.”
I suspect Eliezer wouldn’t join a military bootcamp, but conditional on him having chosen to do so, I suspect he’d do quite well, also.
Eliezer wasn’t able to complete high school, for what I suspect are related reasons. (The sleep thing may have contributed, but I think it was overdetermined.)
I think I would have been extremely miserable if I had gone through boot camp at 18; I think I would have been able to bear going through it by ~25.
I think a relatively tight analogy can be made between attitudes towards the authoritarianism of a military bootcamp and attitudes towards romantic relationships. Like, if you go through a string of really bad relationships with partners who consistently abused you, you might update that there’s something inherently abusive about relationships and that you just shouldn’t be in one again, ever, because your autonomy is too important. On the other hand there is such a thing as a healthy relationship, even a healthy relationship in which you have less than perfect autonomy because you’ve made some commitments that you’re following through on, and you might be lucky enough to find yourself in one in the future if you’re open to the possibility and search carefully for someone to commit to.
I think I disagree that the pendulum will swing back in the future though. The rationality community being the way it is now, prioritizing flexibility the way it does now, probably has the property that it attracts people who are prioritizing flexibility and turns off people who are looking for reliability. So if anything I expect the problem to get worse over time unless someone makes a deliberate effort to attract looking-for-reliability sorts of people—hopefully Dragon Army can do this.
a relatively tight analogy can be made between attitudes towards the authoritarianism of a military bootcamp and attitudes towards romantic relationships
I don’t get the analogy. So, if you go through a string of really bad military bootcamps? But you need to stay open to the possibility of a really good bootcamp that you can and should commit to?
Yes, but using “military bootcamp” as a symbol of broader kinds of authorities you could submit to, e.g. schools, employers, governments, and keeping in mind that people are learning about how authorities work based on others’ experiences and not just their own.
As someone who’s done the whole military thing (am I alone?), I agree with your view that most members of the rationalsphere would struggle immensely in bootcamp, both in terms of physicality and culture (I’m referring mostly to the Army and Marines here, which focus on actual combat training, vs. the Air Force and Navy that don’t).
I totally agree that you would have 0 problems (other than patience with the stupid parts) as you have a high degree of physical ability, emotional resilience, and general cognitive ability. You would very likely excel. I could say the same of Val and Pete, and I’m sure Eli would do well (I don’t know the others you listed well enough to venture a guess).
I have never met Eliezer. However, I suspect he would struggle a great deal and be unlikely to succeed, from what I’ve read and been told. I can’t imagine Eliezer playing, say, football well either. My model of him just says he’s simply not optimized for that kind of environment, where his intellectual strengths would be limited and his weaknesses amplified. It’s just not a remotely optimal environment for someone who is (according to my model of him) built like a race car: extreme performance within strict parameters (flat track, maintenance, etc.).
And that’s okay. The military enlisted system at least typically focuses on taking both physical and intellectual generalists and training them to perform a specific job. It’s all about the averages. The cockpit is decidedly not adjusted for individual needs or specialized performance for the vast majority of military personnel.
I do hope you’re at least somewhat right about the long-term, increasing-marginal-returns sorts of gains, since that’s my current strategy for achieving high impact on important matters.
Similarly, I think the people-being-unreliable thing is a bullshit side effect
You may wish to consider that this community has a very high frequency of disabilities which render one non-consensually unreliable.
You may wish to consider that your stance is especially insulting towards those members of our community.
You may wish to reconsider making uncharitable comments about those members of our community. In case it is unclear: “this one smacks the most of a sort of self-serving, short-sighted immaturity” is not a charitable statement.
Oh, I missed this one in the shuffle. Note that you chose to quote less than half a sentence, because if you quoted the whole sentence you’d have a heck of a time setting up the strawman you wanted to knock down.
Hi Duncan, I’m a relative newcomer (this is my first LW thread, though I’ve participated in rationalsphere discussions elsewhere), so this may not carry much weight, but I want to somewhat agree with handoflixue here.
One of my stronger reactions to your post is “this is an impossible set of expectations for me and a lot of others”. Which is fine, obviously you can have expectations that some people can’t live up to, and of course it is very good that you are making these expectations very clear.
But I sort of get the sense that you are a person who is fundamentally capable of being reliable and regularly making good life choices pretty easily, and that you sort of don’t get that for a lot of people these things are really hard even if they understand what the right choice is and are legitimately trying their best to do that.
This is based only partly on your post and somewhat more on a mini-talk which (IIRC) you gave at a CFAR community night where you posed the question “does it even make sense for people to seek out advanced rationality techniques such as the ones discussed here when they’re not displaying basic rationality such as eating a reasonable diet and sleeping enough?”. Even then, this question struck me as dangerously wrong-headed, and now that you are proposing to be in charge of people, this seems to take on more importance.
Advanced rationality techniques, at least when applied to one’s self-conception and life choices, are basically therapy. “Failures of basic rationality” are often better described as “mental health issues”. Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I’ve seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.
I don’t actually know you, so my information is pretty incomplete, but my impression is that if someone fails to act in a way you (and they!) think is reasonable, you’re likely to become baffled and frustrated and try to deal with the problem by imposing stricter expectations & consequences. This might work for some people, but for many, it will just make them miserable and less productive because they will be angry at themselves for failing at things that they “should” be able to do.
I think it’s likely that your way of dealing with this is basically to screen out the people who are likely to react poorly to your approach, in addition to causing others like me to self-select out. That’s fine, I guess, though I would still be on the lookout for this sort of issue as a possible failure mode, and maybe also just demonstrate more compassionate awareness that things like reliability are actually almost impossible for some people, and maybe not attribute all of this to having the wrong culture or mindset.
(My general opinion of your project is “this sounds scary and I want to stay very far away from it, and this makes me somewhat wary of the people involved, and I wouldn’t recommend participation to people I know, at the same time I am really curious about how this will go so selfishly I’m a little glad it’s happening so I can gain information from it”.)
Thanks for the long comment. I really appreciate your candor and perspective—I do think I get the fact that other minds don’t work like mine, but you’re right in sniffing out that a lot of that knowledge is top-down and parts of me are still instinctively typical-minding a lot. I work hard to remind myself, e.g. I have triggers on certain words or feelings that cause me to review memories of specific times when my assumptions about what was going on in someone else’s head were blindingly false.
I think I generally agree with you that there’s a large overlap between rationality and therapy, and I’m intrigued by the hypothesis re: mentally ill rationalists; it seems to be pretty plausible.
Here’s my actual plan if someone fails to act in a way that seems reasonable. Note that this is the “everything but the kitchen sink” option, including aaaaaallll of the steps one might take, and that for smaller disagreements, this can be done as a speed run or stepwise.
Determine whether to follow up in the moment or later based on the needs of the activity; determine whether to follow up in private, in group, or via delegation based on the apparent needs of the person.
Start by asking. What did they think was going on? What were their thought processes? Assume from the outset that people act in consistent, coherent ways, and that basically everyone is trying to make the world a better place.
Try to pass their ideological Turing test. In other words, try to reflect back to them the priorities they were holding and the goals they were attempting to achieve, and keep falsifying my hypotheses until they give a clear endorsement of my summary.
Ask them to model me, in return (note: one important subthread of how the house will run is a check-in along the lines of “is Duncan clear, consistent, and model-able?”). See if they can predict what my priorities were, and if they have a sense of what I’m reacting to. Do not make this some kind of sick high-pressure quiz dynamic … if they shrug and say “dunno,” I’ll just explain.
Try to lay out, from as birds’-eye as possible a perspective, the conflicting goalsets. Point at the causal chains that brought them into conflict, and highlight my model of where things are broken. Ask them if they have a different model/let them update my picture with a better sense.
Form a new plan for the future; explicitly discuss weighing the goals against one another, and how they ought to stack up. Possibly include other people in the discussion at this point, particularly if the defection seemed to have externalities.
Assume that plan has failed; come up with a plausible explanation for why, and try to patch the one or two most obvious holes. Form an intention going forward.
Check whether reparations need to be made. Hopefully, there’s a standard formula (as in the pushups example). If not, do a similar process of attempting to converge on a good face-saving/balance-restoring action. If there isn’t a clear satisfactory solution, default to a compromise and schedule a future check-in.
Through all of this, run things by others if either party thinks that’d be beneficial. Also consider things like anxiety/introversion, and have the conversation at a deliberate time rather than forcing it if it’s not urgent.
So yeah, in a sense, this might result in stricter expectations and consequences, but not in a blind, top-down way. In situations where there needs to be an immediate response, I’ll take an action/give an order and expect it to work, but I’ll want to revisit any such quick authoritarian moves after the fact, to explain my thinking and confirm absence of undue harm (and apologize/make amends of my own if necessary).
Overall, though, the idea is to build a high trust environment, and trust goes both ways and is easier to lose than to gain. The thing I want people in the house to actually be justified in believing is “Duncan always has good intentions and is making decisions from some kind of a model. He’ll explain when he can, and if he doesn’t, it’s because he has another model saying why he can’t, and he’ll instead explain both models once the thing is over.”
The idea being that I prove trustworthiness in situations 1-8, and people grant me a little leeway in situation 9. But 1-8 definitely have to come first.
Advanced rationality techniques, at least when applied to one’s self-conception and life choices, are basically therapy. “Failures of basic rationality” are often better described as “mental health issues”. Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I’ve seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.
Um. Quick reply before I go further—I’m really really confident that the community talk night thing you’re remembering either wasn’t me or that the quote doesn’t resemble what I said. I strongly agree with you that that’s a dangerously wrong-headed way to try carving up the world.
That’s not because he didn’t do the exercise. Bootcamp doesn’t care if you lose weight; they only care if you execute the weight loss program. If you don’t meet any of the body proportion standards, you just have to perform extra exercise.
Bootcamp (i.e. the military) cares very much about both losing sufficient weight to meet the standard as well as the ability to perform at a basic level of physical fitness. The different U.S. military services have differing standards, but the general requirements are all comparable.
In an environment where the food supply is tightly controlled and there is constant movement, people tend to lose a lot of weight quite rapidly.
However, if you don’t meet the body proportion standards after a certain time, you will be separated from the military.
Part of the program is separating people who don’t lose weight. That doesn’t mean they care about the height/weight, only that the next box is ‘process for separation’.
There’s not a lot other than adherence to procedure that most of the military actually does care about.
I’m not sure if I’m totally missing your point, or if you’re making a point that’s a distinction without a difference.
In Army basic training, there are two standards one must meet:
-height/weight, adjusted for age and gender
-PT test, which consists of push-ups, sit-ups, and a 2-mile run, with scoring adjusted for age and gender
Failing either one will get you chaptered out of the Army within certain timeframes. There is a lot of fine print for specific situations (basic training has some extra cushion), but that’s the ground truth. These same principles apply to the military at large, but the standards and fine print differ.
I don’t know how that squares with: “That doesn’t mean they care about the height/weight.”
In an organization so devoted to adherence to procedure, what the procedures are set up to be is often a pretty strong indicator of what the organization cares about...
No individual cares about anything other than the procedures. Thus, the organization as a whole cares only about the procedures. With the procedures that currently exist, the behavior looks similar to caring about fitness, but there is also a procedure to change procedure.
If the organization cared about fitness, the procedure to change the height/weight standards would be based on fitness. As it is, it is more based on politics. Therefore I conclude that the Army cares more about politics and procedures than fitness, and any behavior that looks like caring about fitness is incidental to their actual values.
With respect to power dynamics point one and two, there is another person known to the community who is perhaps more qualified and already running something which is similar in several respects—Geoff Anders of Leverage Research. So I don’t think this is precisely the only group making an attempt to hit this sort of thing, though I still find it novel and interesting.
(disclaimer: I was at the test weekend for this house and am likely to participate)
Yeah, Geoff and Leverage have a lot I would love to look at and emulate, but I haven’t been running under the assumption that I’d just … be allowed to. I’m beginning some conversations that are exciting and promising.
That being said, I do think that the overall goals are somewhat different. Leverage (as far as I can tell) is building a permanent superteam to actually do stuff. I think Dragon Army is building a temporary superteam that will do stuff in the short and medium term, but is more focused on individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies. My model of Leverage is looking for the right thing to exploit on, whereas I’m looking for how to create competent people, and while there’s a lot of overlap those are not the same Polaris.
I similarly think Geoff is highly competent and certainly outstrips me in some ways (and possibly is net more qualified), but I’d posit I outstrip him in a roughly similar number of ways, and that he’s better matched for what Leverage is doing and I’m better matched for what DA is doing (sort of tautologically, since we’re each carving out mountains the way we think makes the most sense). I think the best of all would be if Geoff and I end up in positions of mutual respect and are able to swap models and resources, but I acknowledge he’s a good five years my senior and has no reason to treat me as an equal yet.
EDIT: Also note that Geoff is disqualified by virtue of already being busy, and as for “just join Leverage,” well … they’ve never really expressed interest in me up to this point, so I figured I wouldn’t bother them unless I was no longer employed day-to-day.
I dunno about “key.” Open-ended brainstorm, keeping in mind that my models of Leverage are vague and straw and NO insult is intended if I get things wrong …
Leverage advantages—provides a discriminator that lets you tell more accurately who fits and who doesn’t; sounds better if your goal is to accrue funding; is better if your goal is to return money to an investor; provides your participants with a strong mission that they can write in their hearts rather than a vague one that might be hard to grasp; gives you a narrowing principle that helps you discard certain kinds of growth as irrelevant/boondoggle with reasonably high confidence
Leverage disadvantages—seems (from my limited outside vantage point) to require people to more closely conform to the shape of the leader/take on a singular mission rather than allowing for different colors in the spectrum; seems to fall prey to the intellectual property and get-there-first problems that encourage isolation from the broader network of allies; (maybe) requires you to somewhat distort what you’re doing to please investors; (maybe) requires you to strike the balance between deciding-too-soon and being-decision-paralyzed because you have to cohere around a smaller number of goals at a time
Dragon Army advantages—adheres (slightly) more closely to what the average rationalist wants and thus opens you up to a (slightly) wider range of participants; causes members to gain leadership and facilitation skills of necessity rather than accidentally/luckily; (somewhat more) forces people to confront the question “what do you really want?” instead of giving them an easy out by handing them a distracting answer; doesn’t require as much funding; biases toward action rather than running the risk of spiraling up into the meta
Dragon Army disadvantages—more vulnerable to strawmanning and skepticism because it is less coherent and clear; much more vulnerable to confusion or failure if I get hit by a bus, because the models all live in my head and aren’t yet interactable; runs the risk of losing people who are impatient and feel like they’re lost in triviality; is less viscerally rewarding (jack of all tradesing, that is) than getting gold medals as a master; needs a longer runway/longer start time because it’s more explicitly about culture building and less about objective checkpoints that you can meet on the fly
incomplete
Note that I CANNOT STRESS ENOUGH that my models of Leverage are VAGUE AND PROBABLY WRONG and also note that I’m sleep-deprived and I am aware that this may not really answer your question.
Oh, also: AFAIK, Leverage is actually fairly low on precommitment, i.e. if someone were to want everyone to get together in the same room at the same time on a regular basis, they would have to go around and win the argument something like forty times, and at any time someone who’d previously been convinced could just say, “actually, never mind, I changed my mind and I have better things to do again,” and there aren’t any … initially consensual, eventually coercive? … structures in place.
Nothing, in short, to get people across the unpleasant valley except their own epistemics and willpower … no firm, unyielding scaffold that can be built such that others can rely on it. So, Leverage has the advantage of not having the failures of such a system (e.g. people getting trapped and wasting time), and Dragon Army has the advantages of having the benefits of such a system (Actual Reliability that doesn’t require inordinate upfront costs, the ability to develop an ecology of affordances over time upon a Schelling point of togetherness).
An excellent post from Slatestarscratchpad that sums up (I think) something like 85% of the fundamental disagreement that’s fueling the more heated clashes:
One thing that’s seemed striking to me in this Dragon Army discussion is the priors on different people’s threat assessments.
I remember when I was younger, I used to want to meet my friends from the Internet, and my parents were horrified, and had all of these objections like “What if they’re pedophiles who befriended you so they could molest you?” or “What if they’re kidnappers who befriended you so they could kidnap you?”, or less lurid possibilities like “What if they’re creepy drug people and they insist on bringing you along to their creepy drug abuse sessions and won’t let you say no?”
And I never developed a good plan that countered their concerns, like “I will bring pepper spray so I can defend myself”. It was more about rolling my eyes and telling them that never happened in real life. I’ve now met hundreds of Internet friends, and I was absolutely right—it’s never happened, and any effort I put into developing a plan would have been effort wasted.
I’m not claiming there are no Internet pedophiles or kidnappers. I’m saying that based on my own Internet communities, and my threat-detection abilities, and the base rate, I was pretty sure it was more in the realm of terrorism (the kind of stuff you hear about on the news) than the realm of car accidents (the stuff that happens to real people and that you must be guarding yourself against at every moment).
This is also how I think of people turning out to be abusers. It’s possible that anyone I date could turn out to be an abuser, just like it’s possible I could be killed by a terrorist, but it’s not something likely enough that I’m going to take strong precautions against it. This is obviously a function of my personal situation, but it’s a real function of my personal situation, which like my Internet-friend-meeting has consistently been confirmed over a bunch of different situations.
(Please don’t give me the “that’s just male privilege!” speech; men and women get abused at roughly similar rates. I do think that probably women are socialized to fear abuse much more, and that’s a big part of this, and probably other axes of marginalization contribute more.)
One interesting thing about Tumblr and the SJ-sphere in particular is that because it comes disproportionately from marginalized communities, it has this sort of natural prior of “people often turn out to be abusers, every situation has to be made abuser-proof or else it will be a catastrophe”. I once dated someone I knew on Tumblr who did a weird test on me where (sorry, won’t give more details) they deliberately put me in a situation where I could have abused them to see what I would do. When they told me about this months later, I was pretty offended—did I really seem so potentially-abusive that I had to be specifically cleared by some procedure? And people explained to me that there’s this whole other culture where somebody being an abuser is, if not the norm, at least high enough to worry about with everyone.
I’m not sure what percent of the population is more like me vs. more like my date. But I think there’s a failure mode where someone from a high-trust culture starts what they think is a perfectly reasonable institution, and someone from a low-trust culture says “that’s awful, you didn’t make any effort to guard against abusers!”.
And then the person from the high-trust culture gets angry, because they’re being accused of being a potential abuser, which to them sounds as silly as being accused of being a potential terrorist. If you told your Muslim friend you wouldn’t hang out with him without some safeguards in case he turned out to be a terrorist, my guess is he’d get pretty upset. At the very least it would engender the “stop wasting my time” reaction I had when my parents made me develop anti-pedophile plans before meeting my Internet friends.
And then the person from the low-trust culture gets angry, because the person has just dismissed out of hand (or even gotten angry about) a common-sense attempt to avoid abuse, and who but an abuser would do something like that?
I think it’s interesting that the Dragon Army idea received more positive feedback or constructive criticism on LW (where it was pitched to, and which is probably culturally more similar to me) and more strongly negative feedback on Tumblr (which is more full of marginalized people and SJ-aligned people, and also maybe more full of abusers as judged by the number who get called out all the time).
I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is “does this behavior seem normal or like a predictive red flag?”. In those cases, your lived experience directly influences your perception. Someone’s actions can seem perfectly fine to most people. But if some others experience spooky hair-raising flashes of their questionably abusive father or a bad ex, that’s evidence. The people who didn’t think anything was weird brush off the others as oversensitive, risk averse, or paranoid. Then those raising alarms think of everyone else as callous, imperceptive, or malicious. It’s not just people who don’t alieve the correct base rates. Certainly those people exist, though they’re much more plentiful on Tumblr than in person or on LW. It’s very non-obvious whether a strong reaction is correct.
Neither side can truly accept the other’s arguments. It’s a bad situation when both sides consider the other’s reasoning compromised beyond repair. That brings politics and accusations of bad faith on all sides. But there is a fact of the matter, and the truth is actually unclear. Anyone thinking at enough of a distance from the issue should have honest uncertainty. I suspect you’re particularly prone to refusing to let the conflicting experience of others be seen by your deep internal world-models, to strongly underestimating the validity and reliability of that type of evidence. That would cause what you say to be parsed as bad faith, which other people then respond to in kind. That would cause a positive feedback loop where your prior shifts even further away from them having useful things to say. Then you’d end up a frog boiled in a pot of drama nobody else is experiencing. I’m not sure this is what’s happening, but it looks plausible.
First, you seem to think that “Getting Useful Things Done” and “Be 99.99% Reliable” heavily correlate. The military is infamous for bloated budgets, coordination issues, and high rates of sexual abuse and suicide. High-pressure startups largely fail, and are well known for burning people out. There is a very obvious failure state to this sort of rigid, high pressure environment and… you seem unaware of it.
Second, you seem really unaware of alternate organizational systems that actually DO get things done. The open source community is largely a loose model of “80% reliable” components, and yet great things get built by these collaborations. Rome wasn’t built in a day, and neither was Linux.
“we often convince ourselves that 90% or 99% is good enough, when in fact what’s needed is something like 99.99%.”
Third, and most bluntly: I don’t think you have the slightest knowledge of Fault Tolerant Design, or how to handle Error Cases, if you would say something like this. I write software that can rely on its inputs working maybe 80% of the time. This is accounting software, so it is NOT allowed to fuck up on corner cases. And I do it just fine. 80% is perfectly sufficient, if you know how to build a system that fails safely.
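To make that concrete, here is a minimal sketch of the kind of fail-safe pattern I mean (the record format, names, and structure are purely illustrative, not taken from any real codebase): malformed inputs get parked for human review instead of being guessed at, so correctness doesn’t depend on the inputs being reliable.

```python
from dataclasses import dataclass
from decimal import Decimal, InvalidOperation
from typing import Optional

@dataclass
class Entry:
    account: str
    amount: Decimal

def parse_entry(raw: dict) -> Optional[Entry]:
    """Parse one raw record; return None rather than guessing on bad input."""
    try:
        return Entry(account=str(raw["account"]), amount=Decimal(str(raw["amount"])))
    except (KeyError, InvalidOperation, ValueError):
        return None

def post_entries(raw_records: list) -> tuple:
    """Split inputs into safely posted entries and a rejected pile for review.

    Correctness doesn't require reliable inputs: nothing malformed is ever
    silently turned into a ledger entry; it just waits for a human.
    """
    posted, rejected = [], []
    for raw in raw_records:
        entry = parse_entry(raw)
        if entry is None:
            rejected.append(raw)  # fail safe: park it, don't guess
        else:
            posted.append(entry)
    return posted, rejected
```

Run that on a batch where only ~80% of records are well-formed and the posted ledger is still exactly right; the other ~20% land in the rejected pile instead of corrupting anything.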
I think this makes you a uniquely bad candidate for this sort of endeavor, because the first iteration of this experiment is going to be running at maybe 80% reliability. You’re going to have a ton of bugs to iron out, and the first run needs to be someone who can work with 80%. And you seem pretty blunt that you’re inept in that area.
Fourth, your thresholds for success are all nebulous. I’d really expect testable predictions, ideally ones that are easy for the community to evaluate independent of your own opinions. It seems like the goal of this exercise should be to produce data, more than results.
All that said, I do value the focus on iteration. I think you will be prone to making more mistakes, and inflicting more unnecessary suffering on participants, but I do not think you have any sort of malicious intent. And with no one else really stepping up to run this sort of experiment… well, if people are willing to make that sacrifice, I’m happy to learn from them?
But I think you dramatically over-estimate your ability, and you’re selling short how badly the first version is going to go. There are going to be bugs. You are going to need to learn to deal with the 80% that you get.
And on top of that, well, the consequences for failure are actually worse than being homeless, since you’re also responsible for finding a replacement. That’s a really huge risk to ask people to take, when you yourself have absolutely nothing at stake.
I think your heart may well be in the right place, but the idea as currently conceived is actively harmful, and desperately needs to build in much better safety protocols. It also needs to be much clearer that this is an initial draft, that it will go badly as people try to figure this out, and that initial participants are going to be suffering through an unoptimized process.
Finally: You don’t have a fail safe for if the whole idea proves non-viable. As it stands right now, you kick everyone out but leave them on the hook for rent until they’ve run 3 replacement candidates by you. In the meantime, you enjoy a rent free house.
It really feels like it needs an “ABORT” button where the participants can pull the plug if things get out of control; if you turn out power mad; or if it just turns out a significant number of participants badly estimated how this would go.
The fact that you have nothing on the line, and no fail-safe / abort clause… really, really worries me?
TL;DR: Your plan is dangerous and you haven’t given nearly enough thought to keeping people safe. Scrap what you have and rebuild it from the ground up with the notion of this being a safe experiment (and I want to emphasize both the word “safe” and the word “experiment”—you should be expecting the initial version of this to fail at producing results, and instead largely produce data on how to do this better in the future).
(Having exchanged half a dozen comments with cousin_it, I now recognize the pattern of a) you’re defaulting to the least charitable interpretation at every possible split point, b) many of your claims and conclusions are flat-out false, c) you’re incredibly confident that you’re correct about all of your assumptions and are including zero nuance or uncertainty, and therefore d) this thread will produce nothing of value. I feel no need to convince people who a, b, and c, especially those who are unable to distinguish object level standards from meta level ones. Contrast your post with jbeshir’s, for instance, which is also highly critical but in an entirely constructive way that doesn’t make the same mistakes.)
Datapoint: I thought handoflixue’s comment was much more reasonable and less uncharitable than cousin_it’s opening comment was; in particular, the points about needing an explicit abort procedure sounded very reasonable and it makes me slightly worried to see you making a comment that implies you’re just disregarding them. (only slightly because of my personal trust in you and your abilities; I expect that people who don’t know you, will get much more worried)
EDIT: I wrote this comment before reading your reply to jbeshir’s comment; your response there further reduces my worry.
Not knowing the author, I can’t say much else than “someone freaked out”? I see mostly a strong emotional reaction, which looks to me similar to a bunch of other strong emotional reactions that people have had when they’ve pattern-matched things in the rationalist community to their stereotype of a cult, without really understanding the community (or necessarily cults either).
Ah, now I see why some smart folks were okay with Duncan’s idea. They pattern-matched criticisms of it to criticisms of the rationalist community! That’s sneaky, even Scott fell prey to it, though he came around quickly (check his tumblr).
It seems like the only way “weird” groups can defend against such radicalization over time is by adopting “normie” ideas. I’ve been advocating that for a while, but I know it’s a hard sell here because many rationalists feel hurt by normies.
They pattern-matched criticisms of it to criticisms of the rationalist community!
Well, what else can you say to a criticism that’s mostly an emotional outburst? That post was using every opportunity it could to interpret Duncan’s post in a maximally uncharitable light and turn stuff into ad hominems, such as “yes, dude, I too had a job in college”. I searched for the “self-insert” phrase like you asked me to, and it brought up a line where the author expressed not liking Duncan’s writing. What substantive point am I supposed to take out of someone’s literary preferences? (also the author mischaracterizes “A Weekend with the Legion”—to the extent that it’s a self-insert fic, it’s one of joining a rationalist group house not founding one, and I’m not sure where the “mary sue” thing came from)
For me personally, a lot of what Duncan wrote resonated with me, in that I’ve long wished to live in a society that would be arranged kind of like he described Dragon Army, and it seemed clear that he’d seen the same things and worked off a similar model. Whereas few of the criticisms seemed to understand those intuitions/emotional needs that I presume we’re both operating out of, so they ended up missing the mark. E.g. I’m totally willing to buy it when he says that he doesn’t actually want to be the leader, both because I’ve met him, and also because not wanting to be the leader is a major part of why I’m not trying to create a similar project myself now that I’ve read his post (that, and because it would be too difficult to explain to people without them pattern-matching it into cults).
It feels weird saying this to you, and please don’t take it too seriously, but if you feel an emotional need to live in a commune with salutes, push-up punishments and restrictions on criticism, have you considered that your emotions might be wrong (from an outside perspective)? For example, many of my emotions are wrong, that’s why I don’t text my exes while drunk.
The things you mentioned seem to me more incidental than essential features of the commune; also I’m not saying that I would agree with Duncan on exactly everything regarding the design—for one, I thought Ender’s Game was an okay book but didn’t see what all the fuss about it was. :) But then again, it’s his project, and I’m sure that my ideal aesthetics wouldn’t be his ideal aesthetics either.
The core things that do appeal to me are… well, this is a little hard to verbalize, since like him I’m operating more off a system 1, pattern-matching basis rather than any explicit first principles. But things like agreement with the sense that the pendulum of modern society has swung a little too far with regard to individualism and commitment, a sense that there is genuine value in being part of a group where everyone is genuinely entirely committed to the project and each other’s welfare (“One for all, all for one”), where people are willing to try whatever weird things work without needing to worry about what outsiders might think, and generally having a strong supportive social structure that offers you help when you’re struggling, pushes you to become the best possible version of yourself when you might otherwise slack off, and provides frequent feedback on how you’re doing regardless.
I think I’d be much happier in a situation like that, rather than the current situation where it feels like I mostly have to figure everything out myself and it’s a constant struggle to find allies for any project that would make things better and which I can’t pull off just by myself.
But sure, I’m open to the possibility that I’m wrong in this and such an environment wouldn’t actually be good for me, or that I’m reading too much into Duncan’s post and that the intuitions he’s operating out of are actually substantially different from the ones I’m having.
If the problem is lack of supporting structure in modern life, surely the answer is joining traditional institutions, not more costly and risky social experiments?
surely the answer is joining traditional institutions
I think this depends on how much alignment you can expect to have with traditional institutions. Quakers let in gays and atheists, but the politics of the typical member grated; joining the Mormons would involve celibacy until God calls up the prophet and tells them that being gay is okay (which I cautiously expect in less than ten years) and lying about beliefs in the supernatural. Joining the military involves participating in ‘wars’ that I disagree with strenuously, and when I was the right age to do it “don’t ask don’t tell” was still official policy (and, I later learned from an acquaintance who did go to the Academy I would’ve gone to, being openly atheistic was seen as an invitation for hazing by some of the instructors).
I’m not inviting people to join the Mormons. The OP’s curriculum would be better covered by joining a gym, meditation group, public speaking club or graphic design course, which don’t have the problems you mention.
I brought up the Mormons because I seriously considered joining them (and rejected it for the above reasons).
I think you’re fundamentally misunderstanding the nutrient being sought out if you think that the list of four things you mention (individually or all together) would actually satisfy the relevant hunger.
I thought the point was learning skills and interacting with people. If the real point is filling a tribe-shaped hole in your soul, I can only repeat my question to Kaj. Are you sure that yearning for a tribe is an emotion that serves your interests?
Are you sure that yearning for a tribe is an emotion that serves your interests?
Given how yearning for a tribe is a “powerful, fundamental, and extremely pervasive motivation” (old paper, but later research has only served to further confirm the general notion), I would guess yes; for me personally, “being in a tribe” seems very much like the strongest unmet terminal goal that I have.
That seems like proving too much, since I don’t yearn for a tribe. Are you sure you aren’t confusing your social needs for a specific dream of fulfilling them?
A motivation can be “extremely pervasive” without being universal. (very few things in psychology are truly universal) You may not share the yearning, but I’ve certainly run into plenty of people who do.
Are you sure you aren’t confusing your social needs with a specific way to fulfill them?
That is possible, and I have made that kind of a mistake before, but if there’s an alternative way of fulfilling them I haven’t found it.
I think you misunderstand the point. The goal is not to develop skills, the goal is to create an emotional web of support that comes from being a bona fide member of a tightly-knit tribe. You don’t (normally) get that at a gym or a public speaking group.
Possibly excluding some religious communities, which I wouldn’t want to join because I’m not religious, I don’t know of any traditional institutions that would provide general life support. Schools have some support structures in place that are aimed at helping you do better at school, martial arts training helps you become better at martial arts, etc. Which traditional institution is one that you can just join, and which is aimed at making all of its members become the best versions of themselves in all respects?
(By the way, I forgot to reply to this in the earlier comment, but I think that interpreting “start from the assumption of good faith when interacting with other members of the house” as “no criticizing the leader” is… not a particularly charitable interpretation.)
When deciding who to put in power and how much power to give them, the principle of charity is harmful.
It seems to me that institutions that claim to make you better in every way are always scams. The fact that a school will teach you only welding, and will give you a welder certificate in a certain number of weeks if you keep showing up, is a feature. If you join two or three institutions according to your interests, you’ll be fully booked in both self-improvement and social interaction, and it’s still less costly or risky than joining an authoritarian commune.
When deciding who to put in power and how much power to give them, the principle of charity is harmful.
There’s healthy skepticism and then there’s twisting words wildly beyond any reasonable interpretation...
Also the level of skepticism should be proportionate to the level of authority requested; it makes sense to be more skeptical the more power someone wants. But my reading of the original post agrees with Sinal’s reading, who compares the level of authoritarianism with that of a Boy Scout troop leader. The original post has stuff like the first rule of conduct for a Dragon being to protect itself; it mentioned that people can “hard veto” proposed experimental norms; people are free to leave the experiment if they wish. Duncan’s authority seems to be limited to upholding policies that were agreed upon by group consensus and running them for a limited time; he has mentioned in the comments that he can be removed from power using the kind of procedures one would expect, e.g. a majority vote. The specific examples of his “tyrannical” powers that were given were things like deciding that a specific meeting will be held on Tuesdays even though not everyone wants the meeting to be on a Tuesday.
The Boy Scout troop leader probably has more power over his scouts than Duncan has in the house, and I doubt we’d consider people obviously unsuitable to be scout leaders for the sin of suggesting that scouts should assume good intent in their dealings with each other.
You’re talking like joining this commune would be a huge enormous risk, and I just don’t see that. Sure there’s a risk, but it’s on the same order as joining any other commune or moving in with other roommates—you risk having a miserable time for a while if it turns out you’re not a good fit for each other, and then things may be inconvenient for a while as you need to look for a new place to live.
Personally I made the mistake of moving in with some wildly incompatible roommates at least once, and have also on other occasions lived together with other people who I’d strongly have preferred not to live together with. Yes, it sucked a lot and made me much more miserable than I probably would have been otherwise. But then I moved out and don’t think I’ve suffered any lasting consequences, and despite the unpleasantness I still don’t consider it a risk on the order of “has to absolutely be avoided”.
It seems to me that institutions that claim to make you better in every way are always scams. The fact that a school will teach you only welding, and will give you a welder certificate in a certain number of weeks if you keep showing up, is a feature.
Agreed that this is a feature: sometimes one really does only want to learn welding. But if you want to learn dancing and everyone’s only teaching welding, with all the places that claim to teach dancing actually being scams… then that’s a major problem for you, and suggests that you’d get a lot out of it if someone did found a dancing school that actually taught dancing and wasn’t a scam.
I think claiming to teach skills that aren’t taught by any traditional institutions is fishy. (This isn’t an isolated demand, I’ve argued with CFAR folks that they should prioritize research into testing rationality, instead of jumping head first into teaching it.)
Yeah, when we want to learn things beyond the expertise of a house member (such as when we learned to use firearms during the weekend experiment) we bring in professional help.
The post says it will help you achieve three goals, of which self-improvement is the most important, and gives a list of 15 skills it will help you learn (many of which are fishy by my standard above).
Which traditional institution is one that you can just join, and which is aimed at making all of its members become the best versions of themselves in all respects?
I think what you’re referring to is something like the Holy Grail of institutions. So if someone claims that they’ve found the global optimum of institutions, the right reaction should be one of heavy skepticism. It’s not wrong to seek the global optimum, but when someone proposes that it exists in some well-explored territory based on a somewhat simple model, the argument they should present for it would probably look something like 1) We overlooked some seemingly trivial, but serious details that would have fixed the major issues we had previously and/or 2) Iterating on this idea for a while will not result in diminishing gains for a considerable time.
What we have in society right now is a bunch of local optima for specific needs. I think we should be prepared for the scenario in which the global optimum looks weird, and is composed of a sort of hodgepodge of various fixes and hacks and specific set-ups to meet different requirements for different people. And I know this looks ugly, but that’s typically what solutions produced by optimization processes look like. I consider a single hierarchical institution to be a simple model, and therefore consider it unlikely that such an ambitious goal will be reached using such a solution.
So, based on my above model of institutions, I place low probability on a solution that consists of an already well-explored simple model, or one that lacks a considerable number of details tacked on through consistent iteration and optimization. Right now I think this experiment will have to be run with significant fail-safe mechanisms in place and outside observation, so that this process can actually take place.
It’s not obvious to me that Duncan is proposing that. See my comment here. To me, it seems more like iterating and optimizing towards the optimum would get you something far from both the extremes of the libertarian egalitarian model and the one-person-in-charge-of-everything model.
I mentioned in another comment that Duncan’s role seems to be “upholding policies that were agreed upon by group consensus and running them for a limited time”; this does seem like it’s pretty distant from both rampant individualism and one-person-in-charge-of-everything to me.
I’m not sure of how to interpret your referenced comment; you seem to be talking about the “old model” being “cults”, but I don’t know what you mean by cults—I interpret a “cult” to be something like “a small group rallied around a charismatic leader with absolute authority”, but I don’t think that has been the predominant mode of social organization at any point in history?
I interpret “cult” as applicable to both small and large groups and not dependent on whether the leader has charisma or not (It could also refer to small tribes with chieftains, dictatorships, absolute monarchies, etc.). And I think in this regard it has been the predominant mode of social organization throughout history.
But after seeing Scott’s “on fourth thought” I have been more convinced that Duncan has been moving in the direction of placing limits on his power and making sure the appropriate safe-guards are in place, which has updated me away from seeing the pendulum as swinging too far in the opposite direction. I think the question remains whether or not continued updates and iterations will involve further limitations on his authority.
being part of a group where everyone is genuinely entirely committed to the project and each other’s welfare (“One for all, all for one”), where people are willing to try whatever weird things work without needing to worry about what outsiders might think, and generally having a strong supportive social structure that offers you help when you’re struggling, pushes you to become the best possible version of yourself when you might otherwise slack off, and provides frequent feedback on how you’re doing regardless.
Sure. You are describing a church group, or maybe an entire sect/denomination (see e.g. pretty much all early Protestant movements).
Is it a good idea? As usual, it depends :-/ Sometimes it works out and sometimes it doesn’t. Sometimes you spend a safe and content life doing good work, and sometimes you find yourself killing evil abominations like Catholics.
Besides, such groups evolve and usually not in a good direction. Becoming bureaucratic and ossified is relatively harmless, but being taken over by sociopaths (as per ribbonfarm) can be much worse.
Ok. If you don’t mind, I’ll use you as an interpreter for Duncan, since he doesn’t answer questions much. Can you explain why the idea of a group house with salutes, push-up punishments, restrictions on criticism etc. appeals to you? Is there any evidence that it would help learn skills more effectively, compared to taking a class? Why do you feel that the obvious dangers aren’t dangers, apart from knowing Duncan personally (many real world tyrants were reportedly charming in person) and seeing the list of excuses that’s identical to that of every other cult?
I resisted playing the fallacy game with Duncan because he’s clearly just parroting stuff, but I expected better from you. Okay, let’s go. “You’re being emotional” and “you’re pattern matching” are examples of the bulverism fallacy. Your turn.
This person’s post, while containing some overlap with the more true and useful criticism here, is also not the sort of thing I expect people to cite on LW and not, I think, a useful entry in the back and forth here.
On the other hand, the difference in our levels of endorsement of it explains a lot about why our interaction went south in a hurry.
Quoting Qiaochu:
I would like everyone posting criticism, especially heated criticism, to keep very firmly in mind that Duncan did not have to write this. Whatever your opinion of him, at least make sure you’ve factored in the evidence that he wrote this whole, weird thing, complete with references to Ender’s Game, Fight Club, etc. instead of writing either 1) nothing or 2) something much more reassuring.
There are critics who think Duncan is incompetent and overconfident, and about this hypothesis I can say at least that it is consistent with Duncan having written this post. Then there are critics who think Duncan is, I dunno, evil or power-hungry or something, and I think those people are mostly failing to see what is in front of them.
I was tentatively willing to give you some benefit of the doubt even though I don’t know you but I’m really disappointed that you feel the need to score points against a rationalist-adjacent posting to her Tumblr about how your post looks to her from her outside vantage point. I brought a similar-amount-of-adjacent friend to the seder and it freaked her out. Rationalist shit looks bizarre from a couple steps away. You do not have to slam my friend for not being impressed with you.
Fair point. I will edit the above to remove point-scoring criticism; if this person wanted to be exposed to it, they would’ve posted here directly. I’ll ask you to leave your comment so it’s clear what originally occurred.
That being said, they certainly have no qualms about tearing into me. Like, my response to them was not a response to “I am unimpressed” or “I have a negative reaction to this,” and I think it’s a little disingenuous or unfair of you to summarize their content thusly. It’s … an asymmetric expectation of charity? Holding a double standard? Or something like that. I’d hope you’d offer feedback to them similar to what you said to me here, to see how they respond.
I know her and she has earned some charity from me. You’re a stranger soliciting a line of credit. Also, her task is “opine on Tumblr” and yours is “benevolent dictatorship”. If you want me to convey to her that your feelings were hurt I could do that for you, I suppose.
It’s less that my feelings were hurt (they were, a little, but I’ve developed a pretty thick skin around “strangers are wrong about me”), and more that you’re saying, to me, “hey, please don’t be uncharitable or overly critical or focus on point-scoring,” and I think the point-scoring exhibited in that post would cause me, in your shoes, to make a symmetric point to my friend. It’s a consistency thing, of supporting the norms I want to see in all places, ignoring partisan or loyalty lines (being willing to call out my allies as much as I’m willing to call out a stranger or an enemy).
I guess if I were to ask you to convey a message, it would be “this person thinks you’ve jumped to unfounded conclusions, and wonders what odds you’d put on ‘I might be wrong.’”
Thanks. As Lumifer has pointed out, I have become more defensive in the past 36 hours, but I claim it’s almost entirely limited to the two individuals who have shown themselves to be deontologically hostile and extremely overconfident in their models. There’s obviously wiggle room in there to say “Eh, even given that, Duncan, I think you’re overreacting,” but if so, it’s because I feel that after a hundred comments and a multithousand word post (that I didn’t have to make at all, in the first place) I deserve some credit à la I’ve clearly demonstrated willingness to engage positively with criticism and update publicly and admit wrong and so on and so forth (and therefore don’t like comments that presuppose me not being all those things).
I have absolutely no confidence that I’m correct in my assertions. In fact, I was rather expecting your response to address these things. Your original post read as a sketch, with a lot of details withheld to keep things brief.
The whole point of discussion is for us to identify weak points, and then you go into more detail to reassure us that this has been well addressed (and open those solutions up to critique, where we might identify further weak points). If you can’t provide more detail right now, you could say “that’s in progress, but it’s definitely something we will address in the Second Draft” and then actually do that.
I’ve said “that’s in progress, but it’s definitely something we will address in the Second Draft” all over these comments. You jumped into the discussion two days in and just … didn’t bother to read? I feel defensive and upset over this, because a big part of doing this whole thing out in the public view was to build credibility as a good actor who listens and updates, and I feel like you just completely ignored all the evidence of that as you started to write your critique.
And in that critique, you used a bunch of phrases like “I don’t think you have the slightest knowledge of Fault Tolerant Design” and “you haven’t given nearly enough thought to keeping people safe” and “you yourself have absolutely nothing at stake” and “you seem really unaware of X” and “you’re a uniquely bad candidate” and “the idea as conceived is actively harmful” and on and on and on. You cannot pretend that this language does not indicate strong confidence. Words have meaning.
And most of those things presuppose stuff about my internal state, or my experience, or actions I have or have not taken, and assert those things as fact or extremely likely probability, rather than putting in any kind of hedge or owning “I could be wrong about this” or whatever. You take all sorts of things that you cannot possibly know, and instead of asking about them, build up a structure in which they’re taken as given and Everything Is Bad. You do say “it seems to me” a few times, so some credit is due there, but overall, your post was overwhelmingly assertive and aggressive and lecturing/condescending, in stark contrast to the vast majority of the critical feedback (and in stark resemblance to the few comments I’ve responded to with hostility).
You did not come across as trying to identify weak points and then find out what I thought about them; you came across as trying to tell me that I’m bad/dumb/evil.
For the record: all of your points are under consideration, many of them have been completed to satisfaction within the group, and those which remain are either a) highlighted elsewhere in the comments here by me saying “Yeah, that’s a solid point, we should do something about that,” or b) have, on reflection, been ranked as low-priority.
In the absence of a sound rebuttal to the concerns that I brought up, you’re correct: I’m quite confident that you are acting in a way that is dangerous to the community.
I had, however, expected you to have the fortitude to actually respond to my criticisms.
In the absence of a rebuttal, I would hope you have the ability to update on this being more dangerous than you originally assumed.
Bluntly: After reading your responses, I don’t think you have the emotional maturity necessary for this level of authority. You apparently can’t handle a few paragraphs of criticism from an online stranger with no investment in the situation. Why should I possibly expect you to be more mature when dealing with an angry participant whose housing depends on your good will?
On the off chance that you’re actually open to feedback, and not just grandstanding to look good...
1) I apologize if my tone was too harsh. You are attempting something very dangerous, on a path littered with skulls. I had expected you were prepared for criticism.
2) Commit to posting a second draft or addendum, which addresses the criticisms raised here.
3) Reply to my original post, point by point. Linking me to other places in the thread is fine.
Screw you; it’s not “on the off chance,” it’s been overwhelmingly demonstrated and backed up by multiple people in this thread. You’re attempting to highlight “emotional maturity” in a way that means “I want you to let me be socially dominant over you, despite the fact that I’m violating norms of good faith and discourse.”
Tolerance is not a moral absolute; it is a peace treaty. Tolerance is a social norm because it allows different people to live side-by-side without being at each other’s throats. It means that we accept that people may be different from us, in their customs, in their behavior, in their dress, in their sex lives, and that if this doesn’t directly affect our lives, it is none of our business. But the model of a peace treaty differs from the model of a moral precept in one simple way: the protection of a peace treaty only extends to those willing to abide by its terms. It is an agreement to live in peace, not an agreement to be peaceful no matter the conduct of others. A peace treaty is not a suicide pact.
In fact, what I have is sufficient emotional maturity to notice when I’m being bullied, and not roll over, even if it’s somewhat socially frowned upon for the bullied to fight back openly. i.e. I reflectively endorse both the calmness and openness with which I’ve reacted to the majority of commenters, and the degree to which I have risen to and matched your hostility rather than just letting you punch unfairly.
I’ll do 3) if and only if you rewrite your original point to include a generally normal amount of epistemic uncertainty/humility for claims made on LessWrong about a person you don’t know well, after that person’s demonstrated willingness to be transparent and to update.
And just to be clear: I don’t give a shit about social dominance. I’m not trying to bully you. I’m just blunt and skeptical. I wouldn’t be offended in the least if you mirrored my tone. What does offend me is the fact that you’ve spent all this time blustering about my tone, instead of addressing the actual content.
(I emphasize “me” because I do acknowledge that you have offered a substantial reply to other posters)
I don’t want to mirror your tone because I think your tone is both socially corrosive and epistemically unsound. I’ve at least in part been fighting you so hard because I want to publicly defend a stance that the way you’ve acted in this thread is unacceptable. Saying “I’m just blunt and skeptical” is not a complete description of the posts you’ve made; others in this thread have been blunt and skeptical without jumping to conclusions, lecturing, and being wildly overconfident that their map is accurate enough to justify throwing excrement around.
I think you’ve fallen far short of the standard of a place like LW in this thread, and I want that opinion known to anyone trying to model me.
You seem to feel that publicly shaming me is important. Should participants in your group also expect to be publicly shamed if they fall short of your standards / upset you?
With the caveat that I’m attempting to shame the way you’re going about engaging in discourse much more than I’m shaming the core of you as a person (really, you’re the one operating on the level of the fundamental attribution error within this particular thread; look in a mirror)—yes, absolutely. Part of having standards is making it socially unacceptable to fall grossly short of them.
That’s modified by things like the “saving face” section above, and the clear intention for all of us to grow and improve, me included—none of us are getting it right on the first try, and you have to scaffold growth and reward people who are willing to try to change for the better with gentle affirmation.
It’s further modified by the fact that people who don’t like these standards can simply not join, and I’ve spent now well in excess of 100 hours making my models crystal clear to those who are considering opting in (so that their decision can be fully informed).
But yeah—anybody who’s falling as far short as you absolutely deserves to be called out for it, and given a choice between “do these concrete things differently” or “lose social points.” Since you’ve repeatedly refused to stop jumping to conclusions and ignoring evidence that I’m acting in good faith and not an idiot—since you’ve refused to do concrete things differently—yeah, I wholeheartedly endorse you losing social points, and people updating the way they assume interactions with you will go as a result.
You’ve even conceded to others that I’m a cut above the “other trolls” here, and have input from others that I’m trying to raise concerns in good faith.
I think the problem here is the same as the problem of enforcing repayment of loans. If someone borrows a bunch of money, and then later has no money to repay, how should society respond?
Obviously, the thing is not simply “demand money.” Similarly, though, there can’t be no standard of requiring recompense, because that sets up a really bad incentive.
So my current plan is (in addition to really heavily highlighting that people need to think this through/talk with their advisors/visualize failure/ensure they have a buffer sufficient for likely amounts of damage) to set up something like the following norms:
If you conclusively determine that you need to drop from the experiment, no one is allowed to argue or convince; this is referred to as “rule-one-ing out,” and is a thing that we will explicitly practice in small doses in the hope that this will transfer over to larger spaces.
If dropped, you retain full access to kitchen, bathrooms, lawn, living room, etc. but agree to physically avoid house activities (and those house activities will e.g. change to not use shared rooms that you live in). You’re also welcome to leave, but maintain the same sort of “normal” financial obligation that people have when they suddenly vanish, i.e. you’re still paying for your slot for a little while.
“A little while” means that you agree to put forth good-faith effort to find a viable replacement. I said “three potential replacements” as an initial guess to point toward “it’s harder to replace yourself here than in normal houses; there should be some limit to your obligation if we say ‘no’ to your first two choices; you’re definitely not on the hook forever.” It’s possible that the answer should be “two” or something else.
In the event that this fails, something like “you’re on the hook, financially, for rent payments in the 2-6 week window from the time you drop,” which seems like a non-Draconian and fairly boilerplate norm (“this month, and next month too if ‘this month’ ends really soon”).
In the event that this fails, I was planning to just … secretly and quietly absorb the blow? This is made worse by your demand that it be explicit (some things are better as back pocket options), but whatever—few people will see this part. The idea is that OBVIOUSLY (unless you’re starting from the presumption that Duncan is evil) you have to make accommodations for a person who is (by the time they reach this step) both emotionally and financially exhausted/compromised, and part of the whole point of having a large community is that it creates flexibility to absorb blows like that (the damage is spread out enough that it becomes manageable on an individual level).
So at that point, yeah—people could just straight-up defect on the house, and the idea was NOT to blare that from the rooftops, because now there’s a clear incentive for defectors to just defect and impose costs on everyone else. That would’ve been better left as an obvious implicit norm that’s universal among decent people.
On a broader, whole-house level, we’re having open retrospectives every week, with opportunities for both nonymous and anonymous feedback and discussion. I put the odds of this going that far south in under six months at far less than 1%, but in the event that a majority of people decide the thing is bad, it’ll be at most six days before they have a chance to all say so, at the obvious Schelling point for coordination, at which point there’ll be a clearly decisive mass of consensus and I’ll just—be overruled. This is further made more likely to happen if-it-needs-to-happen by the fact that elsewhere in the thread I’ve committed to instituting a requirement that people check in weekly with outside advisors, and by the fact that there are multiple strong/iconoclastic/independent/healthily self-protective personalities in the mix who would have little to no fear in openly opposing me if they needed to, and by the fact that there’s a known second-in-command who’s a good coordinator in the event that things need to happen without me being looped in (noble coup).
I notice I am very confused as to why you keep reiterating actual talking points from actual known-dangerous cults in service of “providing evidence that you’re not a cult.”
For instance, most cults have a charismatic (“well known”) second-in-command who could take over should there be some scandal involving the initial leader. Most cults have written thousands of words about how they’re different from other cults. Most cults get very indignant when you accuse them of being cults.
On the object level: Why do you think people will be reassured by these statements, when they fail to differentiate you from existing cults?
Stepping up a level: how much have you read about cults and abusive group dynamics?
On the object level: because a plurality if not a majority of actual, real humans have indeed been reassured by them, including some who were open critics and said things like “I traveled 50% of the distance toward ‘this is a good idea’ [just from this post].” It’s worth noting that I’m not going to refrain from saying true things that cults have also said; reversed stupidity is not intelligence and the thrust of this post was never “differentiate myself from cults,” it was “here’s a thing I want to try.”
On the discourse level: still jumping to conclusions left and right. “When Duncan said well known, he must have meant charismatic, obviously.” False—Eli Tyre is many, many good things, but “charismatic” is not usually a compliment given to him. Furthermore, I note that you decided to ignore all of the other object-level content in favor of picking one nit (based on false assumptions), so I’m taking that as “you had nothing good to criticize in that other stuff, and so you decided not to say anything at all,” i.e. you’re unable to say “good point” and update incrementally.
Stepping up a level: since you’re inclined to view everything I say in the worst possible light and to leap uncharitably to conclusions, I claim that I’m justified in theorizing that literally no answer would’ve satisfied you (had I said 10 hours, you’d have been smugly dismissive of my lack of research; had I said 1000 you’d have said ‘well, you obviously weren’t paying attention’), and that it was a bullshit question to begin with.
We’re done; I anticipate that other skeptics in this thread (like decius and lumifer and deluks and taygetea, for example) will provide me with the overwhelming majority of the value you might offer, and at a fraction of the cost in you’re-doing-a-bunch-of-the-things-the-sequences-exist-to-warn-against.
Also, as far as “we’re done” goes: I agreed to rewrite my original post—not exactly a small time commitment, still working on it in fact. Are you seriously reneging on your original agreement to address it?
See, now you’re the one leaping to conclusions. I didn’t say that all of your talking points are actual talking points from actual cults. I am confused why even some of them are.
If you can point me to someone who felt “I wrote thousands of words” is, in and of itself, a solid argument for you being trustworthy, please link me to it. I need to do them an epistemic favor.
I was using “charismatic” in the sense of having enough of it to hold the group together. If he doesn’t have enough charisma to do that, then he’s kinda worthless as a commanding officer, neh?
Your claim is false. I wanted to know at what level to hold this conversation. I legitimately can’t tell if you’re waving a bunch of “this is a cult” red flags because you’re trying to be honest about the risks here, because you don’t realize they’re red flags, or because you’re playing N-Dimensional chess and these red flags are somehow all part of your plan.
Can you elaborate on the notion that you can be overruled? Your original post largely described a top-down Authoritarian model, with you being Supreme Ruler.
How would you handle it if someone identifies the environment as abusive, and therefore refuses to suggest anyone else join such an environment?
You discuss taking a financial hit, but I’ve previously objected that you have no visible stake in this. Do you have a dedicated savings account that can reasonably cover that hit? What if the environment is found abusive, and multiple people leave?
Anyone entering your group is signing a legal contract binding them to pay rent for six months. What legal commitments are you willing to make regarding exit protocols?
I notice that you are unusually unable to notice yourself jumping to conclusions. As a challenge, can you find the conclusions you’re still jumping to, above, without curiosity or caveat? Note the plural on “conclusions.”
An excellent question whose answer I’m interested in exposing to literally anyone other than you, the troll, and cousin_it. Also, a question that has been openly and actively discussed and is not yet fully finalized, but boils down to “pretty close to the obvious stuff about voting majorities.”
I am not and have not at any point required that “people should proselytize this, and encourage others to join.” So, I wouldn’t object or find it unreasonable if someone didn’t encourage others to join.
You’ve previously talked out of your butt without ever expressing curiosity as to my visible stake in this. So, repeat my answer to 1: a fine question, which everyone is encouraged to feel curiosity about, and which I’d be motivated and eager to discuss with the potential participants and everyone except you, the troll, and cousin_it.
Similarly, an excellent question that I don’t think is any of your business, though I continue to endorse the fact that I’ve voluntarily made it the good 97% of LessWrong’s business. And I know this is giving away part of the answer, but you just assumed that people would be signing lease agreements with me rather than with the owner of whatever house we rent (and therefore that I would have some fully controlling role in determining exit protocols, rather than simply being a coordinator and a negotiator).
I used the word visible to make it clear that there might be some stake which is not visible to me. If you have made your stakes visible in this thread, I’ll admit I missed it—can you please provide a link?
Furthermore, locking all of this into place in formal language was not a thing I was going to do by myself, but rather was going to be a collaborative, consensus-based process engaged in by the group as a whole, which is obvious if you look at all the other places in this thread and in the original post where I say that we’re going to discuss and iterate and figure things out together.
Or, for example, by the fact that I chose Dragon Army as the model, and not (as has come up elsewhere) Salamander Army.
You shouldn’t quote Scott for support, because he just wrote this:
On third thought, everyone else is right and I am wrong. The Dragon Army group house is a very bad idea, enough so that it’s okay to be forceful in encouraging Duncan to modify it or other people not to join it. This is true even if the required modifications are so hard that they end up torpedoing the project.
First, thank you for writing the post so fully and readably—it is really impressive! And I do hope you go ahead and do this, in whatever way you decide upon. But even if I fully believed the setup was safe (which I do) and the results would be exactly as intended, in the most useful and generally good way, I wouldn’t join.
Because I think that when people become parents, they suddenly find themselves in a world that is much more uncertain. You can’t reliably say that you will sleep through the night, for example, even when the kid mostly does. And this is already hard enough to get used to—I know from experience—and it is also hard to begin anew (though this might be less so for men). Imagine having actually trained yourself to be 100% in control of what you do, or even letting other people know that you are that kind of person. It’s just not robust.
Reading the comments… well, this escalated quickly.
I can imagine this going either horribly right or horribly wrong. So I’d appreciate it if a group of volunteers actually did the experiment, instead of everyone offering their preferred analogy for what should happen. Preferably with good safety mechanisms, of which I can imagine two, already mentioned in this debate:
(1) Give members a mandatory time off, once in a while, to spend with their friends outside the “Army”. Not just a weekend, but a full week, once in a while.
(2) If possible, it would be good to reduce the financial impact of leaving the group as much as possible. In a perfect world, there would be none. But of course, if you want to live in the same house, that costs money. It would be nice if the group could somehow collect extra money, as insurance, to allow people to leave without financial consequences. Perhaps make everyone pay 10% or 20% extra for the house? (A rough numeric sketch of how that could add up follows below.)
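To make the insurance-fund suggestion concrete, here is a minimal back-of-the-envelope sketch. All of the specific figures (ten members, $800/month rent each, a 15% surcharge, two months of exit liability) are hypothetical assumptions chosen for illustration, not numbers from the proposal.

    # Minimal sketch of the "insurance fund" idea: members pay a surcharge on
    # rent, and the pooled surplus covers the rent of someone who leaves early.
    # All numbers below are hypothetical, for illustration only.

    members = 10
    rent_per_member = 800      # dollars per month (assumed)
    surcharge = 0.15           # 15% extra paid into the fund (assumed)
    exit_liability_months = 2  # months of rent a departing member would owe (assumed)

    monthly_fund_income = members * rent_per_member * surcharge      # $1,200/month
    cost_of_one_departure = exit_liability_months * rent_per_member  # $1,600

    months_to_cover_one_exit = cost_of_one_departure / monthly_fund_income
    print(f"Fund covers one early exit after ~{months_to_cover_one_exit:.1f} months")

Under these assumed numbers the fund would cover a single early departure after roughly a month and a half of contributions; the point is only that a modest surcharge plausibly buys a meaningful buffer, not that these particular figures are right.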
There is always a tension between freedom and commitment, and between individual freedom and group cooperation. It seems generally good to err on the side of freedom, because people in positions of power often have a bias in favor of less freedom (for others, of course), so this is how we balance it. On the other hand, akrasia—almost a proverbial trait of wannabe rationalists—is often an inability to follow one’s own commitments. Already damaging for individuals; making group activity almost impossible. It would be nice to be able to overcome this, and enter high-commitment situations (with limited scope, for limited time). Otherwise, we lose a lot of potential.
I can imagine myself benefitting from some kind of commitment enforcement, and rational life coaching in general. Of course, the devil is in the details. That’s where things can go wrong easily. But if we can create enough safeguards, I support trying this, because there is so much to win.
A possible approach could be to select in advance two or three people trusted by the rationalist community as supervisors of the project. The supervisors would not participate in the project directly, but would have regularly scheduled meetings with members, individually, outside of the project, where the members could provide their opinions, and after hearing all of them, the supervisors would post an anonymized summary report on LW.
EDIT: Except for the part about posting an anonymized summary report on LW. It’s entirely reasonable to have outside advisors and supervisors (in the sense of “well, if the thing’s as good as I say it’ll be, then I have no reason to want to hide”). However, it’s silly to pretend that the house grants LW any kind of oversight, or specifically seeks LW’s approval—I posted here because I thought LW would be a) mildly interested and b) would, in exchange for the mild interestingness be willing to provide some solid, concrete criticism, but that’s pretty much as far as it goes.
A “culture of abundance” in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
That reminds me of an event during a retreat where a cake couldn’t get baked because the chocolate that had been brought to bake the cake was consumed beforehand. It was even baking chocolate.
It seems like good cooking or baking leads to people buying specific ingredients and it’s bad if they can’t count on those ingredients not being consumed before the planned meal.
You might also want a mechanism to handle “staples” that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I’d have no objections to other people eating them, but if they did I’d want them to take responsibility for never leaving the house in a state of “no X on hand”.
The food policy strikes me as one of the more trivial and unimportant parts of the proposal. I’m not saying you’re taking it too seriously—I think that shared living spaces should have clear rules about who gets to eat what. It’s just that this particular food policy seems easy to change without changing the core “authoritarian” structure of the Dragon Barracks.
1) You focus heavily on 99.99% reliability. That’s 1-in-10,000. If we only count weekdays, that’s 1 absence every 40 years, or about one per working lifetime. If we also count weekends, that’s 1 absence every 27 years, or 3 per lifetime. Do you really feel like this is a reasonable standard, or are you being hyperbolic and over-correcting? If the latter, what would you consider an actual reasonable number? (A quick arithmetic check of these figures appears after this list of questions.)
2) Why does one person being 95% reliable cause CFAR workshops to fail catastrophically? Don’t you have backups / contingencies? I’m not trying to be rude, I’m just used to working with vastly less fragile, more fault-tolerant systems, and I’m noticing I am very confused when you discuss workshops failing catastrophically.
the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.
3) Numerous open source programs have been written via a web of one-shot and low-reliability contributors. In general, there’s plenty of examples of successful systems that tolerate significantly more than 0.01% defection. Could you elaborate on why you think these systems “close the loop”, or aren’t destroyed? Could you elaborate on why you think your own endeavors can’t work within those frameworks? The framing seems solidly a general purpose statement, not just a statement on your own personal preferences, but I acknowledge I could be misreading this.
4) You make a number of references to the military, and a general philosophy of “Obedience to Authority”. Given the high rate of sexual assault and pointless bureaucracy in the actual military, that seems like a really bad choice of role model for this experiment. How do you plan to avoid the well known failure states of such a model?
5) You raise a lot of interesting points about Restitution, but never actually go in to details. Is that coming in a future update?
every attempt by an individual to gather power about themselves is at least suspect, given regular ol’ incentive structures and regular ol’ fallible humans
6) You seem to acknowledge that you’re making an extraordinary claim here when you say “I’ve noticed the skulls”. Do you think your original post constitutes extraordinary proof? If not, why are you so upset that some people consider you suspect, and are, as you invited them to do, grilling you and trying to protect the community from someone who might be hoodwinking members?
7) Do you feel comfortable with the precedent of allowing this sort of recruiting post from other people (i.e. me)? I realize I’m making a bit of an ask here, but if I, handoflixue, had written basically this post and was insisting you should trust me that I’m totally not running a cult… would you actually trust me? Would you be okay with the community endorsing me? I am using myself specifically as an example here, because I think you really do not trust me—but I also have the karma / seniority to claim the right to post such a thing if you can :)
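(Regarding question 1: here is the quick back-of-the-envelope check of the 99.99% figures. The 260-weekday year, 365-day year, and ~80-year span for “per lifetime” are assumptions made for this sketch, not numbers from the original post.)

    # Back-of-the-envelope check of the 99.99% reliability figures above.
    # Assumed: 260 weekdays/year, 365 days/year, ~80-year span for "per lifetime".

    reliability = 0.9999
    obligations_per_miss = 1 / (1 - reliability)          # 10,000

    years_per_miss_weekdays = obligations_per_miss / 260  # ~38.5 -> "about 40 years"
    years_per_miss_all_days = obligations_per_miss / 365  # ~27.4 -> "every 27 years"
    misses_per_lifetime = 80 / years_per_miss_all_days    # ~2.9  -> "3 per lifetime"

    print(years_per_miss_weekdays, years_per_miss_all_days, misses_per_lifetime)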
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
I want to publicly express my strong support for this experiment/meta-experiment.
I think that my support is particularly noteworthy as I’m presently a core member of a different taking-each-other-seriously co-living experiment that is profoundly different in its philosophy. (Mine is not in Berkeley, nor rationalist.) Therefore some people might assume that I would be opposed to Dragon Army Barracks.
Things in common between the experiment I’m part of and Dragon Army Barracks:
is “high-commitment, high-standards, high-investment”
is trying to actually make & achieve something together
is addressing the unanchored abandoned loneliness thing
has consciously explicated commitments and assumptions
is intended to produce a high-level of consistent excellence and ability to effectively collaborate
Things that are different:
We’re very far from authoritarian or hierarchical. Although we’re also not egalitarian, consensus-based, or even democratic per se… but we have essentially zero of telling-other-people-what-to-do
Our basic collective navigating framework is [Kegan-5 / fluid mode / post-rational], rather than [Kegan-4 / systematic mode / rational] (good summary of this distinction)
Our focus is almost entirely on the meta-level of building the new cultural platform we’re building. We don’t have any expectations of each other on the levels of specific object-level projects or explicit behavioral norms (aside from ones necessary for the house’s function)
I think that these differences are core to why I am part of this project that I’m part of, and why I consider it to be the most valuable investment I could be making with my time and energy. I am, therefore, non-Berkeley-residence aside, not going to be applying to DA. As I said above though, I strongly support Dragon Army Barracks as an experiment and potentially as an ongoing resource to individual and collective growth.
Reasons why I think that DA is a good idea:
Expected value of high amounts of worthwhile object-level output. As Sebastian Marshall says, “the gains made from living more purposefully are forever—the time you’ve spent well will remain well-spent even if you fall off for a while sometimes. Most people don’t even try, which is why most people don’t succeed.”
I expect it will also produce a lot of developmental progress for people involved; that if you were to be able to sort rationalists by amount of growth in a year, the Dragons would all be in the top quartile, and would occupy many of the top 10 slots. This, even if the experiment were to end after 6 months.
The DA Barracks is an intervention that is attempting to produce change on a very fundamental level of the system that is a group house. This is a powerful leverage point (see Donella Meadows’ article… I would say this is around a 2 or 3, and most group houses have only done mild experiments at the 4-6 level.)
I agree with and/or resonate with the six points that Duncan makes in Section 2 of this document.
The project-level value of learning here is also very high: this will greatly inform future experiments, whatever their leadership basis.
If I had kids, I would absolutely sign them up for any summer camps or classes Duncan was running. I think the amount of power he would have in relation to them would be similar to the amount of power he’ll have in this situation.
A final reason is this: I think that we as humanity need to rapidly make progress on being able to effectively coordinate in non-hierarchical ways, which is what the project I’m part of is about. Corollarily, humanity is kind of mediocre at doing this in many contexts. Therefore if non-hierarchical projects aren’t emphatically directed towards solving that challenge itself, I expect them to be outperformed by projects that are leveraging existing understanding about how to coordinate effectively in hierarchical ways. i.e. in this case, Dragon Army Barracks.
I really, really wish Kegan levels didn’t come in an order, so a claim to be at a higher Kegan level than someone else didn’t look so starkly like a claim to superiority. It’s turning me off even trying to take them seriously, because everyone who uses them looks like they’re just self-aggrandizing to me.
I’m totally with you in wishing that Kegan levels weren’t getting socially entangled with claims to superiority!
...but that can’t be achieved in the way you describe: they would be a fundamentally different thing if they didn’t come in the order they do. It’s not a personality typing system, it’s a model of human development over time. Probably some people who are talking about them are self-aggrandizing; people are known to do that with just about everything they can get their hands on.
I suspect that your heuristics about not trusting people who brag about their Kegan levels are probably decently good heuristics, as it could be reasonably expected that that would be ineffective in just the way you’re describing here.
I first learned about the CDT model from a conversation I had with someone who used to work with Kegan, and who readily noted that he was not himself consistently operating out of stage 5. Robert Kegan has said that about himself too, which I found surprising and originally interpreted as being a failure mode in the opposite direction—false humility or something. But now it strikes me as not that unlikely. There’s a big difference between being able to recognize abstractly (or in others) what it means to be subject to one’s own interpretations & ideologies, and being able to actually not do it.
There’s an unfortunate phenomenon here, where the value of the concept gets diluted because the people who are finding the Kegan models helpful but aren’t claiming to be at higher Kegan levels than others… are harder to notice.
Anyway, I realize that I may sound like I’m making a superiority claim here myself. I will address that directly, kind of like Duncan is doing re: skulls above.
My understanding—based more on reading things like this than Kegan’s own work—is that the “fluid mode” (~=K-5) does have capabilities that the “systematic mode” (~=K-4) does not; much like multivariate calculus can be used to re-derive the equation for the volume of a sphere, but not the reverse. Is multivariate calculus superior to sphere equations? In functional senses yes, but not in a social status way. And also not in all domains! It’s certainly slower if you just need to calculate the volumes of a bunch of spheres.
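(For what it’s worth, a minimal sketch of one way that re-derivation goes, via the standard triple integral in spherical coordinates:

$$V = \int_0^{2\pi}\!\int_0^{\pi}\!\int_0^{r} \rho^2 \sin\phi \; d\rho\, d\phi\, d\theta = (2\pi)(2)\left(\tfrac{r^3}{3}\right) = \tfrac{4}{3}\pi r^3$$

whereas knowing the formula $\tfrac{4}{3}\pi r^3$ by itself gives you no way to recover the machinery of integration.)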
I’ve spent a considerable amount of time over the past year working to develop the ability to operate in the fluid mode, and I think that that makes a lot of sense for me and many other people, but I don’t think that that’s highest priority for everyone right now. Hence my strong support for Dragon Army.
I like the paragraph “my understanding” a lot. In particular, while I think I have some limited, flickering access to K5, I notice that operations which come out of being solidly K4 often cause me to outstrip/outperform people who are entirely in K5, which seems to me to be something analogous to “I’m successfully calculating the volumes of a bunch of spheres and you’re just stuck there mired in re-derivation.”
I’m not sure what it means to be entirely K5. To me the phrase sounds like Chapman’s description of the postmodernists who are at K3 and tried to skip K4 entirely and are without any real access to the ability to use a system.
Fair. “People who overwhelmingly operate from a thing where I’m comfortable applying the label K5,” where overwhelmingly means 90+% and comfortable means 90+%.
Our basic collective navigating framework is Kegan-5 / fluid mode / post-rational, rather than Kegan-4 / systematic mode / rational (good summary of this distinction)
How do you filter for people who are Kegan-5 when you are seeking to accept members?
We don’t! Each of the individual members themselves aren’t necessarily Kegan-5, but the person spearheading the project (who is in her 70s) certainly is. And so, therefore, are our models, our equivalent to a “charter”, etc.
It’s also the case that the mode of interaction that we’re training here is fluid as opposed to systematic, which shows up in the ways that we make agreements, commitments, and the general way-we-do-things-here. I was very much operating in (and committed to!) systematic mode when I first joined several years ago, and I’m still getting comfortable with this. It’s challenging but worth it, and we’re working to build a bridge to meta-rationality to make that learning process easier.
I think that Duncan’s intended context will potentially be (a) an awesome place to go from Kegan-3 to Kegan-4, and (b) an awesome place to operate in an exceedingly high-functioning Kegan-4 way. It asks that of its members. I don’t expect it to create a demand for most Dragons to operate in a Kegan-5 way, which is the core difference between it and the project I’m a part of.
Not officially at this stage; we’re in a process of overhauling a lot of things, including answers to questions like “who are we?” and “what are we calling ourselves?”
That said, this category of posts on my blog has a lot of content about our philosophy, models, culture, etc.
I am really interested to see the result of this experiment.
I think the underlying models are extremely plausible, with the next bullet point as a possible exception.
I am aesthetically very skeptical of phrases like “absolutely reliable” (in Problem 4). I don’t think it’s possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.
I don’t buy claim 4, “It does actually require a tyrant”. I agree that it isn’t always possible to achieve consensus. I don’t think that hierarchical authority is the only way to solve that problem. Democratic Centralism is a well-tested alternative, for instance.
I find the code of conduct worrisome, at least as presented. The rules seem likely to encourage hypocrisy and dishonesty, since they make psychologically implausible demands which in many cases are undetectable at time of infraction. This could potentially be mitigated by norms encouraging confession/absolution for sins, but otherwise I expect this to have corrosive effects.
I am totally uninterested in joining the experiment, despite my interest in its outcome. I would be likely to be interested in substantially more time-boxed activities with similar expectations.
“norms encouraging confession/absolution for sins” is a somewhat … connotation-laden … phrase, but that’s a big part of it. For instance, one of the norms I want to build is something surrounding rewarding the admission of a mistake (the cliff there is people starting to get off on making mistakes to get rewarded, but I think we can dodge it), and a MAJOR part of the regular check-ins and circles and pair debugs will be a focus on minimizing the pain and guilt of having slipped up, plus high-status people leading the way by making visible their own flaws and failings.
+1 for noticing and concern. Do you have any concrete tweaks or other suggestions that you think might mitigate?
Also: “absolute” is probably the wrong word, yeah. What I’m gesturing toward is the qualitative difference between 99% and 99.99%.
I am aesthetically very skeptical of phrases like “absolutely reliable” (in Problem 4). I don’t think it’s possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.
[…]
Also: “absolute” is probably the wrong word, yeah. What I’m gesturing toward is the qualitative difference between 99% and 99.99%.
There’s definitely a qualitative shift for me when something moves from “This is very likely to happen” to “This is a fact in the future and I’ll stop wondering whether it’ll happen.”
While I think it’s good to remember that 0 and 1 are not probabilities, I also think it’s worthwhile to remember that in a human being they can be implemented as something kind of like probabilities. (Otherwise Eliezer’s post wouldn’t have been needed!) Even if in a Bayesian framework we’re just moving the probability beyond some threshold (like Duncan’s 99.99%), it feels to me like a discrete shift to dropping the question about whether it’ll happen.
I think that’s a fine time to use a word like “absolute”, even if only aesthetically.
Yeah, there’s some switch from “am maintaining uncertainty” to “am willing to be certain and absorb the cost of an unpleasant surprise.” Or from “would not be surprised by failure” to “have decided to be surprised by failure.”
Those sound like good ideas for mitigating the corrosive effects I’m worried about.
My personal aesthetic vastly prefers opportunity framings over obligation framings, so my hypothetical version of the dragon army would present things as ideals to aspire to, rather than a code that must not be violated. (Eliezer’s Twelve Virtues of Rationality might be a reasonable model.) I think this would have less chance of being corrosive in the way I’m concerned about. However, for the same reason, it would likely have less force.
Re: absolute. I agree that there can be a qualitative difference between 99% and 99.99%. However, I’m skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully. (Again, this may still be just an aesthetic difference, since your proposed system does seem to have fault-tolerance and graceful degradation built in.)
However, I’m skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully.
On the other hand… look at what happens when you simply demand that level of reliability, put in the effort, and get it. From my engineering perspective, that difference looks huge. And it doesn’t stop at 99.99%; the next couple nines are useful too! The level of complexity and usefulness you can build from those components is breathtaking. It’s what makes the 21st century work.
I’d be really curious to see what happens when that same level of uncompromising reliability is demanded of social systems. Maybe it doesn’t work, maybe the analogy fails. But I want to see the answer!
to see what happens when that same level of uncompromising reliability is demanded of social systems
Who exactly will be doing the demanding and what would be price for not delivering?
Authoritarian systems are often capable of delivering short-term reliability by demanding the head of everyone who fails (“making the trains run on time”). Of course pretty soon they are left without any competent professionals.
Do you have examples of systems that reach this kind of reliabilty internally?
Most high-9 systems work by taking lots of low-9 components and relying on not all of them failing at the same time. I.e. if you have 10 systems that are each 95% reliable and fail completely independently, and you only need one of them to work, the combined failure probability is 0.05^10 ≈ 1e-13, i.e. roughly thirteen nines of reliability.
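(A quick illustrative sketch, plain Python with made-up numbers, of how independent redundancy stacks nines:)

```python
# Reliability of "at least one of n independent components works",
# given each component's individual reliability.
def combined_reliability(component_reliability: float, n: int) -> float:
    # The system only fails if all n components fail at the same time.
    return 1 - (1 - component_reliability) ** n

# Ten independent 95%-reliable components, only one needed to succeed:
print(combined_reliability(0.95, 10))  # ~0.9999999999999 (roughly thirteen nines)
```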
Expecting a person to be 99% reliable is ridiculous. That’s like two sick days per year, ignoring all other possible causes of failing to complete a task. Instead you should build systems and organisations that have slack, so that one person failing at a particular point in time doesn’t make a project/org fail.
Well, in general, I’d say achieving that reliability through redundant means is totally reasonable, whether in engineering or people-based systems.
At a component level? Lots of structural components, for example. Airplane wings stay attached at fairly high reliability, and my impression is that while there is plenty of margin in the strength of the attachment, it’s not like the underlying bolts are being replaced because they failed with any regularity.
I remember an aerospace discussion about a component (a pressure switch, I think?). NASA wanted documentation for 6 9s of reliability, and expected some sort of very careful fault tree analysis and testing plan. The contractor instead used an automotive component (brake system, I think?), and produced documentation of field reliability at a level high enough to meet the requirements. Definitely an example where working to get the underlying component that reliable was probably better than building complex redundancy on top of an unreliable component.
Yeah. I’ve got a couple brilliant and highly capable friends/allies/advisors who also STRONGLY prefer opportunity framings over obligation framings. I think that’s one of the things where the pendulum has overcorrected, though—I think the rationality community as a whole is rather correctly allergic to obligation framings, because of bad experiences with badly made obligations in the past, but I think we’re missing out on an important piece of the puzzle. You can run a successful thing that’s, like, “we’ll do this every week for twelve weeks, show up as much as you like!” and you can run a successful thing that’s, like, “we’ll do this if we get enough people to commit for twelve weeks!” and I think the two styles overlap but there’s a LOT of non-overlap, and the Bay Area rationalists are missing half of that.
“we’ll do this if we get enough people to commit for twelve weeks!”
I actually totally buy this. There are some things where you just have to commit, and accept the obligations that come with that.
My hesitation primarily comes from the fact that the code of conduct seems intended to be pervasive. It even has requirements that happen entirely inside your own mind. These seem like bad features for an obligation-based system.
My model is that obligation-based systems work best when they’re concrete and specific, and limited to specific times and circumstances. “Commit to performing specified activities twice a week for twelve weeks” seems good, while “never have a mental lapse of type x” seems bad.
That makes sense, yeah. I’m hoping the cure comes both from the culture-of-gentleness we referenced above, and the above-board “Yep, we’re trying to restructure our thinking here” and people choosing intelligently whether to opt in or opt out.
Good place to keep an eye out for problems, though. Yellow flag.
Edit: also, it’s fair to note that the bits that go on inside someone’s head often aren’t so much “you have to think X” as they are “you can’t act on ~X if that’s what you’re thinking.” Like, the agreement that, however frustrated you might FEEL about the fact that people were keeping you up, you’re in a social contract not to VENT at them, if you didn’t first ask them to stop. Similarly, maybe you don’t have the emotional resources to take the outside view/calm down when triggered, but you’re aware that everyone else will act like you should, and that your socially-accepted options are somewhat constrained. You can still do what feels right in the moment, but it’s not endorsed on a broad scale, and may cost.
it’s fair to note that the bits that go on inside someone’s head often aren’t so much “you have to think X” as they are “you can’t act on ~X if that’s what you’re thinking.”
This framing does bother me less, so that is a fair clarification. However, I don’t think it applies to some of them, particularly:
will not form negative models of other Dragons without giving those Dragons a chance to hear about and interact with them
True. Updated the wording on that one to reflect the real causality (notice negative model --> share it); will look at the others with this lens again soon. Thanks.
I applaud the experiment, and the writeup! Do you have a place where you’ll publish metrics (people contacted, interest level, etc. before starting, and self-reported or objective measures of your stated objectives every week)?
That’s not been formally set, but yes—that’s the biggest ask we get from outsiders interested, and it’s clearly one of the “obvious things” that we ought to do, so it’s been part of the plan for a while now. We just have to hammer out the details once the group is set.
Depending on interest, we may publish those updates here on LW, or make them available through my blog or FB, or some other option we haven’t thought of yet.
From the skeptical side, I would strongly suggest committing to a publicly visible schedule for updates, reports on transitions (e.g. out of bootcamp), and a final report. The outside world would be well served by knowing how this turns out, and having a schedule which is evidently independent of considerations such as “is this currently going well” would do a great deal to reassure us that we will know in time.
I do note that, while I’d like to collect data and make that data available to other humans trying to do cool stuff in the world, I’m not particularly concerned with assuaging all skeptics/reassuring people who, from the outside, think that it’s bad. This post is sort of my one big push to do that, after which I planned to shrug and just let people make the judgments they’re gonna make.
A schedule is still a solid structure just along the “do this properly” axis, though.
That’s absolutely fair. The point I’m trying to make is that it’s not about publishable results either way. Like, yes, I’d like to ship useful information to the outside world, but that’s a distant second priority to making good things happen on the ground.
What I do commit to is not making the choice to publish based on whether things are good or bad. I commit to publishing if and only if a) I have spare time and cycles, and b) there’s something useful for others to hear.
The only way there would be nothing useful to learn is if there was a complete failure due to circumstances outside of the influence of anyone involved, such as an earthquake that halted the plan. Even then a quick note to that effect would be of use.
0) This is not for me, not because of a bug in the proposed structure but because I don’t know you and don’t know any of the people recommending you. There are two people who immediately came to mind who, if they proposed this with themselves in your place, I would join up with over most alternatives, and three more I would probably follow into something like this over my current situation.
1) You can’t name something Dragon Army and not expect nerd pedantry, but this is pedantry with a point behind it. Dragon Army (in the book) distributed leadership down as much as possible. Each toon leader had more degrees of freedom from Ender’s plans, each toon had a second who was expected to make decisions, and soldiers were more free to question their toon leaders. I know Dragon Army (the name) has a certain positive association in rationalist circles, but what you’re describing sounds more like Salamander Army. This is meant as nerd pedantry more than disagreement with your proposed goals or metrics (Salamander was doing really well in the standings after all) but the difference between Salamander and Dragon hierarchy seems important in this context. Dragon Army won by having a dozen good commanders all thinking at once, Salamander won by having one or two good commanders and being able to expect sharp obedience from everyone under them.
2) The second highest value change (Highest is brought up in point 0) would be some form of “I Told You So” and accountability. I find I am much happier to submit to doing things I think are incorrect if my dissension has been recorded and I can point at it later. Something like an internal prediction market is probably overkill and would erode confidence in leadership in a bad way, but a norm where someone could say “I’m 70% confident this treehouse won’t support enough weight if we nail it like that” and someone quickly sticks that in a google form might be fast enough not to interrupt things. This may or may not help with general cohesion or be relevant to the people who are actually probably joining.
This is sort of related to how often “sure, I’ll do it the way you said as long as I have it in writing that I think it’s dumb” has saved me by covering my rear, it also provides an important check on an incompetent leader, but mostly I’d want it because then the nagging thought “this is a bad idea” is out of my head and I can forget about it for a while. It’s sort of like singing a song out loud sometimes stops it being stuck in your head.
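(To make that concrete, here’s a minimal, purely hypothetical sketch of what such a lightweight prediction log plus calibration bookkeeping could look like; the names and the choice of Brier scoring are placeholder assumptions, not a claim about what the house would actually use:)

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Prediction:
    author: str
    claim: str
    probability: float              # stated confidence that the claim is true
    outcome: Optional[bool] = None  # resolved later: did the claim turn out true?

@dataclass
class PredictionLog:
    entries: List[Prediction] = field(default_factory=list)

    def record(self, author: str, claim: str, probability: float) -> Prediction:
        entry = Prediction(author, claim, probability)
        self.entries.append(entry)
        return entry

    def brier_score(self, author: str) -> float:
        # Mean squared error between stated probabilities and outcomes; lower = better calibrated.
        resolved = [e for e in self.entries if e.author == author and e.outcome is not None]
        return sum((e.probability - e.outcome) ** 2 for e in resolved) / len(resolved)

# Hypothetical usage:
log = PredictionLog()
entry = log.record("Alice", "This treehouse won't support enough weight if we nail it like that", 0.70)
entry.outcome = True  # the treehouse did, in fact, fail
print(log.brier_score("Alice"))  # ~0.09
```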
3) “Internal economy trading effort for money and so on”
Can I pay someone to do my lateness-apology push ups for me?
That’s a joking example, but given the likelihood of having large income discrepancies something of that nature may come up, and it might be worth having a framework for it. In the same ballpark, intense cooperation seems like it might be odd in non-DA associated things. Examples: what happens if one member applies for a job at a company another member works for? What happens if one member commits a crime and asks other members to be their alibi? I don’t really expect either of those examples to actually come up, but they are examples where organizations structurally similar to what you’re proposing can do very well for their members in ways that maybe aren’t good for the surrounding social structures.
4) If I knew that this general sort of setup was working well for all concerned, I wouldn’t consider it lasting indefinitely with the same leader to be a bad thing. That said, since you stated an intention to only lead it for about a year, ‘temporary’ leaders leading indefinitely is pretty strongly associated with this general sort of setup no longer working well for all concerned. If this started today, and you were still leading it in two years, I’d take that as evidence something has gone wrong. This gets lessened greatly if individual people are regularly rotating out of the group and all have wonderful praises for it.
All of the above is even more true for romantic/sexual relations between the leadership and the rank-and-file.
5) I’m strongly in favour of this being tried, and I’ll be reading any updates with great interest. Good luck!
1) Yeah, I’m emphasizing the more authoritarian parts, because those are the more dangerous/aversive ones, but in fact Dragon Army is the source of the aesthetic. I agree with almost everything you said in 1), and that’s what the house is supposed to be like. Don’t forget, though, that while Ender distributed authority as broadly as possible, he was firmly, absolutely in command, in the end. When he spoke, they moved. The key thing was that a) he used that as rarely as possible and b) he didn’t undercut his toon leaders when he exercised central authority.
2) Yeah, absolutely. We’ve already installed a norm of making direct, one-to-one bets, and are almost certainly going to install prediction markets and “I told you so” structures. In particular, I think the people originally opposed to a given failed experiment should be given greater weight in the next decision, if their predictions about that experiment came true. It’s tough to balance this against “creating perverse incentives,” but I think we can manage it.
3) Yes. It’s tricky, because we have to work out rates-of-exchange between e.g. rich and poor participants, but an internal economy is something I hope to create with second-priority urgency (i.e. in the first couple of months).
4) I’m not committed to ceasing after a year, if all is going swimmingly, but essentially I want to open that question up to the group itself after six months.
My curiosity is satisfied by your answers to 2-4, but I want to dig a little deeper into 1 if you don’t mind.
The source of the aesthetic is Dragon Army but emphasizing Salamander since those are the pieces more likely to be found off-putting makes sense to me. If someone’s on the fence, they probably shouldn’t go forward. That said, you may have overemphasized your ideal here. Ender was not firmly, absolutely in command; his toon leaders took up body-guarding him over his direct objections in a way that they wouldn’t have for a more authoritarian commander. Would you consider such a mutiny to be a sign you’d failed, or a sign you’d succeeded? (I strongly don’t expect body-guarding to be relevant, but I can imagine similar well-intentioned disagreements.)
Also, since you are changing the emphasis like this I wonder what your plans are for any Nikolai Delphikis* or Beans** that wind up involved? “Screen or vet people carefully so we don’t have any” is noted as probably a good idea, but is also insufficient.
*By Nikolai, I mean someone who would be happy following a confident leader, but feels out of their depth being expected to constantly adapt without sufficient direction. A potentially good Salamander member who read the Salamander description, and was surprised by the Dragon direction it took. Maybe even someone who looks very Dragon-like in most situations, but finds themselves the least improving member of what you set up. On the one hand, if you’re pulling from the rationalist population this seems an unexpected direction to find errors in; on the other hand, I have unexpectedly had the experience of finding myself the slowest and least agenty person in a group, and it was demoralizing in a way that made me empathize with the fictional Nikolai.
**By Bean, I mean someone who gets involved expecting more degrees of freedom or a higher position on the hierarchy than they wind up with. Bean put himself in Dragon Army knowing he was coming right out of launch, knowing he was small, and knowing Ender would have no reason to pay particular attention to this particular rookie, and then got upset that he wasn’t given any authority or special notice. If you have at least fifteen people not counting yourself or your second, I’d be willing to make a 1:1 bet that you are going to wind up with someone wanting more degrees of freedom or more authority than you want to give them.
I actually take the text of Ender’s Game pretty seriously as a model; I think it offers a lot of good perspective on human morality and interaction. So I actually have the example of the toon leaders bodyguarding Ender as a salient … um … parable? … in my head already, and would view that as a sign I’d succeeded.
We’ve already got a Bean; his name is Eli Tyre. His position as second-in-command didn’t exist through the whole eight months of planning this; it was created 12 hours before I posted the charter. Similarly, the more credible responsibility others can take, the less I have to do myself; the only block here is credibly believing that the people taking power will do the right thing on all the levels of meta, or setting up scaffolds such that damage-from-mistakes is minimized and survivable.
As for Nikolais, the first priority is the sign of the derivative (are you progressing positively), the second priority is the derivative (is your progress steep), and a distant, distant third is your actual position (are you in fact now good at X). A major part of the point of the house is to make everyone, myself included, feel a bit like Nikolai? i.e. we want everyone to be at the edge of their growth. But similarly, we want every Nikolai to have a Bean … hence the tight-knit, do-things-together, check-in one-on-one social structure.
I … think that answered your questions? Let me know if I missed something important.
One major caveat I think is that it’s a structure that wouldn’t work for most people in the rationality community. Calling most of them libertines incompatible with such a strict framework wouldn’t be too far from the truth. But those are the views of a very distant outsider who doesn’t know the deeper views/feelings of the Berkeleyans you refer to, and is only familiar at a superficial glance.
But for a niche group of strongly driven baby rationalists lacking for direction/purpose who aren’t opposed to operating within a strict structure, I don’t know how this wouldn’t be an ideal framework to use.
As a former military enlisted, I think all the military comparisons made are valid. Allow me to include one more. I believe that also like the military, there will be a high turnover rate—once people get what they want out of the community, they leave. As I allude to earlier, the appeal of joining is acquiring skills in discipline/organization/direction. Once those are acquired, there is very little left to motivate people to stay. But, in both cases, this isn’t really a bad thing either. If everyone leaves after the one year commitment, but they reflect on the experience positively, then it would still be considered a success.
Yeah. In most-but-not-all of my conceptions of the house, I imagine “leaving” the post of guy-in-charge after a year, if not six months. Maybe not leaving the context as a whole, but “turning over” as far as roles are concerned.
It’s hard to go from being the boss of someone to being their subordinate, and vice versa. I think it’s more plausible to shift into an advisory, strategic, consultant, or executive role rather than swap.
Sounds awful to me. I would absolutely hate to live somewhere where I was regularly told what to do and/or expected to fit in with rituals. I tolerate this kind of thing at work because I have to.
What will you say when people come to you saying “I’m not sure this is really worth it for me”? I personally don’t think self-improvement is a very stable overall goal. In my cursory acquaintance, most cults/high-demand living situations tend to believe in “something greater”—often something quite ridiculous, but nonetheless something bigger than the individual. Perhaps it is important to have something which seems to trump feelings of personal discomfort.
Basically what I tell people (in answer to 2) is “ABSOLUTELY trust that instinct. This requires pretty high confidence that this is the right move, and DEFINITELY high confidence that if it’s the wrong move you won’t take significant damage over the six month period. If you’re unsure, the answer should be ‘no.’”
The most common cause of the collapse of high investment intentional communities is romantic drama.
(Maybe the Dragon Barracks are so obviously a boy thing that you’re taking for granted that there will be no girls in the house, but all the weird non-gendered pronouns like “a Dragon will brush its teeth” imply either an attempt to have a team composed of both men or women, or else a hilarious level of contempt for the agency of your space monkeys. I’m going to assume that you’re imagining mixed gender living arrangements rather than already starting with verbal de-personalization of presumed uniformly male space monkeys...)
So anyway, assuming men and women in the house at the same time, that’s what usually causes things to collapse in the long run.
The two standard failure modes are Bonobo egalitarianism that collapses due to the accumulation of residual jealousies over time or else a harem forms around the charismatic cult leader (which isn’t necessarily a failure mode… it is just a sign of a cult leader whose stated community goals are a load of hypocritical baloney compared to the real goal of getting more than his “fair share” of tail—cue the Limp Bizkit song).
There are lots of patches for this sort of thing that have historically worked for various kinds of communities. Requiring celibacy is an obvious one that monasteries often use. Disallowing any romantic statuses except “single” and “closed dyadic marriage” (with a managed “courting” status to mediate the one way transition) is another standard trick.
Whatever the rule is, the standard enforcement mechanism is “ostracism” because the real problem from a social engineering perspective is the accumulation of complicated feelings that slow and redirect the workings of the social machine away from its stated purposes and towards managing the wreckage of new and old love triangles. If you throw away the cogs that are liable to have “complicated feelings” and replace them with non-complicated cogs… then the machine should continue to run as designed?
(I think maybe the romantic mores that were junked in the US in the 1960′s arose in the first place because villages are kinda like auto-poetic intentional communities. The pragmatically useful norms of village romance, that kept the village from exploding, could be semi-safely junked because (well, obviosuly “the pill” but also because) cities are anonymous and moderately well mixed… essentially everyone in a city is already pre-ostrasized by everyone else, and we each are desperately struggling to create a synthetic village-like community despite the isolating forces of urban mixing. In an already chaotic urban romantic economy a divorce causing additional minor lesioning of the local social graph is like a dust devil in a hurricane. There might actually be a lot of dust devils caused by hurricane turbulence for all I know, but I’m pretty sure no one cares much because the actual hurricane make them irrelevant.)
Anyway, for the above reasons, you might want to just say “this is a fraternity and if women want to start a rationalist sorority that can be a separate thing”. Alternatively, think about romantic norms up front.
One idea that is probably necessary but not sufficient is for the Commander (and anyone else with any authority in the house) to have an absolute commitment not to sleep with anyone else in the house.
Edit: with this rule, a different/earlier version of me might have been interested. Without it I would never be.
Possible advantage of this solution: I’ve noticed that male bonding gets a lot easier when a group goes from being “almost all guys” to “all guys”. (I imagine it would get easier still if you are regularly doing testosterone-elevating things that require coordination with your group of guys, the way sports teams, armies, fraternities, and heavy metal bands do. I suspect men have a pack hunting instinct that gets activated in circumstances like these.)
Data point to the contrary: I spent two years in a closed military unit with 44 guys and 5 girls (in Israel). Each of the girls went through at least a couple of in-unit boyfriends at the time, but that wasn’t a major source of drama. It took quite a bit of suffering to forge the unit bonds (a 4-month combat boot camp to start our service), but by the end of it, people cared about “the unit” as a whole more than about personal drama. I certainly can’t imagine that the “bonding” could have been any stronger without the girls there.
And one final point of support for DA: while I was living in a closed barracks, with five girls, a huge workload, strict rules and significant barriers to exit, I read Ender’s Game and thought “this is exactly like my life, and it’s awesome”.
I agree with some of the critics here that Duncan is overconfident in his ability to make this work. I also agree that there’s a limit to how much you can learn from a work of fiction about space monkey superchildren. But a lot of the criticism here is even more overconfident, and it comes from people who never lived in DA-like situation in their lives so all the evidence they’re basing their criticism on is fictional.
It’s especially worth noting that the group is highly competent and self-selecting for the environment, too, so we’re likely to respond in the same way you did (i.e. if we want to say that your experience “beat outside view,” then we’re pretty well set up for ours to beat outside view similarly, even if that outside view is somewhat unpromising).
I’ve been going off statistics which, AFAIK, aren’t fictional. Am I wrong in my assumption that the military, which seems like a decent comparison point, has an above average rate of sexual harassment, sexual assault, bloated budgets, and bureaucratic waste? All the statistics and research I’ve read suggest that at least the US Military has a lot of problems and should not be used as a role-model.
Counterpoint:
Personally, I don’t think that the military helps. The claim is implausible as personality traits are pretty stubborn things. Anecdotes are definitely confounded as militaries these days can be selective (literally administering IQ tests), and young men who enlist will mature as a simple matter of time. Military-style boot camps are one of the juvenile justice interventions we can say don’t work well or maybe at all (“Preventing future offending of delinquents and offenders: what have we learned from experiments and meta-analyses?”, Mackenzie & Farrington 2015) despite being aimed at the ‘youngsters’ who ought to most benefit from not being ‘fuckups’ and being aimed much more explicitly at that goal with a lower bar of success. And the natural experiments I know of like the Vietnam War draft lottery show permanent large harms to income from being drafted (most famously, Angrist 1990), which is certainly not what one would expect from a magical organization which turns fuckup civilians into reliable soldiers and explains why super-competent soldiers have such difficulty comporting in & reintegrating into a civilian life of tragic incompetence everywhere.
Some confounds/conflations in the above? Like, I agree with the truth value of the specific examples you’ve cited, but I think I disagree with the implicit claim that they’re necessarily entangled with the thing Kaj is quoting.
e.g. yes, juvenile military institutions don’t prevent people from being delinquent or discourage future criminality, but that’s not to say that they don’t cause those people, while embedded, to be reliable for object-level tasks and deadlines.
Similarly, the absolute horror and chaos that was Vietnam War combat, and the subsequent shredding of the psyches of people who didn’t volunteer to be there, seems fundamentally different from e.g. modern duty on an aircraft carrier or WWII quartermastering. It doesn’t seem incoherent or contradictory to say both [military culture promotes reliability] and also [being drafted in Vietnam screws you up, military schools don’t fix teenage delinquency].
I also note that both examples cited talk about people who don’t self-select in, which—if relevant—wouldn’t surprise me.
I think “implausible because personality traits are pretty stubborn” is an overconfident statement—personality traits are pretty stubborn, but being thoroughly embedded in a culture that forces you to practice certain skills and surrounds you with coherent social pressures is also pretty stubborn. And in point of fact, while within that context, culture clearly dominates over personality traits, whatever else happens afterwards.
If I’ve misunderstood your claims, please forgive and correct—I feel like I might’ve missed your crux.
Duncan’s comment already touched upon this, but just to highlight it: both of your cited studies are about situations where people were literally forced to join against their will; the Vietnam example additionally has those people exposed to the horror that was Vietnam. Being forced to join something against one’s will tends to make people very resistant against the norms advocated there, and even to actively behave in the opposite way as soon as they get out of there. (I’m reminded of all the kids who decided, for many years afterwards, they want to have nothing to do with sports or exercise because they had to suffer through school gym class.) It’s not a condition where you’d even expect to get much of the internalized pride in the group norms, and desire to act accordingly, that was discussed in my quote.
I get that you picked those studies to combat the confounding from selection (both in the military screening its candidates and the candidates themselves self-selecting), but the context of this discussion was “is Dragon Army a good idea”. Dragon Army participants are also going to be both self-selected and heavily screened for suitability, so whether or not this kind of an intervention would work for the population at large isn’t actually the question we’re interested in.
An actual military has life-and-death work. This might even be more important than consent.
A military-style “boot camp” for delinquents is a cargo cult by comparison.
Unfortunately I think at this point the discussion can only go towards a back and forth on what is good and bad about the military, which can’t be very profitable, and this kind of debate has gone on for so long already that it’s embedded into popular culture. It’s also very heavily culture-warish.
Clearly, the military is adapted for one task, which requires an extraordinary amount of dependability and low likelihood of failure. There’s also an extraordinary cost for that low likelihood of failure, which encompasses the things you pointed out. I don’t think any society has survived very long being converted into 100% military culture, nor has it survived getting rid of it completely.
Maybe a low likelihood of the particular kinds of errors it optimizes against, but not a low likelihood of failure in general. An above average rate of sexual assault is a sign of failure.
Losing track of money in the middle of a war that might go to anyone is also a failure (https://www.theguardian.com/world/2007/feb/08/usa.iraq1).
The NSA lost their cyber-weapons (maybe to Russian spies) and now you have civilian targets like hospitals getting attacked because they didn’t do their OPSec properly.
The US military accidentally bombs hospitals.
Romantic entanglements and their fallout are not ruled out by all male environments even if the members do not identify as homosexual. So still important to consider these issues even if there are no women at all.
Can confirm. I was in a fraternity in college with many gay members, some of whom occasionally hooked up and caused manageable levels of drama. This was a relatively recent phenomenon in the history of the fraternity; I think as recently as 10 years before my time nobody was out, and then some people came out after joining.
Currently there are both men and women interested (though many more men than women).
All of your points above seem sound at first glance, and yes, it’s on the docket to be sorted out. I don’t think I want to go full monastery, but there’s a decent chance the house itself will end up being activity-restricted in some way.
Thanks for the detailed model-sharing.
I want to add a strong “romantic entanglements are a big risk” voice.
My worst experiences with rationalists (and possibly some of their worst experiences with me) were when romance/sex conflict came up. It turns out people are really bad at being rational when that happens. (This was exacerbated by a lot of people being inexperienced, which may or may not be the case in Dragon Army, but it makes sense that romance and sex drive are things that just overwhelm the prefrontal cortex.)
I’m glad the model was deemed useful :-) Good luck.
1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that’s more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.
2) I don’t think I’ve met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc. The main reason this makes me uncomfortable is that I don’t see you owning this desire anywhere in your long post. Like, if you had said, just once, “I think I would enjoy being a leader, and I think you might enjoy being led by me,” I would feel calmer. Instead I’m worried that you have convinced yourself that you are grudgingly stepping up as a leader because it’s necessary and no one else will. If you’re not being fully honest about your motivations for nominating yourself to be an authoritarian leader, what else are you hiding?
3) Your post has a very high ratio of detailed proposals to literature review. I would have liked to see you discuss other group houses in more detail, make reference to articles or books or blog posts about the theory of cohousing and of utopian communities more generally, or otherwise demonstrate that you have done your homework to find out what has worked, what has not worked, and why. None of your proposals sound obviously bad to me, and you’ve clearly put some thought and care into articulating them, but it’s not clear whether your proposals are backed up by research, or whether you’re just reasoning from your armchair.
4) Why should anyone follow you on an epic journey to improve their time management skills if you’re sleep-deprived and behind schedule on writing a blog post? Don’t you need to be more or less in control of your own lifestyle before you can lead others to improve theirs?
As someone who knows Duncan moderately well in person and has been under his leadership in a few contexts (CFAR instructor training and the recent Dragon Army experiment), I can confirm that this is nowhere close to true. What Duncan is hungry for is for the world to be better, and he thinks as a contingent fact that being the chief of this particular tribe is the best way for him to do that. I agree with Duncan’s assessment of himself that if someone else stepped up to do the thing he would breathe an enormous sigh of relief, rather than be in any way jealous.
It depends on how urgent you think Duncan thinks having this blog post out sooner rather than later is. If Duncan were optimizing for looking like he has his shit together he could have either just not mentioned that he was sleep-deprived and behind schedule, or he could have gotten more sleep and fallen further behind schedule. Instead he posted the blog post, and went out of his way to mention that he was sleep-deprived and behind schedule, because he is optimizing for something else.
1) Thanks.
2) Nope, you’re just way off (though I appreciate the candor). I thought about coming up with some sort of epistemically humble “maybe” or “I can see where you got that impression,” but it seems more advisable to simply be direct, and to sound as confident as I am. I’ve been a leader, and I’ve been a follower, and I’ve transitioned in both directions within the same contexts, and there’s no special draw there along any of the lines you laid out. In particular, I think the statement “this needs to happen, and no one else is going to do it” is actually true; if some contender wants to stand up and credibly claim they can pull this off better than me, I will IMMEDIATELY hand them the baton and breathe a sigh of relief—my actual favorite place to be is second or third in command.
Feel free to PM me if you’re actually curious about my history, or to poke around my reputation within the community, or to ask any of the dozen or so people who’ve worked with me for a couple of years, or the twenty people who attended the dry run experiment last week (I can point you in their direction more specifically, also through PM).
(I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won’t make any deliberate effort.)
3) I think you and I might disagree fairly strongly on the importance/value/worth of “the literature” in this arena. Part of the whole point here is that I have a solid inside view, developed from a unique set of experiences, which says that a lot of other people are doing it wrong. I think there’s some value in literature review (e.g. the sources that Benquo listed up above seem worth at least an afternoon’s perusing), but in three separate fields I’ve found that my idiosyncratic ideas that everyone said contradicted the literature and wouldn’t work did, in fact, work, and produced excellent results; I’m not actually convinced that there’s enough EV to justify more than a quick, 80/20 skim of the available info. I’m currently reasoning from my armchair—that’s a fair point. But also the whole screed is “let’s get down to the business of running experiments and gathering data,” and I note again that we did already do a test weekend that gave promising preliminary support to a lot of my models and claims.
4) Another quite sound/reasonable criticism, taking the outside view with no priors to add detail to your model. In point of fact, though, it’s been a 90th percentile unusual month (I’m the curriculum director in an org that just ran its most ambitious sprint of events to date, including bringing in a round of new employees whose training I was almost entirely responsible for, and then since that ended I’ve been churning hard on this project), and it’s not particularly strong evidence about other months. Also, I think it’s reasonable to posit that one needs to be more or less in control before leading others, but I note it’s not obvious—I can clearly envision (for instance) models in which one person sacrifices themselves to push everyone else forward. That’s not what I plan to do, but the picture isn’t as straightforward as a clever-sounding false equivalency.
Also, lastly, remember the house is supposed to help me, too:
I’m not the only one with skills, and a big part of it is creating a construct that I can use to level up and improve. The part where I impose structure is separate from the part where maybe I could leverage social pressure to improve my own workflow.
Can you point to some reasons why you believe that an authoritarian commune is a good idea (besides “let’s try and see what this button does”)?
“Who needs literature, I’m smarter than all of them” is a worrisome attitude. By the way, did you check what the literature actually said? In my experience what “everyone says” literature claims is usually NOT what the literature really claims.
What is the price for the experiment and who will pay it?
Er … I think the whole post above is all about answering your first question? I’m confused, and feel somewhat strawmanned by the summary “let’s try it and see what this button does.” Because high-commitment, high-structure environments have a long, long history of being actually productive and useful and net-good for a lot of the people that go through them, and ought to be in the toolkit despite their known failure modes, and given the rationalist community’s strong predilections towards individualism, prioritizing flexibility and following short-term motivation, and not committing to things, it seemed naive to expect that a high-commitment, high-structure environment would come into existence via committee. Note that, while not super emphasized in the post above, a major assumption is “if I’m right, I should be able to largely put down the baton six months in when the thing is clearly working,” i.e. it’s more about the structure than the authoritarianism specifically (the authoritarianism being simply a necessary catalyst imo).
The price for the experiment is largely distributed across its members; it’s the money involved in housing and whatever difficulty people suffer from giving up a not-insignificant-but-overall-fairly-small fraction of their agency and self-determination. It’s roughly analogous, I think, to the price one pays to become a black belt, only condensed down into six months rather than spread across several years.
As far as “who needs literature, I’m smarter than all of them” being worrisome—I’m okay with people being worried. Those people are being actively encouraged to influence things here, and also the whole system is based on iteration, and also I object to the strawmanning again (I’ve said more than once that there’s some value to be had there, but am being summed up as rejecting it entirely), and also I am, in fact, smarter than a lot of them. Not all, but a lot, and it’s been proven before in multiple domains, and I’d be an idiot to ignore that.
That wasn’t a summary of your position, that was a straw counterpoint for you to kick :-)
Well… it’s complicated. Such environments are good for producing tools for a purpose. Cogs in a machine, maybe, or mass-produced minds from the same mold, or even cannon fodder if you’re unlucky—note that the military is the prototypical “high-commitment, high-structure” institution.
Having tools is certainly productive from the point of the view of the purpose. And it is true that some (maybe many) people feel that being a tool gives you a purposeful life, better than being pointlessly adrift. But, as I said, it’s complicated :-/
Structure needs to be enforced—otherwise everyone could easily set up the needed amount of structure in their life themselves. The point of the exercise is, basically, “I will organize your life for you” and that doesn’t work in the no-stick all-carrot setups.
I guess the concept I worry about is responsibility: if you will organize my life for me, you become responsible for it while my responsibility diminishes.
That’s a good thing to be, but not necessarily to believe in :-D
In any case, I’m not saying you should do what the literature says, I’m saying you should know what the literature says, and not on the basis of hearsay either.
Yes. The price (I’m mostly speaking about things other than money) is uncertain, in statistical terms it’s a random variable with a particular distribution. The question is how far the tail stretches: how bad is the worst-case scenario?
Ah, gotcha. Thanks. =)
I think the point of the exercise is less “I will organize your life for you,” and more “we will reduce our ability to hide from one another, and therefore all be more likely to conform to our shared sense of that-which-is-endorsed.” The “I will organize” part is more “I will get us all together and turn on some of the relevant and hopefully-appropriate spotlights, and then moderate the discussion about which spotlights should turn back off.”
I have hopes that we can see the worst-case scenarios coming in time to avert them or eject, and that therefore the effective worst-case scenario is basically something like “I had a rough six months and have to find another room to rent again.”
Strong agreement with basically everything you say above.
Because in the real world there are many successful authoritarian organisations? More or less every company you’ve heard of is de facto authoritarian inside (sure, there are exceptions, too).
Because “our kind” seems to have bias against coordination, and an authoritarian leadership is a possible way to solve it?
Volunteers.
The issue isn’t so much “authoritarian” as it is the combination of “authoritarian” and “commune”.
Communes tend to be totalitarian and this one is explicitly set up as such (high-commitment, full-immersion, etc.) This makes it a dangerous environment—if people mention noticing the skulls, that’s because there are a LOT of skulls. “Authoritarian” means submission to the authority and in a totalitarian context that means total submission.
Authoritarian organizations like companies merely claim about 40 hours of your time per week plus obedience to a set of mostly external rules. And, of course, they pay you recognizing that their claim is a burden on you :-)
I understand where the impulse comes from: grassroots left is notoriously disorganized with the Occupy movement having been, perhaps, the peak of that—no leadership, no specific demands, lots of talking, zero achieved. But I would be a lot more comfortable with a “normal” goal-directed organization which focuses on external goals and not on molding the minds of its members. I’m very suspicious of mind-molding.
Besides, Duncan’s comments throughout the last week left me with grave doubts about his suitability to lead this kind of project. Low credence, of course, since I’m reacting merely to an internet persona and not to someone I know in real life, but my opinion of that persona took a marked turn to the worse.
Sure, it’s a possible way. I’m concerned with the cost / benefit ratio, though. Plus benevolent God Emperors are in short supply.
Cite? The kinds of communes my friends and acquaintances have lived in, haven’t seemed totalitarian at all.
Not in the sense that the secret police will check your underwear drawer for forbidden literature, but in the sense that they require conforming in more encompassing and more personal ways than the usual institutions of the society (like a workplace or a college, etc.)
Note that things which are basically shared living arrangements on a smaller or larger scale are sometimes called communes even though they don’t require active integration into the life of that mini-society—I don’t have those in mind.
And, of course, this totalitarianism is not a binary variable but an axis with, essentially, a solitary isolated individual at one end and a hive mind on another.
I agree that 4 is a concern.
I disagree about 2. After having (a) participated in the weekend experiment and (b) done some “back-channel” references on Duncan, my impression is that he hates the fact that leadership will isolate him from the group he really wants to be a part of. I expect that if the experiment is successful, Duncan will eagerly set aside leadership and integrate himself with the group.
I think the troll obliquely raised one good point with their criticism of the example for Rule 6:
Treating something like your sleep disturbances as your responsibility is fine if e.g. you (like me) have lots of trouble falling asleep and something like people whispering 15 metres from your room is keeping you from falling asleep. In that case, those people are doing everything right and really don’t know that they’re hurting you. It is unreasonable to get angry at them if you haven’t explained to them why their behaviour is bad for you.
Sometimes it’s less clear though. I sometimes use the microwave after midnight. I know that the microwave can be heard in my room and in my room mate’s room. When I use the microwave and think he might be asleep, I stop it before the timer finishes and it beeps loudly. There’s not much excuse to wait for my room mate to specifically request that I do this; I’m more than capable of figuring out a) the microwave beeping at the end is loud and the sort of thing that can disrupt sleep and b) there’s a way I can stop that from happening. It does show some failure of consideration if I were to shrug off the potential inconvenience that the microwave could present for my room mate for the slight benefit of not having to watch the microwave.
This points to one of the failure modes of Tell Culture, where people use it as an excuse to stop doing any thinking about how their actions can affect other people. This actually suggests that one potential house experimental norm could be something like “before taking an action that might affect another Dragon, pause and consider how it might affect them and whether the effect will be a net positive.”
What this all comes down to for me is that it seems unfair to ask people to assume goodwill without also asking them to always attempt to act with goodwill.
I like this comment, but I think what this and the original trollpost miss out on is that the LW community in general, due to having a lot of people with autism and sensory issues, has a ton of people who actually do NOT have "reasonable expectations of what other people want to guide their behavior". The OP quoted here is making a common typical-mind type error. Of COURSE it's better to live with people who intuit your preferences and act in accordance with them without being told what they are. But it's obnoxious to shit on attempted solutions to a problem by insisting that morally good people could never have the problem in the first place.
Agreed. I have a bunch of social anxiety and dislike it when a certain degree of social smoothness is treated as necessary to be sorted into the category of "good person".
My specific criticism is of people (and I don’t just mean other people; I’ve failed here before) who could (with ease, not with Herculean effort) intuit preferences but use Tell Culture or direct communication norms to completely avoid doing so. This is especially maddening if you have social anxiety, because you’re left anxious about bringing the thing up, especially to someone who seems so otherwise socially competent.
Thanks for the chance to clarify my views here!
Yeah, +1 for not “hiding” behind Tell Culture to save effort.
One of the fixes for the anxiety thing is Circling/Focusing/pair debugging culture, which goes a loooooong way toward both a) building the trust and safety required to bring up such issues with less anxiety and b) actually providing Schelling points for when to say it. We’re also doing a weekly retrospective where it’ll be low-cost and high-support to gently point at such things.
+1 to all of that, especially the last line.
-- A note: I originally sent Duncan criticism privately. I didn't want to add too much negativity to the discussion. But Duncan asked me to post publicly and I will defer to his judgement. It's his project and he is a very capable guy. I really hope DA succeeds; the rationalist community could be doing much better on many metrics. In general I find the model of DA very promising. But I have some serious concerns.
-- The ethics code seems extremely strict.
For example this rule strikes me as extraordinarily hard to follow: “A Dragon will assume good faith in all interactions with other Dragons”. As does “A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts”.
Earlier in the document Duncan said "Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings". This implies to me that Duncan intends to enforce the CoC pretty strictly. Should Duncan be confident it's reasonable to expect such large deviations from how humans normally operate? I should note that normal bootcamps do not require as much psychologically from their recruits. Even though bootcamps require obedience, they don't normally require recruits to think a certain way.
Duncan explicitly said he was willing to modify norms that members felt were too hard to follow ("Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly."). But he also said that the CoC was unlikely to change. If I thought the CoC was meant more as a set of guidelines than strict rules I would be less worried. But that is not how I interpreted the post.
-- How many people do we expect to leave or get kicked out?
I have moderated some internet communities (and admin an active one now). Temp bans and warnings can only go so far. At some point you have to be willing to pull the trigger and ban people.
The section on reparations reassured me that Duncan was thinking hard about how to keep people from falling off the path. In addition, unlike most internet communities, the DA recruits will be heavily vetted. But in order to enforce the reparations you either have to appeal to social pressure or to the threat of kicking people out. I think the standards are very strict, so serious discipline might be needed.
-- Are there practical or ethical problems with this plan?
People who get kicked out of DA are still required to pay rent until they can find a replacement. Assuming they are on the lease, it seems highly unlikely you can kick them out of the house. However, if someone gets kicked out of the house they might be pretty negative towards the rest of the group. It's probably a bad situation to keep them around, but maybe they can't easily find a replacement or a new place to live.
Secondly, people who get kicked out might be psychologically unable to remain at the DA barracks. But until they can find someone to replace them they are on the hook for rent. In my personal opinion, joining Dragon Army should be a "good deal" for anyone involved. It's important that the downside of "get kicked out" → "lose friends, need to find a replacement despite the fact that you got kicked out and maybe can't give DA a good review, on the hook for lots of rent" is manageable. I would really hate to see anyone get hurt. I assume Duncan shares my concerns, but he didn't address them in the post.
In addition, has Duncan looked into the legalities surrounding renter’s rights in California (and Berkeley in particular)? This isn’t in the post even if he has done the research.
-- Duncan said the following: "I also considered whether to update/change my tone given your first impression, but it seems to be enough of an outlier that I probably won't make any deliberate effort."
It's plausible to me they aren't much of an outlier. I had the same reaction, as did several people I showed Duncan's post to (though other people thought Duncan's post sounded fine). If I didn't know Duncan was the curriculum director at CFAR, I would have thought he was crazy and probably dangerous. Stuff about "living under my thumb", self-comparisons to Tyler Durden, and the Ender's Game quote about "quick, decisive obedience" really worried me. Some of the most shocking stuff, from my perspective, was in the pop culture references. But a number of things in the main text gave off an extremely strong cult vibe. Some examples include the "house salute" and the "various call-and-response patterns surrounding house norms". I should note I am not accusing Duncan of anything; based on his reputation he seems trustworthy. But his tone definitely set off loud alarm bells for me.
--
Again I am really happy people are considering new rationalist norms. Duncan seems like a very good choice to lead an experimental project. The general strategy of DA seems like a good one. But I wanted to share my concerns.
+1; general appreciation for your willingness to make the commentary public, so that I and others can interact with it openly.
EDIT: I got distracted dealing with the troll. I still hope to return to this comment, but if I fail to, please know that I am definitely mulling it over and taking its content seriously, and that I again thank you for posting.
In an internet community you have fewer tools to change behavior than in personal conversations (and I say that as someone who moderated a big personal development internet forum for years).
As far as personal development frameworks go, ideas like a "code of perfection" can be found in Landmark (/The Four Agreements). On the other hand, the actual verbal techniques advocated are NVC/Circling/Focusing/Internal Double Crux, which have values of authenticity and accepting the emotions that arise in the moment.
Humans sometimes do have instincts to see other people in bad faith. There are two ways to deal with it. ① Suppress it because you have a codex that doesn't allow the instinct to be carried out. ② Bring it authentically to the front and be open about it.
Landmarkish thought would advocate ① while Circling leads to ②. Both can work as cultural norms but they are different and if there’s a desire to be in Circling mode, don’t have rules that require the other.
I’m managing/leading an internet gaming community, and the only tools I’ve ever had to use are selection and conversation.
I’ve had one person leave because their goal in joining was to acquire enough information and power to cause harm and they were so unsubtle about it that I was able to identify that and stop them. One additional person left because our norms of ‘don’t cheat’ and ‘be nice to our friends’ were given to him gently by everyone in voice chat every time they were violated.
Oddly enough, both of those people ended up joining a specific competing group that held neither of the norms ‘don’t cheat’ nor ‘don’t make public rape threats towards people who call out your cheating’.
And my selection method? Be public and pushy about what kind of norms you have, and push away people who don’t already have and want to follow those norms.
This post is so thoroughly repulsive and disgusting that I made an account for the sole purpose of pointing out how transparently and obviously perverse this fucked-up proposal is. Naturally I don’t have any actual desire to be critical or rude; it’s just that nobody else is doing it, so because of my infinite kindness and charity (if you have any doubts, rest assured that my closest friends and colleagues will all attest to my beneficent nature), I find myself obligated to step up to the batting plate, so to speak. Ah, if only someone could release me from this great burden. If only.
The author seems to have missed the part of Ender’s Game about the protagonists being children. It’s generally not a good thing for adults to role-play as children (the reasons for which are, I hope, sufficiently obvious to not require elaboration). The dominant impression I get from this is that this resembles the antifa movement and the anti-antifa movement: it’s a bunch of immature adults LARPing but pretending that they aren’t doing so.
Note that despite the author's insistence on the validity of his experience as a CFAR instructor, he fails to actually point to any concrete benefits that people have derived from that instruction—plausibly because those benefits, when concretely stated without embellishment, are at best underwhelming. Note also that (1) dealing with problems arising from interpersonal romance is not mentioned anywhere in the post and (2) the comment that does point out the probable future existence of such problems receives what can at best be termed a cursory and dismissive reply from the author.
This suggests that, contrary to the author’s assertion of having amassed a diverse and broad range of skills, and contrary to whatever accolades his colleagues may see fit to place upon him, he hasn’t yet attained the level of social awareness of a typical American high school student. It also suggests that the author’s ability to model himself and to model others has more-or-less not yet attained the level of sophistication required to view people as more than one-dimensional. I.e., the post seems to suggest an attitude of “I, a good person, will find a bunch of good people, and we’ll make these good things happen”. I’m pretty sure I’ve met high school students with a more nuanced (and less optimistic) understanding of human nature.
Naturally, this would be excused if the Berkeley rationalist community were full of people who are actually good people and who tend to get things done. Let’s check: Qiaochu Yuan, one of the most mathematically sophisticated members, has to the best of my knowledge hit a dead end in his PhD, and is becoming a CFAR instructor in Seattle, which makes it seem as though he’s actually concretely worse off compared to the counterfactual in which the rationalist community didn’t exist; Eliezer Yudkowsky has shifted in the direction of posting practically-untrue, self-aggrandizing bullshit on Twitter and Facebook instead of doing anything productive; Arbital is best described as a failure; word is going around that Anna Salamon and Nate Soares are engaging in bizarre conspiratorial planning around some unsubstantiated belief that the world will end in ten years, leading to severe dissatisfaction among the staff of MIRI; despite the efforts of a very valiant man, people have still not realized that autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren’t actually women; CFAR itself is trending in the direction of adding bureaucracy for bureaucracy’s sake; my own personal experience with people branded as “CFAR instructors” has been extremely negative, with them effectively acting arrogant out of proportion to their competence, not to mention their below-average levels of empathy; there was that bizarre scandal last year in which someone was accidentally impregnated and then decided not to abort the child, going against what had previously been agreed upon, and proceeded to shamelessly solicit donations from the rationalist community to support her child; etc., etc., etc.
In effect, there seems to be some sort of self-deception around the fact that the Berkeley rationalist community is by almost all reasonable standards severely dysfunctional, with the best people actually being on the periphery of the community. It’s almost as if the author is coming up with the “Dragon Army” in an attempt to help everyone collectively delude themselves into believing they’re much better than they are, because he can’t bear to actually look at the Berkeley rationalist community and see it for what it is: a pile of garbage. Just like how a child from a broken family might imagine that everyone’s getting along. Unfortunately(?), flinching away from the truth doesn’t actually make reality go away.
Amusingly, it actually does seem as though the author partially realizes this. Let’s review the criteria which the author hopes the members of “Dragon Army” will fulfill after a year’s worth of cult membership:
“Above-average”? “Average”? Not exactly a high bar. “At least one employable mental skill, and at least one employable trade skill”? Is the correct inference here that the typical participant is actually expected to be not employable at all (i.e., deficient in both categories)? “First aid & survival”—if there was ever any doubt that this is actually just sophisticated childish role-playing… The fact that I (in contrast with the Berkeley rationalist community) have put very little directed effort into the meta-goal of self-improvement and nevertheless plausibly already satisfy 11 of these 14 criteria, with the other 3 not seeming particularly difficult to attain, is not a good sign!
Despite the fixation on “evolving norms” or whatever, the author seems to be particularly blind to what social reality is actually like and what actually makes communities get along. Consider, e.g., the following quote:
Let me pose a question to the reader of my comment: would you rather live in a house where you have to constantly verbally ask the other residents to stop doing things that they could have reasonably foreseen would bother you, or would you rather live in a house where people actually used reasonable expectations of what other people want to guide their behavior and therefore acted in a way that preempted causing other people irritation?
There are two inferences to be made here:
1. Members of the Berkeley rationalist community are particularly prone to using bureaucratic rule-setting as a way to compensate for their severely below-average social skills, and
2. Members of the Berkeley rationalist community are particularly low-empathy and embody the worst of individualism, such that they don't actually care whether or not what they're doing might bother others until they're told to stop.
In my personal experience, both inferences are correct. Ultimately, what this comes down to is a bunch of socially-inept losers with near-autistic social skills trying to attain the sort of basic social harmony that comes naturally to more competent people via a combination of bizarre mimicry and a mountain of bureaucracy. Naturally, and contrary to the author’s bizarre childish idealism, one can expect a hell of a lot of repressed irritation, interpersonal drama, and general unpleasantness from this experiment.
To top off the turd cake with a cherry, the author’s science fiction writing is trash:
Anyone who can vomit that out on a page and feel proud of it isn’t fit to lead or teach anything. Period. The world would be concretely better off if the author, and anyone like him, killed themselves.
PSA:
Do not feed trolls.
In ages past, vitriol like this would be downvoted into oblivion. This was out of recognition that norms of good discourse are more important than the content of arguments. Failure to abide by this spreads rot and makes good communal epistemic hygiene even more difficult.
I notice downvoting is disabled now. Which, sadly, means that people will be tempted to engage with this. Which reinforces a norm of having one’s dissent noticed by acting like an unapologetic asshole. Which burns the future of this garden.
So as a close second, I advise just thoroughly ignoring 18239018038528017428 unless and until they step up to meet more noble conversational norms. If there are good points to be made here, they should be converted into the truth-seeking style Less Wrong aspires to so that we can all engage with them in a more hygienic way.
I appreciate Duncan’s attempts to do that conversion and speak to the converted form of the argument.
But unless and until I see enough evidence to convince me otherwise, I assume 18239018038528017428′s intentions are not truth-seeking. I assume they are inflammatory and will not change via civil discourse.
Ergo, request to all:
Do not feed trolls.
PS: I will follow my own advice here and have no intention of replying to 18239018038528017428 unless and until they transpose their discourse into the key of decency. I expect them to reply to me here, probably with more vitriol and some kind of personal attack and/or attempt to discredit me personally. My ignoring them should be taken as my following my own policy. Note that if 18239018038528017428 does reply with vitriol, it will probably be in some way fashioned as an attempt to make my very refusal to engage look like confirmation of their narrative. Please filter your reading of any replies to my message here accordingly.
I’m the person who advocated most strongly for getting the downvote disabled, and I share some of 18239018038528017428′s skepticism about the community in the Bay Area, but I strongly agree with Val’s comment. There are already a ton of case studies on the internet in how fragile good conversational norms are. I’m going to email Vaniver and encourage him to delete or edit the vitriol out of comments from 18239018038528017428.
(Also ditto everything Val said about not replying to 18239018038528017428)
Thanks for that; I had already noticed this thread but a policy of reporting things is often helpful. It seemed like Duncan was handling himself well, and that leaving this up was better than censoring it. It seems easier for people to judge the screed fairly with the author’s original tone, and so just editing out the vitriol seems problematic.
With the new site, we expect to have mod tools that will be helpful here, ranging from downvoting making this invisible by default to IP-banning and other things that make creating a different throwaway account difficult.
For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)
Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of “tone” and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter’s opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider—a behavior pattern that doesn’t need more practice, IMHO.
(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)
I’m also curious to hear what made you update.
It’s true that sensitivity norms can have subtle effects on a conversation, but nastiness norms can too. If you look at the study cited in the “hold off on proposing solutions” essay, you can see a case where politicizing a topic restricts the space of ideas that are explored. (I think this is actually a more natural takeaway from the study than “hold off on proposing solutions”.) Nasty conversations also often see evaporative cooling effects where you are eventually just left with hardliners on each side. In general, I think nasty conversations tend to leave any line of reasoning that doesn’t clearly support the position of one side or the other under-explored. (This is a pretty big flaw in my opinion, because I think divided opinions are usually an indicator of genuinely mixed evidence. If the evidence is mixed, the correct hypothesis is probably one that finds a way to reconcile almost all of it.) Furthermore I would predict that arguments in nasty conversations are less creative and generally just less well thought through.
Here’s another argument. Imagine 18239018038528017428 showed you their draft comment minus the very last sentence. Then they showed you the last sentence “The world would be concretely better off if the author, and anyone like him, killed themselves.” Would you tell them to add it in or not? If not, I suspect there’s status quo bias, or something like it, in operation here.
Anyway, I think there are better ways to address the issue you describe than going full vitriol. For example, I once worked at a company that had a culture of employees ribbing each other, and sometimes we would rib each other about things other employees were doing wrong that would be awkward if they were brought up in a serious manner. I think that worked pretty well.
I just want to point out that Duncan did in fact put a tremendous amount of time into engaging with this critic (more time than he put into engaging with any other commenter in this thread, by my estimate).
My other comment should hopefully clarify things, as least with regard to politicization in particular.
To spell out the implications a bit more: the problem with political discourse, the reason it kills minds, is not that it gets heated; rather, it freezes people’s mental categories in ways that prevent them from making ontological updates or paradigm shifts of any kind. In effect, people switch from using physical cognition to think about arguments (modus ponens, etc.), to using social cognition instead (who wins, who loses, etc.). (Most people, of course, never use anything but social cognition in arguments; politics makes even “nerds” or “intellectuals” behave like typical humans.)
It is in fact possible for “heated” or even “nasty” discourse to be very information-rich; this makes sense if you realize that what counts as “nasty” depends on social norms. If you encounter discourse from a different social context (even, for example, simply because the speaker has misunderstood the social context and its norms!) you may read it as “nasty”, despite the fact that the author was specifically intending to communicate content.
Now, of course I don’t consider 18239018038528017428′s comment to be optimally worded—but then, I wouldn’t, because I didn’t write it. This is the important thing to understand: there is value to be had in getting detailed input on the mental states of people unlike oneself.
I agree that Duncan deserves positive reinforcement for engaging with this critic to the extent he did. But I think it was actually good for him epistemically to do so, not just as a demonstration of his willingness-to-bend-over-backwards, and thus, good social nature.
As someone who doesn't live in the Bay Area, has no intention of moving there in the near future, and resents the idea that anyone who wants to be part of what ought to be a worldwide rationality community needs to eventually move to the Bay Area to do so: I'm part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where my civilian identity and a track record of my behaviour are attached. There are closed groups or chats where things are less open, so it's not as damaging, and even if I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it's out in the open so I may face the full consequences of my mistakes.
I know lots of people mentioned in '18239018038528017428's comment. I either didn't know those things about them, or I wouldn't characterize what I did know in such terms. Based on their claims, '18239018038528017428' seems to have more intimate knowledge than I do, and I'd guess is also in or around the Bay Area rationality community. Yet they're on this forum anonymously, framing themselves as some underdog taking down high-status community members, when no criterion for that status has been established other than "works at MIRI/CFAR", and what they're doing is just insulting and accusing regular people like the rest of us on the internet. They're not facing the consequences of their actions.
The information provided isn't primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is expected as a primary, if not the only, purpose of discourse here. The primary purposes of '18239018038528017428's comment were to express frustration, slander certain individuals, and undermine and discredit Duncan's project without evidence to back up their claims. These are at cross-purposes with truth-seeking behaviour.
Nothing I do that gets policed in terms of tone on the basis of sensitivity is something '18239018038528017428' isn't also doing. While we're talking about norms of sensitivity, let's talk about norms for resolving interpersonal disputes. All the differences between how I and lots of others in the community do it, even if the tone we use isn't always splendid or sensitive, and how '18239018038528017428' does it, are what separate people who have a non-zero respect for norms from those who don't. And this is coming from me, a guy who lots of people think probably already flouts social norms too much.
I am unsympathetic to '18239018038528017428' and indifferent to whether they're censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong is that most people in online communities don't like seeing this sort of drama dominate discourse, and in particular there are lots of us who don't care for ever more drama from one zip code being all anyone pays attention to. That defies the purpose of this site, and saps the will of people not in the Bay Area to continue to engage in the rationality community. That's not what anyone needs. Since we've established '18239018038528017428' seems close enough to probably be part of the Berkeley rationality community already, there are plenty of channels like private group chats, mailing lists, or other apps where everyone involved can be connected, and user '18239018038528017428' wouldn't need to out themselves in front of everyone to use them. They could've had a friend do it.
There are plenty of ways they could've accomplished everything they would've wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces which serve the same purpose, there's no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton's fence for discourse is being torn down here, I don't believe that's what's going on, and I think everyone else on LessWrong who isn't personally involved deserves a say in what they are and aren't okay with being censored on this site.
You don’t seem to be addressing what I said very much if at all, but rather to mostly be giving your reaction to 18239018038528017428′s comments. This is demonstrated by the fact that you take for granted various assumptions that it was the purpose of my comment to call into question.
In particular, the speech is not being allowed “to the chagrin of all other users”. I am notably non-chagrinned by the speech being allowed, and I advocate that people be less chagrinned by such speech being allowed.
Needless to say, to be allowed is not to be approved.
What convinced you of this?
A constellation of related realizations.
A sense that some of the most interesting and important content in my own field of specialization (e.g. the writings of Heinrich Schenker) violates, or is viewed as violating, the “norms of discourse” of what I took to be my “ingroup” or “social context”; despite being far more interesting, engaging, and relevant to my concerns than the vast majority of discourse that obeys those norms.
A sense that I myself, despite being capable of producing interesting content, have been inhibited from doing so by the fear of violating social norms; and that this (which is basically a form of cowardice) is likely to also be what is behind the stifled nature of norm-conforming discourse referred to above.
A sense that the ability to look beyond discourse norms (and the signaling value of violation or conformity thereto) and read texts for their information content is extremely intellectually valuable, and in particular, makes texts originating in outgroup or fargroup cultures much more accessible—the epistemic usefulness of which should go without saying.
A sense that a generalized version of this principle holds: the ability to conform to discourse norms, despite their information-obstructing nature, yet still succeed in communicating, functions as a signal of high status or tight embeddedness within a community, achieved via countersignaling. In particular, it cannot be successfully imitated by those not already of similar status or embeddedness: the attempt to imitate Level 4 results in Level 1.
A sense that discourse norms, and norms of “civility” generally, are the result of optimization for a purpose entirely distinct from the efficient transmission of information. Namely, they are there to reduce the risk of physical violence; in fact they specifically trade off communicative efficiency for this. Hence: politics, diplomacy, law—the domains in which discourse is most tightly “regulated” and ritualized being specifically those most concerned with the prevention of physical violence, and simultaneously those most notorious for hypocrisy and obscurantism. This, by contrast, does not seem to be what an internet forum concerned with truth-seeking (or even an associated real-life community of minimally-violent individuals living in a society characterized by historically and globally high levels of trust) is supposed to be optimizing for!
Cool. Let’s play.
I notice you make a number of claims, but that of the ones I disagree with, none of them have “crux nature” for me. Which is to say, even if we were to hash out our disagreement such that I come to agree with you on the points, I wouldn’t change my stance.
(I might find it worthwhile to do that hashing out anyway if the points turn out to have crux nature for you. But in the spirit of good faith, I’ll focus on offering you a pathway by which you could convince me.)
But if I dig a bit, I think I see a hint of a possible double crux. You say:
I agree with a steelman version of this. (I don’t think it is literally entirely distinct — but I also doubt you do, and I don’t want to pressure you to defend wording that I read as being intended for emphasis rather than precise description.) However, I imagine we disagree about how to value that. I think you mean to imply “…and that’s bad.” Whereas I would add instead “…and that’s good.”
In a little more detail, I think that civility helps to prevent many more distortions in communication than it causes, in most situations. This is less needed the more technical a field is (whatever that means): in math departments you can just optimize for saying the thing, and if seeming insults come out in the process then that’s mostly okay. But when working out social dynamics (like, say, whether a person who’s proposing to lead a new kind of rationalist house is trustworthy and doing a good thing), I think distorted thinking is nearly guaranteed without civility.
At which point I cease caring about “efficient transmission of information”, basically because I think (a) the information being sent is secretly laced with social subtext that’ll affect future transmissions as well as its own perceived truthiness, and (b) the “efficient” transmission is emotionally harder to receive.
So to be succinct, I claim that:
(1) Civility prevents more distortion in communication than it creates for a wide range of discussions, including this one about Dragon Army.
(2) I am persuadable as per (1). It’s a crux for me. Which is to say, if I come to believe (1) is false, then that will significantly move me toward thinking that we shouldn’t preserve civility on Less Wrong.
(3) If you disagree with me on (1) and (1) is also a crux for you, then we have a double crux, and that should be where we zoom in. And if not, then you should offer a point where you think I disagree with you and where you are persuadable, to see whether that’s a point where I am persuadable.
Your turn!
I’m gonna address these thoughts as they apply to this situation. Because you’ve publicly expressed assent with extreme bluntness, I might conceal my irritation a little less than I normally do (but I won’t tell you you should kill yourself).
Did he tell people they should kill themselves?
This strikes me as an example of the worst argument in the world. Yes, telling people to kill themselves is an alternative discourse norm, alternative discourse norms can be valuable, but therefore telling people to kill themselves is valuable? Come on. You can easily draw a Venn diagram that refutes this argument. Alternative discourse norms can be achieved while still censoring nastiness.
Telling forum users they should kill themselves is not gonna increase the willingness of people to post to an online forum. In addition to the intimidation factor, it makes Less Wrong look like more of a standard issue internet shithole.
This can be a valuable skill and it can still be valuable to censor content-free vitriol.
Yes, it takes a lot of effort to avoid telling people that they should kill themselves… Sorry, but I don’t really mind using the ability to keep that sort of thought to yourself as a filter.
If we remove Chesterton’s Fences related to violence prevention, I predict the results will not be good for truthseeking. Truthseeking tends to arise in violence-free environments.
Maybe it’d be useful for me to clarify my position: I would be in favor of censoring out the nasty parts while maintaining the comment’s information content and probably banning the user who made the comment. This is mainly because I think comments like this create bad second-order effects and people should be punished for making them, not because I want to preserve Duncan’s feelings. I care more about trolls being humiliated than censoring their ideas. If a troll delights in taking people down a notch for its own sake, we look like simps if we don’t defect in return. Ask any schoolteacher: letting bullies run wild sets a bad precedent. Let me put it this way: bullies in the classroom are bad for truthseeking.
See also http://lesswrong.com/lw/5f/bayesians_vs_barbarians/ Your comment makes you come across as someone who has led a very sheltered upper-class existence. Like, I thought I was sheltered but it clearly gets a lot more extreme. This stuff is not a one-sided tradeoff like you seem to think!
For obvious reasons, it’s much easier to convert a nice website to a nasty one than the other way around. And if you want a rationalist 4chan, we already have that. The potential gains from turning the lesswrong.com domain in to another rationalist 4chan seem small, but the potential losses are large.
Who said anything about “extreme”?
You are unreasonably fixated on the details of this particular situation (my comment clearly was intended to invoke a much broader context), and on particular verbal features of the anonymous critic’s comment. Ironically, however, you have not picked up on the extent to which my disapproval of censorship of that comment was contingent upon its particular nature. It consisted, in the main, of angrily-expressed substantive criticism of the “Berkeley rationalist community”. (The parts about people killing themselves were part of the expression of anger, and need not be read literally.) The substance of that criticism may be false, but it is useful to know that someone in the author’s position (they seemed to have had contact with members of the community) believes it, or is at least sufficiently angry that they would speak as if they believed it.
I will give you a concession: I possibly went too far in saying I was grateful that downvoting was disabled; maybe that comment’s proper place was in “comment score below threshold” minimization-land. But that’s about as far as I think the censorship needs to go.
Not, by the way, that I think it would be catastrophic if the comment were edited—in retrospect, I probably overstated the strength of my preference above—but my preference is, indeed, that it be left for readers to judge the author.
Now, speaking of tone: the tone of the parent comment is inappropriately hostile to me, especially in light of my other comment in which I addressed you in a distinctly non-hostile tone. You said you were curious about what caused me to update—this suggested you were interested in a good-faith intellectual discussion about discourse norms in general, such as would have been an appropriate reply to my comment. Instead, it seems, you were simply preparing an ambush, ready to attack me for (I assume) showing too much sympathy for the enemy, with whatever “ammunition” my comment gave you.
I don’t wish to continue this argument, both because I have other priorities, and also because I don’t wish to be perceived as allying myself in a commenting-faction with the anonymous troublemaker. This is by no means a hill that I am interested in dying on.
However, there is one further remark I must make:
You are incredibly wrong here, and frankly you ought to know better. (You have data to the contrary.)
Well, you’ve left me pretty confused about the level of importance you place on good-faith discussion norms :P
Positive reinforcement for noticing your confusion. It does indeed seem that we are working from different models—perhaps even different ontologies—of the situation, informed by different sets of experiences and preoccupations.
All of these are reasonable points, given the fixed goal of obtaining and sharing as much truth as possible.
But people don’t choose goals. They only choose various means to bring about the goals that they already have. This applies both to individuals and to communities. And since they do not choose goals at all, they cannot choose goals by the particular method of saying, “from now on our goal is going to be X,” regardless what X is, unless it is already their goal. Thus a community that says, “our goal is truth,” does not automatically have the goal of truth, unless it is already their goal.
Most people certainly care much more about not being attacked physically than discovering truth. And most people also care more about not being rudely insulted than about discovering truth. That applies to people who identify as rationalists nearly as much as to anyone else. So you cannot take at face value the claim that LW is “an internet forum concerned with truth-seeking,” nor is it helpful to talk about what LW is “supposed to be optimizing for.” It is doing what it is actually doing, not necessarily what people say it is doing.
That people should be sensitive about tone is taken in relation to goals like not being rudely insulted, not in relation to truth. And even the argument of John Maxwell that “Truthseeking tends to arise in violence-free environments,” is motivated reasoning; what matters for them is the absence of violence (including violent words), and the benefits to truth, if there are any, are secondary.
Is the implication that they’re not reasonable under the assumption that truth, too, trades off against other values?
What the points I presented (perhaps along with other things) convinced me of was not that truth or information takes precedence over all other values, but rather simply that it had been sacrificed too much in service of other values. The pendulum has swung too far in a certain direction.
Above, I made it sound like the overshooting of the target was severe; but I now think this was exaggerated. That quantitative aspect of my comment should probably be regarded as heated rhetoric in service of my point. It's fairly true in my own case, however, which (you'll hopefully understand) is particularly salient to me. Speaking up about my preoccupations is (I've concluded) something I haven't done nearly enough of. Hence this very discussion.
This is obviously false, as a general statement. People choose goals all the time. They don’t, perhaps, choose their ultimate goals, but I’m not saying that truth-seeking is necessarily anybody’s ultimate goal. It’s just a value that has been underserved by a social context that was ostensibly designed specifically to serve it.
But not infinitely much. That’s why communicational norms differ among contexts; not all contexts are as tightly regulated as politics, diplomacy, and law. What I’m suggesting is that Less Wrong, an internet forum for discovering truth, can afford to occupy a place toward the looser end of the spectrum of communicational norms.
This, indeed, is possible because a lot of other optimization power has already gone into the prevention of violence; the background society does a lot of this work, and the fact that people are confronting each other remotely over the internet does a fair portion of the rest. And contrary to Maxwell’s implication, nobody is talking about removing any Chesterton Fences. Obviously, for example, actual threats of violence are intolerable. (That did not occur here—though again, I’m much less interested in defending the specific comment originally at issue than in discussing the general principles which, to my mind, this conversation implicates.)
The thing is: not all norms are Chesterton Fences! Most norms are flexible, with fuzzy boundaries that can be shifted in one direction or the other. This includes norms whose purpose is to prevent violence. (Not all norms of diplomacy are entirely unambiguous, let alone ordinary rules of “civil discourse”.) The characteristic of fences is that they’re bright lines, clear demarcations, without any ambiguity as to which side you’re on. And just as surely as they should only be removed with great caution, so too should careful consideration guide their erection in the first place. When possible, the work of norms should be done by ordinary norms, which allow themselves to be adjusted in service of goals.
There are other points to consider, as well, that I haven’t even gotten into. For example, it looks conceivable that, in the future, technology, and the way it interacts with society, will make privacy and secrecy less possible; and that social norms predicated upon their possibility will become less effective at their purposes (which may include everything up to the prevention of outright violence). In such a world, it may be important to develop the ability to build trust by disclosing more information, rather than less.
I agree with all of this. (Except “this is obviously false,” but this is not a real disagreement with what you are saying. When I said people do not choose goals, that was in fact about ultimate goals.)
Yeah but exposure therapy doesn’t work like that though. If people are too sensitive, you can’t just rub their faces in the thing they’re sensitive about and expect them to change. In fact, what you’d want to desensitize people is the exact opposite—really tight conversation norms that still let people push slightly outside their comfort zone.
I need access to these studies!
Out of curiosity, why do you prefer having downvotes disabled? (Here’s a comment explaining why I want them back.)
Evidence: time and energy put into the comment. Evidence: not staying silent when they could have.
I am not saying the offending comments are valid; instead, I am curious as to why you discounted what I identify as evidence.
Ah, I was using a more colloquial definition of evidence, not a technical one. I misspoke.
What goes through my mind here is, “Trolls spend a lot of time and energy making comments like this one too, and don’t stay silent when they could, so I’m not at all convinced that those points are more consistent with a world where they’re truth-seeking than they are with a world in which they’re just trolling.”
I still think that’s basically true. So to me those points seem irrelevant.
I think what I mean is something more like, “Unless and until I see enough evidence to convince me otherwise….” I’ll go back and edit for that correction.
In what represents a considerable change of belief on my part, this now strikes me as very probably false.
I’m open. Clarify?
See this comment; most particularly, the final bullet point.
Replied.
I offer this model insofar as it helps with communicating about the puzzle -
http://bearlamp.com.au/a-model-of-arguments/
and this one
http://bearlamp.com.au/filter-on-the-way-in-filter-on-the-way-out/
see http://lesswrong.com/r/discussion/lw/p23/dragon_army_theory_charter_30min_read/dsyp
Strong support for this person’s willingness to contribute the opposite opinion.
Strong support for this person’s willingness to take the time to write things up in detail.
Strong appreciation for the trust implicit in this being posted here (i.e. it’s a compliment along the lines of “I expect not to be punished for speaking the truth as I see it.”)
Some regret/sadness that they’re this triggered and vitriolic, and for the tendency toward choosing the worst or straw-est interpretation at every point rather than taking the time to question their own responses and include nuance, but on the other hand, still appreciation for how this contributes to the overall health of the discussion by opening up new threads for debate and ensuring that there isn’t an echo chamber (i.e. maybe it takes that level of aggression to accomplish the thing, and a gentler critique wouldn’t be taken seriously enough?).
Significant disagreement with the choice to hijack the topic at hand to vent about things that are either mostly or completely unrelated, and make claims that are unsubstantiated or wildly inaccurate, and engage in some specious logic toward the end (e.g. ad hominem fallacy).
Hope to have some time later today to respond to the better points this raises.
Thanks for your contribution.
The fact that you think it’s “ad hominem” is itself a betrayal of your own inexperience and lack of perception. It’s perhaps one of the most relevant and least fallacious arguments to make: your fiction is a direct expression of your aesthetics, and the inference I draw from your fiction is that you do not have good aesthetics, and therefore should not be trying, or even pretending, to do something that by nature requires very good aesthetic sense.
It also indicates a tremendous amount of immaturity and childishness. I could have written something better in high school. That’s not a good sign. Your ability to write characters and dialogue is directly tied to your ability to model the world accurately and understand the nuances of human behavior. Ergo, clichéd and trite writing is very damning.
Many words. Probably took a while to write. Some unnecessary things, like telling the writer to kill themselves and levelling criticism at irrelevant attributes such as his other writing. Other writing is pretty irrelevant to the qualities of this piece. You may have some points in this dung heap, but you make it hard to find them. Is it even worth engaging you in conversation?
Oh, I see. You’re what the Eternal September phenomenon is all about. You shouldn’t feel ashamed that you aren’t cognitively gifted enough to quickly and rapidly comprehend the salient points I made without substantial expenditure of mental effort, because you were born this way, which also accounts for your overestimation of the amount of time it took for me to write my comments. But please don’t pollute the comment space under my comments with your puerile excretions.
Perhaps your excessive cognition is ironically blinding you to the grandiose mediocrity of your overwrought replies, such as this one here, which sounds like something I would have written in third grade if I wasn’t already too smart to have written it then, which, as a truly capable mind might have already conceived, I was.
Your original comment, though harsh, at least contained some useful insights. Don’t ruin that by posting comments that are nothing more than 6 lines of insults that no one wants to read.
Part right.
Most of the arguments you set forth are more fallacious and less relevant than not liking all the author’s fiction.
But that’s because most of the arguments you set forth were of the type “Bay Area rationalists have had a lot of problems and therefore this specific plan will have similar problems.”
Oh, I see. This is the part where you’re too attached to your ingroup to realize what a total failure the Berkeley rationalist community is. I bet you also think the Sequences and HPMOR are well-written.
This is why we need downvotes.
[Note: I’ve typed this comment without refreshing the page, and thus have not seen any of the other responses that may have cropped up in the past few hours, nor taken those responses into account in any way yet. I’m seeing only the original reply, here.]
Part 1 of ?
Repeating my thanks before heading into what will be a mix of concession and disagreement—I have qualms about the way you engaged with this post, but am grateful for the fact that you did engage, at all, rather than just staying quiet, and I want to support the core of that even as I complain about certain aspects of your chosen method.
I think your first paragraph had one clear point: “I, as a smart, perceptive person who sees things others often fail to see, found a lot of this viscerally upsetting, which is probably a sign that there are actual problems.” I liked that you added this point, and I think it would’ve been stronger if you hadn’t been so deliberately assholish with the rest of it. I’m going to take the core point seriously as I read further, and see if I can get a clear sense of what it is you see that I don’t.
The comment about Ender’s Game (paragraph 2) is a misunderstanding on your part, either deliberate or easy to clear up—there’s no wargaming in the plan, there’s no battle room, there are no other groups of people playacting as other armies. The aesthetic of Dragon Army was, in short: everyone is expected to keep their eyes open and act independently to do what seems right and sane in the moment. Groups should practice coordinating together to build trust and be capable of action-requiring-more-than-one-individual, but the assumption is that an army run by forty minds will trump an army run by one.
In paragraph 3, you make a valid point about the efficacy and usefulness of CFAR, which is indeed worth questioning, and the side you’re holding down is not obviously wrong. It’s a bit overwrought, given that the phrase “insistence on the validity of his experience as a CFAR instructor” is a clear strawman; I was almost as emphatic about the fact that I’ve written nerdy fanfic, so I think you were just looking for an opportunity to climb up on a soapbox? That being said, your point about interpersonal romance being a relevant and important factor matches my own intuition, and I wish you had appreciated the fact that I wanted to continue thinking carefully about correct solutions rather than just spam the first ideas that popped into my head.
In paragraph four, you make an entirely unfounded leap that is beneath the quality of what’s expected from a poster on this forum. All of your “this suggests” are false handwaving, and I find the rest of your assertions generally laughable, given that there’s only one person in this thread so far who’s demonstrated deep antisocial behavior, and that you’re hurling these insults from a position of anonymity. However, I’m going to continue to take things one paragraph at a time rather than assuming that I’ve seen your entire position as soon as I’ve got a mockable straw model, so we’ll start fresh with your next point.
Hmmm. In the first sentence of paragraph 5, you and I seem to converge somewhat—we both agree that the Bay Area rationalist community is not living up to its promise, and has too few people doing good and impactful work. I’m glad to share this bit of world-model with you. I note that my idea for what to do about it—try a different sort of house/community—is just one possible strategy among many, and I’m curious if you have other concrete suggestions that you’d be willing to offer. I’m especially curious what you’re actually doing, as you seem to have a sort of … scathing dismissal? … of everyone else, and I’d expect from your tone that you must be engaged in at least one concretely high-promise project (else it all smacks of rank hypocrisy). Would you be willing to detail a) what you’re up to, or b) a few concrete proposals that you suspect are higher promise? At this point, it’d be hard to simply abandon the Dragon Army idea, but if a good enough alternative came along, I would take it. The point is not to be seen to be right, it’s to actually make an impact.
I notice that the rest of that paragraph is basically off-topic. Without contributing to the off-topicness, I want to say that I do, indeed, find at least a couple of worthwhile points of agreement within it, but I think most of it is wrong, in addition to being somewhat morally reprehensible re: vicious attacks, and that you’re overconfident in your assertions. If you’d like to shoot me a private message, I’d be happy to say where I agree and where I disagree.
Oh, interesting—paragraph six also begins with a claim I have a lot of sympathy for/agreement with. I don’t hold it as strongly as you do, but I do think there’s a lot of clear dysfunction and self-deception in the community, and I’d like to take steps to correct it. I don’t know how to evaluate your claim that the best people are on the periphery (as I’m a weird mix of professionally central and socially somewhat distant), but again—if you’d like to make concrete recommendations about who I should talk to, or direct some of the people you hold in high esteem to comment on this thread, I suspect you’re right about there being a lot of untapped value. I do note that Dragon Army is not actually pulling from the central or highest status people, but thus far looks to be made up of a lot of solid, normal, representative rationalists, so I think your claim about trying to delude people is straightforwardly false, as is your assumption that I don’t see or don’t want to see any warts and flaws. (I believe there are lots of people who will back me up on this, including some who will claim that I’ve been too hostile or critical. That’s partially why I sympathize with the strength of your negativity.)
Part 2 of 2
Ah, paragraph seven contains the unword “cult,” which I think you’re using to say something, but I’d rather you just actually said the thing, instead of applying the empty, stretched, multi-interpretation label. Like, I think if you laid out specific, concrete objections, I and others could benefit from them, but just saying cult is lazy name-calling.
I do somewhat agree with your objections to the list of specific skills attained after a year. I had hoped that the large word DRAFT at the top, plus the repeated statements that the whole plan was to iterate, and that I didn’t expect to be able to figure out the right stuff on the first try, would’ve clued you in to the fact that I, too, am aware that the list is inadequate. Do you have specific suggestions for replacements? Keep in mind, the hard problem is to balance things-that-will-be-generally-useful-for-a-medium-sized-group-of-people against the fact that everyone involved has their own specific career and expertise already. Part of the impetus here is social, part of it is becoming well-rounded, part of it is practicing the skill of gaining/improving skills, and all of that is trying to avoid skating into trivial irrelevancy. Got any ideas?
As a meta note, I think that people who cower behind anonymity don’t deserve to make concrete claims about their skill sets without backing them up, so until further notice and on a policy level, I’m treating your claim that you meet 11 out of 14 criteria as a flat-out lie (despite its plausibility overall). You’re currently nothing and nobody and have no skills; that will change as soon as you a) reveal yourself or b) demonstrate credibility under this pseudonym.
Your next attempt to strawman things takes a sub-point out of context and deliberately ignores the actual requirement being made, which was that people hold their beliefs and models with skepticism/realize that their internal experience does not represent absolute truth, and that they treat one another with a behaviorist’s lens, using revealed preferences and past behavior as predictors, rather than relying on mental summations that may be false or straw. I’m curious whether, setting aside your mockery of a subpoint, you agree with that point.
Interestingly enough, I have reasonable credence in your two inferences. In my experience, members of this community do attempt to install norms to compensate for social failings (and do have a somewhat higher-than-average level of social ineptitude). And also, I think many people in this community are low-empathy and embody the bad side of individualism. However, unlike you, I see that a lot of people are trying damn hard to correct this, and I’m curious whether you think they should be written off for not being good enough already, or whether you have specific suggestions that differ from the ones already being tried. I note that a big part of what Dragon Army intends to do is just try a whole bunch of stuff (including stuff already known to work; there’s no premium on novelty), and that I think data will be better than armchair ranting.
I suspect you haven’t done much in the way of looking in the mirror when you type the words “repressed irritation, interpersonal drama, and general unpleasantness.” Certainly you don’t meet any of my standards for “how a decent person behaves.” I’m going to try to avoid the fundamental attribution error here, though, and assume that we’ve hit some combination of a) a bad day, b) the problems of online communication, and c) you being unusually triggered or having run out of some important resources.
I’m not going to engage with the ad hominem attack at the end, which, in addition to being wrong as a tactic, also fails in specific. I think that if you compare yourself, who is suggesting suicide as a solution, with OSC, who is definitely wrong about a lot of things but has never gone so far as to claim a fellow human would be better off killing themselves, you’ll note that you might be on the wrong side. I’d check my cap for a skull, at least in the context of today’s mood.
For anyone else—I welcome calm, reasoned elaboration on any of the on-topic points this person made. When I went through blow-by-blow, there were fewer than I’d hoped, but there are true and valuable and important criticisms here, and I’m glad they’ve been added to the mix, and I wouldn’t mind further discussion of them.
Sure, but it’s fun to be an asshole. I love knocking people down a peg. Especially in public.
Asserting that this isn’t elaborate playacting is not very convincing in light of the fact that your first two proposed group norms are (1) a greeting salute and (2) a call-and-response mechanism. I played the beginning of Final Fantasy XIII two nights ago and thought that was the most cringeworthy stuff I’ve seen in months, but you managed to top even that.
The more important thing here is that you imagine this as a problem that can be solved, when in fact, if the problem did arise, that would itself preclude it from being easily solved. The “solution” is to not select immature people who you can reasonably expect to get into interpersonal drama, which precludes the vast majority of the rationalist community, which is part of the point of my comment.
I can suggest that you talk to Satvik Beri, and maybe direct him to my comment as well, although I feel slightly bad for potentially causing him to spend time on this.
I mean that the Berkeley rationalist community is a cult in the full and unqualified sense of the word “cult”. You, as a high priest, naturally disagree.
This is a good thing practically by construction.
My point is that this is almost completely unnecessary in a world where people begin by defaulting to behavior that is very unlikely to bother others. I am also gesturing at the following:
The rationalist community does not default to such behavior, which is an indication of the conjunction of near-autistic social skills and remarkably low empathy, and
The rationalist community does not default to such behavior, but instead of anyone pointing out that this is a reasonable thing to default to (cf. Japanese society), people try to patch it up with legalism, bureaucracy, and a laundry list of rules, which in my experience makes it feel like I’m talking to the low-IQ HR department of a large multinational conglomerate.
The fact that the Berkeley rationalist community seems particularly bad at this is a major red flag in almost every conceivable fashion.
I think they should be thrown off a bridge, either metaphorically or literally. I find it detestable to have them near me at all.
Two questions:
Does it look to you like my irritation is “repressed”?
I’m completely anonymous. Exactly what interpersonal drama am I causing here?
I agree that I can be, when I want to be, a very unpleasant person.
I don’t think you actually succeeded in knocking anyone down a peg, though. I’d bet ~$50 that a neutral, outside observer (say, from a different English speaking country) would say that a) you come off far worse than anyone else in the thread and b) they didn’t find your post convincing.
I think our disagreement over the distinction between playacting and not boils down to something like, I believe that the very small nuts-and-bolts of social interaction (jargon, in-jokes, simple trigger-action responses like sneeze “bless you”) are more important than most people give them credit for. In other words, I think the silly theater ends up actually mattering? Or, to be more specific—I think most of it doesn’t matter, but some small bits of it end up being really important, and so it’s an arena I want to do explicit experimentation with. I want to see whether the small salute actually ends up being relevant to bonding and sense-of-purpose, and no, I don’t have a double blind or anything like that, but I will be asking a bunch of fairly introspective people for their thoughts afterward.
I suspect, from your reaction, that you’d basically assert that this premise is false, and that the … skin? … of social interaction is meaningless, at least compared to the actual connections and information conveyed. This seems like a sensible, plausible position to take, but I think your mockery of the alternative hypothesis is unfounded.
I agree that if romance/sex/etc pop up, that would preclude the problem from being easily solved, but where did you get the impression that I was afraid of attempting to solve hard problems? There’s definitely a filter to screen out immature or uncontrolled people; while you yourself might make it through, the persona you’re currently expressing would’ve been rejected by the second paragraph of your original response. We’ve already turned away people for a variety of reasons, and at least one because of exactly this axis.
I appreciate the recommendation that I run things by Satvik. He’s a perceptive thinker and I haven’t run this by him yet. I wish that you’d responded in specific to more of my requests to draw out your suggestions—you’re continuing to clarify your models of the problems, but not offering much in the way of replacements for the things I’m planning to try.
You’re still not saying what you actually mean by the word “cult.” There’s a decent chance I’d agree with you—I’ve described the Bay Area rationalist community as a cult myself, even recently, when talking to friends and family members. But I was careful to disambiguate exactly what I meant by that, and I can’t help but note that your continued refusal to spell it out makes me suspect that you don’t actually have a coherent thing to say, and are just trying to score easy points.
I agree again with 1 (low empathy, etc.) though I think the strength of the effect is smaller than you seem to think it is. I think that you’re still not believing me when I say I agree with 2? Note that I’m calling you out for unacceptable rudeness in this thread, for instance. I also suspect you have a huge typical mind thing going on, and vastly underestimate how easy it is for people to rub each other wrong while acting in complete good faith in a normal society—the bed example was maybe poorly chosen, but I disagree with you that it’s easy to “default to behavior that is very unlikely to bother others.” I’ve been in a wide range of social milieux, and it’s much less about the actual behavior and much more about people’s (cough) willingness to pick nits and start fights.
I think that you’ve lost all moral authority by doubling down on your “people should die for this” claim, and because of that, I think this’ll be my last attempt to engage with you as an equal (you’re not my equal; at least this facet of your personality is my clear inferior). I will, however, continue to read if you make those concrete suggestions I’m hoping you have somewhere.
In answer to your last two questions: yes, it looks like your irritation is repressed. Not here, because my main hypothesis is that here is where you finally felt safe to vent a ton of irritation that you’ve been repressing in other arenas, for long amounts of time. Just look back at your first post—maybe a quarter of it was in response to me, and the rest is long-simmering, long-festering frustration about a bunch of other things (some of them valid and some of them not). Textbook repress-then-explode. And 2, your claim that posting anonymously equates to not causing interpersonal drama is again so laughable that unless it’s a deliberate joke, you’re revealing this persona to be less socially aware than literally the most awkward and inept rationalist I’ve ever met.
You’re not unpleasant so much as just … not showing yourself to be worth the time. I really hoped I could get more out of you, because I actually know, on a deep level, that I don’t have all the answers and the opposition is the first best place to look. But in terms of useful-criticism-per-word, you’ve been outdone by every other person who’s registered reservation or disagreement here.
I don’t know if I count as neutral (probably not, since I’ve had an account here for a while now), but I wouldn’t swing that bet around with the same confidence you do. The post in and of itself is not convincing enough for me to say that your idea won’t work, but it certainly makes me go “hmm, well, he might have a point there”.
Specifically:
“Normal” people don’t need to explicitly write out all the social rules for their shared housing.
But here there’s a large list of rules and activities and all that, with the goal of getting group housing to work properly.
Also, here are some examples of the group of people that you want to source your participants from having low social skills.
By the way, if you set up a ton of rules then it usually won’t work.
Thus, there’s a pretty big chance that the rules will not work out and that the social skills of the participants will be too low to have the group housing work.
I am not convinced that this is the truth.
However, if I read in a year from now that this is what happened, I would not be surprised.
Basically what I’m saying is I can see 1 or 2 people leaving due to drama despite the rules if you try this, with a chance greater than, I dunno, 10%?
You’re looking at content, not status (as implied by ‘knocking someone down a peg’). My immediate reaction to the top-level comment was: “well, they have some good points, but damn are they embarrassing themselves with this language”. Possibly shaped by me being generally sceptical about the ideas in the OP.
As far as the bet is about the form of the post, rather than the content, I think Duncan’s pretty safe.
I have seen normies having endless fights about trivial things, such as “who should buy toilet paper”, that a simple explicit norm could solve. (For example “people keep buying the paper in turns, when you buy one check this box to keep everyone informed” or “Joe buys the paper, everyone else gives Joe $2 each month” or whatever.)
The best case, of course, would be trying to be nice by default, and solve explicitly the situations where the default behavior fails. But that seems like what would quite likely happen in the Dragon Army anyway… or maybe I am just applying the typical mind fallacy here.
You should take the Hansonian approach. Fights over toilet paper are not about toilet paper.
I’m not the originator of this thread, but that part did resonate with me. I don’t think there’s anything wrong with those skills, but the combination of choice of skills and the desired level of competency does seem to be decidedly mediocre given the effort and people involved.
(1) Above-average physical capacity
What is average? In the US, you could probably be somewhat overweight with no strength, speed, endurance, or agility to speak of and still be “above average.”
(2) Above-average introspection
I would expect almost all of the people who volunteer to be part of a rationalist group house to be there or pretty close to there already.
(3) Above-average planning & execution skill (4) Above-average communication/facilitation skill (5) Above-average calibration/debiasing/rationality knowledge
I think my previous comment applies here as well. Perhaps you have a different conception of “average” than I do, but I think if you’re going to establish a long-term mini-dictatorship of a group house, you should be aiming for quite a bit higher than “above average.”
(6) Above-average scientific lab skill/ability to theorize and rigorously investigate claims
I don’t really understand this one. Is your group house actually going to have the ability to practice conducting laboratory experiments? That’s a very high overhead endeavor.
(7) Average problem-solving/debugging skill (8) Average public speaking skill (9) Average leadership/coordination skill (10) Average teaching and tutoring skill
Average? Your goals are to reach average, after a year of dedicated effort? Getting into the 80th percentile of anything numbered 1-10 on this list should require a minimum of effort on the part of dedicated individuals following strict rules, unless you have some specific medical condition interfering.
(11) Fundamentals of first aid & survival
How fundamental is fundamental? This also shouldn’t take very long if you are willing to put in the effort and practice a bit (2 weeks, at the outside, though you could cover the true basics in a long weekend). I don’t know how it’s related to the rest of the goals, though, or why it’s important enough to be on the list. Also, you should practice many of these skills in the actual wilderness, which means time away from everything else.
(12) Fundamentals of financial management
Again, I’m not sure what’s “fundamental.” You could spend 2 days on this, or the entire year.
(13) At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill) (14) At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Do you have the ability to teach/practice trade skills at the house? I would expect that learning any of these things to an employable level within a year would require spending time similar to a full-time job somewhere that has infrastructure, in addition to a significant investment of money (at least a few thousand dollars). (I checked some local welding and plumbing classes at community colleges, which is where I’m getting those numbers.)
Someone who already has one of these skills (I’m guessing you’ll have a few coders at least) is going to be at a tremendous advantage in terms of time and possibly money compared to someone who does not. 13 and 14 are each going to represent a greater time investment than the others combined, unless you already have them.
I don’t know if you care, but I would say I already meet a similar number of these criteria. The only one I definitely don’t meet is 14. I’m willing to tie this account to my real name and explain/prove why I meet them (though some of them would be quite difficult to really prove, I could only argue).
The problem seems to me to be the tradeoff between going deep and going wide, with the added complexity that going deep on the wrong thing seems strictly worse than going wide, and so we’re defaulting to going wide where there’s uncertainty.
Put another way, it’s unlikely that any of those specific skills are going to be particularly important to any of our longest-term goals, but it also seems counterproductive to just sit there thinking about which direction to go in. I’m usually not the biggest expert in the room, but I usually am the most generally competent in terms of being able to fill holes or solve whatever problem crops up, and it’s because I have a habit of just constantly churning and picking up new skills and methods and heuristics wherever I go. I suspect that others would benefit from a similar habit, in particular because once “the right skill” does come along, you have both the affordance to start learning it and a variety of experiences allowing you to learn quickly and efficiently.
That’s a claim. Not necessarily supported, but reasonable, I think, and worth trying out.
I note that I disagree that it’s easy to beat the average in all of these things at once. People who don’t actually check their abilities against a standard tend to be wildly overconfident, and people tend to underestimate how long it will take them to learn X or accomplish Y; these things are solidly documented. Competence does tend to cluster (e.g. “G”), so the picture isn’t quite as bleak as the raw numbers suggest, but once you’ve got a dozen different domains and are shooting to be above the 50% mark in all of them, you’re looking at a person who’s roughly one in four thousand, and when you try to get a whole group to hit that mark, the challenge is pretty real. I wouldn’t be surprised if most people find most of this easy, but I think you’re not fully grokking the difficulty of making everybody baseline competent in all of these domains. For instance, you note that many of these skills require only a few weeks, but I don’t know if you added up all of those weeks, compared them to the time commitment, and noted that they’re all being practiced off-hours while people have their own jobs and lives as well.
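(A quick sketch of the arithmetic behind that “one in four thousand” figure, under the admittedly unrealistic assumption that the dozen domains are fully independent: 0.5^12 = 1/4096, so roughly one person in four thousand would clear the 50% mark in all twelve by chance. Correlation between skills makes the real number friendlier, as noted above.)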
It’s a floor, though, not a ceiling—we’re aiming at “world class skill,” we’re just not naively expecting that getting there is going to be easy, and initial expectations are meant to be exceeded.
Various additional points …
The trade skill goal got scaled back in response to another comment; it was the hardest/sketchiest one to begin with.
We will have some ability to practice trade skills at the house, and are adopting a norm of going and seeking professional instruction outside from time to time.
I buy that you meet a large number of these criteria; I meet most of them myself. But the ones I don’t have are sticky/tricky.
I don’t think these skills are anywhere near independent. It’s also not obvious that they’re normally distributed. And, being above the 50% mark in a dozen skills by coincidence being unlikely does not at all tell you how hard it is to gain skills if you put in some deliberate work.
I generally am sympathetic to the argument that stuff can be harder than one assumes, but I also am generally cynical about the “average” level of most of these skills. Most people probably don’t even know what “calibration” means precisely enough to test their own level of calibration. I’m not trying to be arrogant here; I pretty much have only heard about the idea of writing down your confidence level for a bunch of predictions and seeing what comes true from the rationalist community and rationalist-adjacent ones.
For the sake of avoiding this issue, rather than using terms like “above-average,” I would attempt to pin down ahead of time requirements that are as specific as possible, so that progress in each of the areas you care about can actually be measured.
I don’t think it should take a few weeks each to exceed average in most of these skills. I expect it to take a few weeks total (or 1 day a week for a few months).
I’m plausibly interested in betting a few hundred dollars against you, especially if (as seems likely, given your confidence) you were to bet $1000 against my $250 or something like that. If I imagine the hundred closest people I know uttering the above, I think all but one or two of them are wrong/overconfident.
What statement, specifically, would we be betting on? It’s certainly plausible that I’m underestimating the difficulty in getting an entire group to above these standards in comparison to getting one person. Though, I think the main issue may be a difference in what we perceive as average, rather than a model of how hard learning these skills is.
I spent five minutes trying to operationalize, but I couldn’t come up with anything that seemed workable. For now, we’ll just proceed knowing that at least one of us is wrong. =)
Either way is fine with me, but if you can express in any way what you think “average” is for some of these skills, I would like to know because now I’m really curious.
Thanks for taking so much time to keep responding to a fairly random commenter!
The number of criteria he hits likely depends on the definition of average. The reference class matters a great deal.
I strongly support this post.
It would be much better if it were less inflammatory. The last sentence, in particular, is reprehensible. But you respond to the substance of the criticism you get, not the criticism you might want or wish to have at a later time. Otherwise you might as well be slashing your own tires. The vast majority of the discussion below is simple tone policing. Someone’s telling you that your house is on fire, and you’re complaining that they’re shouting.
It’s correct that it’s incredibly troubling that the author didn’t even consider romantic drama in designing his bootcamp. It’s correct that these are really not impressive outcomes. They’re moderately-functional outcomes. Shouldn’t there be some sort of control group where people attempt a similar level of life-changing upward momentum on their own and see if it was actually effective to cede their autonomy? It is correct that trying to LARP a bizarre combination of Ender’s Game and Fight Club is perhaps not a sign that this person has any idea how grown-ups work.
And most troubling of all, why weren’t these issues noted by anyone who Duncan ran this idea by first? Why does it take this level of willingness to break with social norms to notice the skulls? And no, intoning “I Have Noticed The Skulls” doesn’t mean you’ve actually addressed the problem unless you actually address it. Twelfth virtue!
In a broader sense, what the hell happened? I read the Sequences roughly when they came out, commented here occasionally, moved over to SSC and, more often, the associated subreddit. I donate effectively and regularly, I do my best to tax people’s bullshit with bets, and I do feats with spaced repetition. Apparently while I was doing that and not being directly involved in the community, it turned into… this. Scott Alexander is getting published in moderately prestigious outlets. AI risk is mainstream. Effective Altruism is considerably more mainstream than it was. But the community at the center of it has, if anything, regressed, from what I’ve seen here.
Maybe it wasn’t designed for grown-ups. To quote Duncan,
(Comment too long to add more directly.)
Somewhere else in the comments, Qiaochu says:
Well, given the trajectory of your own life, Qiaochu, I think that actually counts as an argument against “Dragon Army”, and really the rationalist community as a whole, being good for the participants. I notice that you’ve shifted from posting insightful, detailed blog posts to impersonally spamming links to rationalist ingroup bullshit on Facebook all the time—in some sense it’s like you’ve been trending in the direction of being less and less of a real person as time goes on. (Which, as a friend of mine pointed out, is actually generically very common, like how a smart and quirky high school student goes to Harvard, starts adopting more and more of a “professional” demeanor, becomes progressively less interesting, and eventually dies a mental death far in advance of their physical expiration...)
Oh, dear. This is terrible, and I wish you hadn’t posted it, because there’s literally no value to be had in delivering this sort of message in this sort of way. Disendorse; I claim this is evidence that most of your arguments about social capability should be somewhat discounted, since they’re coming from someone unskilled.
I honestly think this person has been engaged with enough, at least until they make the kind of concrete claims you’ve been asking for. I think it’s commendable to have responded with the good mix of “look at their plausibly good points while calling them out on their bad points”, but at some point it becomes uncommendable to engage with people who are clearly not arguing in good faith.
Yeah, I’m done replying at this point. +1 for the outside view check, though—if I weren’t already done, I would’ve appreciated your intervention.
I disagree.
Fair. Care to put forth a model? You don’t have to; simply weighing in is also a contribution (just a less useful one).
Our ability to concretely describe the effects of social groups on people in general is kind of limited, but things like “person X joined social group Y and now they concretely do behavior Z” are available. If you see people join a group and then become concretely worse (in your own assessment), I think it can be valuable to refer to specifics. I think it can be important and virtuous to convey what you think is a pernicious process, and unfortunately naming someone you personally know is a very effective, if cruel, way to do it. Anecdata, and especially anecdata based on the content of someone’s facebook feed, is not a great snapshot of a person at different times, but it’s still a source of information.
I’m not sure what you think a better sort of way to deliver this sort of message is, but to some extent any nicer way to do it would be less effective in conveying how bad you think the situation is.
That seems true and correct to me. I note that my response to this specific comment was … motivationally entangled? … with my responses to this person’s other comments, and that I was adopting a cross-comment strategy of “try to publicly defend certain norms while engaging with everything else that doesn’t violate those norms.”
I think it’s defensible to say that, in so doing, I lost … fine-grained resolution? … on the specific thing being said above, and could’ve teased out the value that you were able to identify above separate from my defense of a) norms and b) Qiaochu.
Thanks!
lol
They were just doing their part against dysgenics and should be commended.
Sounds interesting, I’d like to hear more about this.
Being only on the periphery of the community, I’m extremely curious who said valiant man is (full disclosure: this is so I can avoid them and/or assess why the community has not yet shunned them, as I would hope they’d shun you).
Being only on the periphery of the community, I’m extremely curious why your instinctual reaction to a very politically incorrect idea is to shun the people supporting it, and why your model of the world bizarrely concludes that (1) people who live 20+ years as men and then decide, because of their autogynephilic fetish and repressed femininity, that they’re better off as women and therefore are women, and (2) people who have severe mental illnesses that cause them to become suicidal upon contemplation of their own bodies are somehow Actually the Opposite Sex in some timeless, eternal manner which becomes true as soon as they realize it’s true.
Being only on the periphery of the community, I’m extremely curious why you imagine people who are objectively a bunch of losers who can’t seem to accomplish anything of value would be the ones shunning me rather than the other way around. If I were a member of the cultlike “community”, sure, social ostracization would be possible. (Thankfully, I’m not.)
For someone who thinks that they are immune to being shunned, you sure do use an anonym.
I’ve had some thoughts and feelings in this vein; skepticism of trans and so forth. I hold that skepticism with skepticism, though, and I do not reach the point of telling the several extremely smart, perceptive, capable, and empathetic trans humans I know that they’re e.g. dumb or wrong or sick or confused, when I have no inside view, and I think it’s somewhat abhorrent and damaging to the social fabric to start these conversations in any but the most careful and respectful way. That being said, I’d be curious to hear more of the thoughts on the other side of the zeitgeist. If you feel like naming this valiant man in private, I commit to not sharing their name any farther than they themselves say is okay.
Hi! 18239018038528017428 is almost certainly referring to me! (I would have predicted that you’d already have known this from Facebook, but apparently that prediction was wrong.)
I tried that first. It turns out that it doesn’t work: any substantive, clearly-worded claims just get adversarially defined as insufficiently respectful. I still had something incredibly important to protect (there is a word for the beautiful feeling at the center of my life, and the word is not woman; I want the right to use my word, and I want the right to do psychology in public and get the right answer), so I started trying other things.
Zack, I think the problem (from my perspective) is that you tried being respectful in private, and by the time you started talking about this publicly, you were already being really harsh and difficult to talk to. I never got to interact with careful/respectful you on this topic.
(I understand this may have been emotionally necessary/unavoidable for you. But still, from my perspective there was a missing step in your escalation process. Though I should acknowledge that you spurred me to do some reading & writing I would not otherwise have done, and it’s not impossible that your harshness jolted me into feeling the need to do that.)
Yeah, that makes sense. Sorry. Feel free to say more or PM me if you want to try to have a careful-and-respectful discussion now (if you trust me).
Thanks. I don’t think that would be good for me, at least right now, but thanks for the offer.
My thoughts on the matter are mostly in my ITT entry on Ozy’s blog and then also in the most recent thread on this topic on their blog. I guess I’d be somewhat curious about your responses to those thoughts.
I agree. E.g. Scott Alexander has said he will ban people from his blog if they do not speak as if the trans theories were true, even if they believe them to be false. But that doesn’t mean it is a good option to be as rude as possible, like 18239018038528017428 above. (Obviously I am not saying that you have adopted this approach either.)
If you’ll allow me, I would like to raise a red-flag alert at this sentence. It seems poorly worded at best, and in worse scenarios indicative of some potentially-bad patterns of thought.
Presumably, as a member of a community of aspiring rationalists, not to mention the staff of CFAR, telling the people you know when (you think) they’re wrong or confused is, or should be...your daily bread. (It goes without saying that this extends to noticing your own confusion or wrongness, and encouraging others to notice it for you when you don’t; the norm, as I understand it, is a cooperative one).
Telling people when they might be sick is (if you’ll forgive me) hardly something to sneeze at, either. They might want to visit a doctor. Health is, for understandable reasons, generally considered important. (This includes mental health.)
As for dumb, well, I simply doubt that comes up often enough to make the statement meaningful. Whatever may be said about the rationalist community, it does not appear to draw its membership disproportionately from those of specifically low intelligence. Your acquaintances—whatever their other characteristics—probably aren’t “dumb”, so to tell them they are would simply be to assert a falsehood.
So: may I be so bold as to suggest either a reformulation of the thought you were trying to express, or even a reconsideration of the impulse behind it, in the event that the impulse in question wasn’t actually a good one?
This is a fair point. I absolutely do hold as my “daily bread” letting people know when my sense is that they’re wrong or confused, but it becomes trickier when you’re talking about very LARGE topics that represent a large portion of someone’s identity, and I proceed more carefully because of both a) politeness/kindness and b) a greater sense that the other person has probably thought things through.
I don’t have the spoons to reformulate the thought right now, but I think your call-out was correct, and if you take it on yourself to moderately steelman the thing I might have been saying, that’ll be closer to what I was struggling to express. The impulse behind making the statement in the first place was to try to highlight a valuable distinction between pumping against the zeitgeist/having idiosyncratic thoughts, and just being a total jerk. You can and should try to do the former, and you can and should try to avoid the latter. That was my main point.
Here’s what it looks like to me, after a bit of reflection: you’re in a state where you think a certain proposition P has a chance of being true, which it is considered a violation of social norms to assert (a situation that comes up more often than we would like).
In this sort of situation, I don’t think it’s necessarily correct to go around loudly asserting, or even mentioning, P. However, I do think it’s probably correct to avoid taking it upon oneself to enforce the (epistemically-deleterious) social norm upon those weird contrarians who, for whatever reason, do go around proclaiming P. At least leave that to the people who are confident that P is false. Otherwise, you are doing epistemic anti-work, by systematically un-correlating normative group beliefs from reality.
My sense was that you were sort of doing that above: you were seeking to reproach someone for being loudly contrarian in a direction that, from your perspective (according to what you say), may well be the right one. This is against your and your friends’ epistemic interests.
(A friendly reminder, finally, that talk of “being a total jerk” and similar is simply talk about social norms and their enforcement.)
I was not aiming to do “that above.” To the extent that I was/came across that way, I disendorse, and appreciate you providing me the chance to clarify. Your models here sound correct to me in general.
Your comment was perfectly fine, and you don’t need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there’s a strong chance I’ll be without internet for several days and likely won’t be able to further engage with this topic.
Duncan’s original wording here was fine. The phrase “telling the humans I know that they’re dumb or wrong or sick or confused” is meant in the sense of “socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect”.
To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that’s a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip in propositions about values into their claims.
I’m frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a large number of more subtle negative implications about what they’ve written are all ways of socially discouraging someone from doing something. I think that Duncan’s comment was fine, I certainly think that he didn’t need to apologize for it, and I’m fucking appalled that this conversation as a whole has managed to simultaneously promote slipping value propositions into factual claims, and promote indirectly encouraging social rudeness, and then successfully assert in social reality that a certain type of overtly abrasive value-loaded proposition making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition making, all without anyone actually saying something about this.
Your principal mistake lies here:
Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term. Moreover, the cost is not the same for everyone: for some people “diplomatic” communication comes much more naturally than for others; as I indicate in another comment, this often has to do with their status, which, the higher it is, the less necessary directness is, because the more people are already preoccupied with mentally modeling them.
If we’re engaging in disclosures of this sort, I have felt similarly about many a comment of yours, not least the one to which I am replying. In your second paragraph, for example, you engage in passive aggression by deceptively failing to acknowledge that the people you are criticizing would accuse you of the exact same sin you accuse them of (namely, equating “trans people disproportionately have certain traits” and “boo trans people”). That’s not a debate I consider myself to be involved in, but I do, increasingly, feel myself to be involved in a meta-dispute about the relative importance of communicative clarity and so-called “niceness”, and in that dispute, come down firmly on the side of communicative clarity—at least as it pertains to this sort of social context.
I read your comment as a tribal cheer for the other, “niceness”, side, disingenuously phrased as if I were expected to agree with your underlying assumptions, despite the fact that my comments have strongly implied (and now explicitly state) that I don’t.
As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than one where they aren’t. Look at what happened to LW.
It’s fairly common for this cost to go down with practice. Moreover, it seems like there’s an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.
I’m not necessarily claiming that you or any specific person is acting this way; I’m just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it.
That’s a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (when suggesting that some EA organizations are less effective than previously thought), and sometimes that involves, say, penalizing people for acting on claims they’ve made to others’ emotional resources (reprimanding someone for being rude when that rudeness could have reasonably been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, and we tend to get them both mostly wrong.
To whatever extent this is accurate and not just a correlation-causation conversion, this very dynamic is the kind of thing that LW exists (existed) to correct. To yield to it is essentially to give up the entire game.
What it looks like to me is that LW and its associated “institutions” and subcultures are in the process of dissolving and being absorbed into various parts of general society. You are basically endorsing this process, specifically the aspect wherein unique subcultural norms are being overwritten by general societal norms.
The way this comes about is that the high-status members of the subculture eventually become tempted by the prospect of high status in general society, and so in effect “sell out”. Unless previously-lower-status members “step up” to take their place (by becoming as interesting as the original leaders were), the subculture dies, either collapsing due to a power vacuum, or simply by being memetically eaten by the general culture as members continue to follow the old leaders into (what looks like) the promised land.
I agree that the incentives you describe exist, but the analysis cuts both ways: the more someone claims to have been harmed by allegedly-nasty speech, the more the balance of discussion will reward them by letting them restrict speech while reaping the rewards of getting to achieve their political and interpersonal goals with those speech restrictions.
Interpersonal utility aggregation might not be the right way to think of these kinds of situations. If Alice says a thing even though Bob has told her that the thing is nasty and that Alice is causing immense harm by saying it, Alice’s true rejection of Bob’s complaint probably isn’t, “Yes, I’m inflicting c units of objective emotional harm on others, but modifying my speech at all would entail c+1 units of objective emotional harm to me, therefore the global utilitarian calculus favors my speech.” It’s probably: “I’m not a utilitarian and I reject your standard of decency.”
In most cases calling someone sick when the person suffers from a mental issue isn’t the best way to get them to seek professional help for it.
What is the best way? It’s not like you can trick them into it.
A more serious issue, I would have thought, would be that the “professional help” won’t actually be effective.
If you don’t have any specific tools, I would advocate a mix of asking questions to help the other person clarify their thinking and providing information.
“Did you know symptoms X and Y are signs of clinical mental illness Z?” is likely more effective than telling the person “You have mental illness Z.”
If the other person doesn’t feel judged but can explore the issue in a safe space where they are comfortable working through an ugh-field, it’s more likely that they will end up doing what’s right afterwards.
I don’t think “Did you know symptoms X and Y are signs of clinical mental illness Z?” is appreciably different from “You very possibly have mental illness Z”, which is the practical way that “You have mental illness Z” would actually be phrased in most contexts where this would be likely to come up.
Nevertheless, your first and third paragraphs seem right.
In a conversation, you get a different reaction if you ask a question that indirectly implies that the other person has a mental illness than if you are direct about it. The phrasing of information matters.
This is about behavior, not belief.
I have not disputed “autogynephilic men with repressed femininity and a crossdressing fetish pretending to be women aren’t actually women”, though neither have I affirmed it.
Regardless, I still would not want you, personally, in any community I’m part of, because your behavior is bad. I’m not interested in debating this; obviously we disagree on what acceptable behavior looks like. Whatever; different strokes for different folks—clearly this community is not for you, but also you seem to still be here, for some reason.
And I would still want to know who’s going around trying to convince people of that statement, so that I could avoid them (for their proselytizing, not for their beliefs) and/or assess why the community has not yet shunned them. (Obviously you can shun the community while it simultaneously shuns you. These are not mutually exclusive.)
So, again, I still want to know who you’re talking about. Who are you talking about?
Hi! 18239018038528017428 is almost certainly talking about me! My detailed views are probably more nuanced and less objectionable than you might infer from the discussion in this thread? But to help you assess for yourself why “the community” (whatever that is) has not yet shunned me, maybe start with this comment (which also contains links to my new gender blog).
Ah, thanks. Turns out I do know who you are and have already thought about the question of why (and to what extent) the community continues to interact with you to my satisfaction. (And yes, the throwaway’s description of you is somewhat misleading, though mostly that’s because, from their behavior, I would expect anyone they praise to be terrible without redeeming features).
For obvious reasons, I’m extremely curious to hear your analysis if you’re willing to share. (Feel free to PM me.)
I don’t think that’s a good inference! (See the anti-halo effect and “Are Your Enemies Innately Evil?”) Even if you think the throwaway’s rudeness and hostility makes them terrible, does it really make sense for guilt-by-association to propagate to anyone the throwaway approves of for any reason?
(from the great-grandparent)
I think it would be less cruel and more honest to just advocate for punishing people who believe a claim, rather than to advocate for punishing people who argue for the claim while simultaneously insisting that this isn’t a punishment for the belief. What would be the point of restricting speech if the goal isn’t to restrict thought?
Probably this is going to be too blunt, but it’s honest, and I’m assuming you’d prefer that:
Basically, because you are psychotic, not an asshole (or at least, afaict, only an asshole as a consequence). And dealing with people who are behaving poorly because of mental issues is a hard problem, especially in a community where so many people have mental issues of one sort or another.
Again, this doesn’t mean I disagree with you (and again neither have I claimed to agree). The fact of your psychosis is not obviously prior to your beliefs. But it is very obviously prior to how you have acted on those beliefs. Or at least it is obvious to me, having spent a great deal of time with friends who behave like you’ve behaved (in public, at any rate; of course you should discount this evidence given that I haven’t interacted with you in person, or at least not much).
It’s evidence, yes.
… This is a much larger conversation for another time. If you have not already internalized “just because I believe something is true does not make it socially acceptable for me to go around trying to convince everyone else that it’s true”, I don’t know that I will be able to briefly explain to you why that is the case.
Yes, thank you!
I definitely went through some psychosis states back in February and April, but I seem to be pretty stably back to my old self now. (For whatever that might be worth!) I have a lot of regrets about this period, but I don’t regret most of my public comments.
Oh, I think I understand why; I’m not that socially retarded. Even so—if there’s going to be one goddamned place in the entire goddamned world where people put relatively more emphasis on “arguing for true propositions about human psychology because they’re true” and relatively less emphasis on social acceptability, shouldn’t it be us? I could believe that there are such things as information hazards—I wouldn’t publicize instructions on how to cheaply build a suitcase nuke—but this isn’t one of them.
Sure. And we do put relatively more emphasis. But we have not completely and totally thrown away all social convention. Nor should we: much of it exists for good reason.
That seems so obviously true that the idea of shunning someone for fighting against people arguing the opposite seems crazy to me. I thought we just used “she” to be polite, not that we believed them to be women in any meaningful sense.
I cannot imagine participating in this community for any length of time and sincerely concluding that the mental state you’ve described is actually universal.
Hi! I believe I’m the only person to try shunning them, which happened on Facebook a month ago (since Zack named himself in the comments, see here, and here). The effort more or less blew up in my face: it got a few people to publicly say they were going to exclude me, or try to get others to exclude me from future community events, and was also a large (but not the only) factor in getting me to step down from a leadership position in a project I’m spending about half of my time on. To be fair, there are a couple of places where Zack is less welcome now also (I don’t think either of us have been successfully excluded from anything other than privately hosted events we weren’t likely to go to anyways), and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance. So, I guess we’re in a stalemate-like de facto ceasefire, though I’d be happy to pick up the issue again.
I still stand by my response to Zack. It would have been better if I’d been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself; that’s an area where I’m still trying to grow. I think that collaborative truthseeking is aided rather than hindered by shunning people who call others “delusional perverts” because of their gender. This is, at least in part, because keeping discussions focused on truthseeking, impact, etc. is easier when there are social incentives (i.e. small social nudges that can later escalate to shunning) in place that disincentivize people from acting in ways that predictably push others into a state where they’re hurt enough that they’re unable to collaborate with you, such as by calling them delusional perverts. I know that the process of applying said social incentives (i.e. shunning) doesn’t look like truthseeking, but it’s instrumental to truthseeking (when done with specificity and sensitivity/by people with a well-calibrated set of certain common social skills).
(Just noticed this.)
I wasn’t aware of this, but it seems unfortunate. If successfully ostracizing me isn’t going to happen anyway, “both of you step down from something that you previously wanted to do” seems like a worse outcome than “neither of you step down.”
(For my own part, while I wouldn’t invite you to any parties I host at my house, I have no interest in trying to get other people to exclude you from their events. I consider my goal in this whole affair as simply to make it clear that I don’t intend to let social pressure influence my writing—a goal at which I think I’ve succeeded.)
I hadn’t bothered addressing this earlier, because I wanted to emphasize that my true rejection was “I don’t negotiate with emotional blackmailers; I’m happy to listen and update on substantive criticism of my writing, but appeal to consequences is not a substantive criticism”, but since it is relevant, I really think you’ve misunderstood the point of that post: try reading the second and third paragraphs again.
What I’m trying to do there is highlight my disapproval of the phenomenon where the perceived emotional valence of language overshadows its literal content. I understand very well that the phrase “delusional pervert” constitutes fighting words in a way that “paraphilic with mistaken views” doesn’t, but I’m interested in developing the skill of being able to simultaneously contemplate framings with different ideological/emotional charges, especially including framings that make me and my friends look bad (precisely because those are the ones it’s most emotionally tempting to overlook). People who aren’t interested in this skill probably shouldn’t read my blog, as the trigger warning page explains.
(Seriously, why isn’t the trigger warning page good enough for you? It’s one thing to say my writing to should have a label to protect the sensitive, but it’s another thing to say that you don’t want my thoughts to exist!)
Not all goals are achievable by sufficiently-skilled gentle social manipulation. If you can show me an argument that can persuade me to change my behavior given my values, then I’ll do so. If no such argument exists, then your skill and gentleness don’t matter. (At least, I hope I’m not that hackable!)
It sounds like something happened and there was some miscommunication and things are not fully healed. Would you like help with that?
I appreciate your offer to talk things out together! To the extent that I’m feeling bad and would feel better after talking things out, I’m inclined to say that my current feelings are serving a purpose, i.e. to encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed, though that wouldn’t have been at all true of the old version of myself. This algorithm is a bit new to me, and I’m not sure if it’ll stick.
Overall, I’m not aware that I’ve caused the balance of the discussion (i.e. pro immediate abrasive truthseeking vs. pro incentives that encourage later collaborative truthseeking & prosociality) to shift noticeably in either direction, though I might have made it sound like I made less progress than I did, since I was sort of ranting/acting like I was looking for support above.
Is this really a winning move for you? I’m not budging. It doesn’t look like you have a coalition that can deny me anything I care about. From my perspective, any activity spreading the message “Zack M. Davis should be shunned because of his writing at http://unremediatedgender.space/” is just free marketing.
This seems similar to Leverage in a lot of ways. It seems like it would be really instructive to contrast your plan with Leverage’s plan—as initially intended, and as executed—to see what you plan to invest in that they aren’t, what you’re not doing that they are, and costs and benefits of those differences.
Other contrasting case studies might also add clarity:
Esalen
kibbutzim
the old Singularity Institute house
residential colleges
fraternities
Buddhist monasteries
Christian monasteries
actual armies
actual paramilitary organizations / militias
Sea Org
It probably makes sense to 64/4 these with rough sketches from memory/stereotypes/Wikipedia-ing before bothering to do any time-intensive research.
Yep. I don’t have strong ties to Leverage, but I’m talking with a couple of the people and have friends involved who have better models than me. +1 to this point.
Esalen is worth noting because it’s a place that’s extremely intellectually productive. There are many different paradigms of bodywork that come out of Esalen.
Esalen is central for the history of Feldenkrais, Rolfing and a bunch of other paradigms.
If you could build a community that succeeds in doing for rationality what Esalen did for bodywork, that would be a huge success.
What is Esalen?
The Wikipedia page is https://en.wikipedia.org/wiki/Esalen_Institute
In his Cargo Cult Science speech, Feynman describes the place by saying:
What does this mean? Google isn’t helping and the only mention I see on LW is this post.
The Pareto Principle says that you can 80:20 many things, i.e. get 80% of the value from 20% of the work. If you 80:20 the 20%, you end up with 64% of the value for 4% of the work.
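To make that arithmetic concrete, here’s a minimal sketch (the 80:20 split is a stylized assumption, not an exact law):

    # Stylized Pareto split applied twice: 80% of the value from 20% of the work,
    # then 80% of that 80% from 20% of that 20%.
    value_share = 0.8 * 0.8   # 64% of the original value
    work_share = 0.2 * 0.2    # 4% of the original work
    print(f"{value_share:.2f} of the value for {work_share:.2f} of the work")
    # prints: 0.64 of the value for 0.04 of the work, hence "64/4"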
For the next three months, I will embark on my own experiment of living in a high-standards high-group-activity environment. Specifically, a Buddhist temple.
The temple has an even tighter schedule. All residents wake up together at 5 am and go to sleep together at 10 pm. The rest is meditation, study and work, with 4 hours of free time. The weekends are free, so it adds up to being told what to do for 85 hours per week.
Over the years, I have stayed there six times for a week. The first days are usually a fight to adjust to the lower standards of living (the unpleasant valley). As the days go by, I become increasingly energized and sharp. When I leave, I’m in the best state I can be. Not even a CFAR workshop measures up to how much I upgrade in such a short time. And it’s not the meditation. I’ve gone for days without really meditating and I would still upgrade.
This has led me to believe that something about our individualist style of living is profoundly wrong, at least for some people. It seems like a solution to many of our problems lies in collectivism. Think mental health, akrasia, Hufflepuff virtue, etc.
I am really interested in how this is going to fly. Please do post updates. I would also love to share my perspective. I think I’ll have some interesting data.
If you’re willing, sharing your perspective in more detail here is welcome (so that all the models are in one place). Else, you’re welcome to PM or email me.
In the spirit of Murphyjitsu, the most obvious failure mode that you didn’t mention is that I expect you to burn out dramatically after a few weeks, from exhaustion or the psychological strain of trying to optimize the experiences of N people. The bootcamp phase is not analogous to anything I’ve heard of you doing sustainably for an extended period of time.
So, do you expect Dragon Army Barracks to work if Eli has to take over for you in Week Four?
Hmm, interesting. My self-model is somewhat incapable of burning out during this, due to an ability to run forever on spite (that’s only somewhat tongue-in-cheek).
It’s a solid point, though. If I condition on burnout, I think that Eli manages or not based on the level of specificity and concreteness that we managed to get in place in the first few weeks. Like, I don’t think Eli is competent (yet) to create the thing, but I do think he’s competent to oversee its maintenance and preservation. So that seems to put a somewhat higher priority on early systemization and scaffold-building than might have otherwise been in my plan.
Good question.
Edit: also, probably the closest analogue to this in my past is being the sole functioning RA on a dorm hall of ~30 high schoolers in a high-stress school environment. That was probably within the same order of magnitude of juggling, once you account for the fact that my increase in skill since then is balanced by the increase in complexity/responsibility. I did a lot to try to manage the experience of those thirty people.
FWIW, my model of Duncan agrees with his model of himself here. I don’t expect him to burn out doing this.
…and even if he does, I expect that the combo of Eli plus the sort of people I imagine being part of Dragon Army would pull it through. Not guaranteed, but with a strong enough chance that I’m basically not worried about a failure mode along the lines of “Flops due to Duncan burnout and subsequent systems failures.”
I would like to say that I share your strong preference for being second in command over first, and would like to add a datapoint: I find being first in command really stressful in a way that doesn’t hit me or mess with my decision making until after I relinquish the role, at which point it hits hard. I’m curious if that happens or has happened to you. (Examples: being first responder in a medical emergency and keeping everything going right up until the victim had arrived at the E.R., then throwing up and shaking for the rest of the night; leading a major college class project for a semester that went really well, then essentially shutting down and hiding in my room for a week.)
If I were trying to do what you seem to be trying to do, I would be setting myself up for a major crash once I’d brought the experiment to a close or handed off the baton. Obviously our minds are different in many ways, but I figured it was worth checking to see if you had that issue and found a solution that might be stealable.
Not a full solution, but gesturing in a direction that you might find useful: build the system in such a way that gaming it is encouraged and useful, and that the punishments are somehow self-balancing.
E.g. if the punishment is “do some chores”, somebody who figures out that doing the chores is easier than their other obligations is at least clearing the list of all the chores that need to be done. If they run out of chores to do, new tasks can be added to the list, and they can choose whether doing them is still worth it.
I’m here kinda reminded of the evolution of pen’n’paper RPGs, which originally had disadvantages you could buy during character creation that made you more powerful in exchange; of course people would munchkin by “forgetting” the disadvantages during play. Newer games got past that by making disadvantages give you zero points during character creation (or even cost!), and instead had them award benefits if you roleplayed them during actual play. In general, games have gotten better the more they have built in “trying to munchkin the rules automatically leads you to play the game more like it was designed to be played” as a fundamental game design principle.
Not sure of how to do the “self-balancing costs” thing, but I am reminded of the bidding systems some houses have for chores, where you offer money for doing some task and if someone else finds the offered amount of money more valuable than the pain of doing the chore they do it; otherwise you do it yourself.
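For what it’s worth, here’s a minimal sketch of how one round of such a chore auction could work (hypothetical names and numbers, purely to illustrate the mechanism, not a description of any real house’s system):

    # Hypothetical chore-auction round: whoever is stuck with a chore offers money;
    # any housemate who values the cash more than they dislike the chore can take it.
    def run_chore_auction(offer, reserve_prices):
        """reserve_prices maps housemate name -> minimum payment they'd accept."""
        willing = {name: ask for name, ask in reserve_prices.items() if ask <= offer}
        if not willing:
            return ("poster", 0)  # nobody bites; you do the chore yourself
        cheapest = min(willing, key=willing.get)  # lowest reserve price takes it
        return (cheapest, offer)

    # Example with made-up numbers: the poster offers $15 to get out of dish duty.
    print(run_chore_auction(15, {"Alice": 20, "Bob": 12, "Carol": 30}))  # ('Bob', 15)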
+1 to the general idea; not sure how to implement it myself but it’s worth some five-minute timers.
I have tried similar things.
My strongest recommendation is to beware of internal power struggles. Even if you are fully understood to be in charge, if everyone under you is in a state of emotional mutiny, you WILL become compromised, and you WILL make mistakes, and those mistakes WILL be used to justify further emotional mutiny. This will spiral until you lose everything.
Moreso, some percentage of your trusted minions WILL undergo emotional mutiny. They will discover that they’d rather be somewhere else, doing something else. They’ll discover that there are people other than you they’d like in charge of their lives. They will discover that they don’t trust you as much as they thought they did. Even if you pick the best people—hell, ESPECIALLY if you pick the best people, because the best people will have other people vying for their attention, seeking to undermine you from without.
Chiming in because the problem of helping people level up is close to my heart.
Putting the social dynamics of the experiment aside (since there are plenty of people discussing that aspect), I’d like to offer some good-natured skepticism about the overall approach. (Good-natured meaning, I hope you actually do pursue this because I’m genuinely curious about how this will play out—assuming the safety concerns others have raised are handled well, of course).
My skepticism is: this is too meta and too complicated to lead to actual progress.
I spent a few years at a company that tried to inculcate a deliberate process for getting to the right answer, including a culture of radical honesty and formal procedures for making decisions and learning from mistakes. This was a major priority at the company for a long period of time (last I checked, it’s still going on), with backing from the entire senior management team, and was enforced by firing people who couldn’t or wouldn’t skillfully participate. I.e., they took it really seriously and put a lot of effort into it. The people who conceived and implemented it were in my opinion extremely smart and competent.
That said, in my opinion the effort spent on this program did more harm than good to the functioning of the company. The values and culture became an end in itself, as opposed to a means for helping achieve goals, and endless amounts of time and energy were spent debating, elucidating, learning, and critiquing the system. Competent professionals ended up becoming ineffectual because they gave up (or were forced out of) their unreflective expertise and got stuck in endless cycles of second-guessing. Some of that self-reflection may have given rise to new levels of skill (in my case, I did in fact feel like I benefited from my time there, although I think that was largely because it was my first job out of college so I didn’t have that much to un-learn), but generally people felt disempowered by the initiative rather than improved.
In contrast, for the last few years, I’ve been running a tiny company where we have very little meta discussion and mostly just do object-level work. I feel 1000x more productive now than I did at my prior job.
My takeaway from this is that the optimal ratio of meta-level tuning to object-level practice is [small number] : [large number]. Meta-level thinking is extremely valuable and important, but I view it as the rudder on a boat: you need to be constantly making adjustments to keep pointing in the right direction, but 99% of the power generation goes into the main engine pointing forward.
If I had to generate a hypothesis as to why the concrete achievements of the rationalist community are less than might be desired, it would be that the community spends way too much of its energy on meta topics instead of on object-level progress. This is understandable, since a) meta-level discussion of rationality is what created the community in the first place, and b) object-level discussion can often be very boring compared to meta-level discussion. (I miss the intellectual stimulation of my previous job, even as I see it as basically a waste of time in terms of actually building a successful company.) While understandable, I think it leads to predictable outcomes: a lot of talk happens but not much gets accomplished.
Looking at the proposed charter, I suspect there will be a very high amount of meta-level discussion, probably significantly more so than at my prior job that I thought was way too meta. That’s because a) it’s built in to the daily schedule, b) it’s built into the mission, which is expected to evolve over time with the participants, and c) it’s built into the community that the participants will be drawn from.
In addition to being too meta, I also suspect this experiment is too complex. Experimenting with a bunch of different norms, on top of the code of conduct and daily schedule, seems wildly ambitious to me. In the company I worked for, the set of norms and practices was set in stone by executive fiat, recruits to the company were presented with them prior to accepting jobs, and adherence to them was a major part of performance evaluation; there was still a very high employee churn rate and a general agreement that the norms / practices as specified weren’t consistently well-practiced throughout the company. The Dragon charter is for a smaller group of people, which makes things easier, but the norms / practices are expected to be a moving target, which makes things harder.
In my personal experiments with self-improvement, I’ve had the most success with extremely simple plans. My most successful self-intervention to date has been to download a simple habit tracker on my phone, and add a new daily habit, moving on to the next only after successful completion of the prior one for 30 days. When I first started trying to learn new habits, I would add a bunch of new habits at once, and I would always fail. It took me a very long time to get patient enough to only try to change one thing at a time (which requires accepting that I’m going to have habits I don’t like in the interim that I don’t try to do anything about).
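For illustration, the “one habit at a time, 30 days before adding the next” rule is simple enough to sketch (hypothetical structure, not any particular tracker app):

    from datetime import date, timedelta

    # One-habit-at-a-time rule: a new habit is only unlocked once the current
    # habit has an unbroken streak of daily completions of the required length.
    def current_streak(completions, today):
        """Count consecutive daily completions ending today; completions is a set of dates."""
        streak, day = 0, today
        while day in completions:
            streak, day = streak + 1, day - timedelta(days=1)
        return streak

    def ready_for_next_habit(completions, today, required_days=30):
        return current_streak(completions, today) >= required_days

    # Example: 30 straight days of completions unlocks the next habit.
    today = date(2017, 6, 30)
    done = {today - timedelta(days=i) for i in range(30)}
    print(ready_for_next_habit(done, today))  # True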
Similarly, I’ve been successful growing my current company by having an extremely boring strategy of: ship code, talk to customers, ship code, talk to customers.
Simplicity does not come naturally to me; I like my ideas and strategies to be convoluted, complicated, ambitious, and interesting—I get very bored with simple, straightforward approaches. So I’m a big believer in simplicity because I’ve learned the hard way, against all my natural inclinations, that—unlike my natural inclinations—it actually works.
So if I were trying to design a charter, I would pick one or two things that I think would be most likely to have a game-changing impact, and just focus on those things until they worked (or didn’t). In contrast, the charter as it exists now feels to me like it has way too many moving pieces. That’s just my intuition, of course, but I hope I’ve given a feel for where that intuition comes from.
Anyway, I admire the ambition in doing a project like this, so I hope my criticism is constructive and useful.
Thanks for the long and detailed response. I enjoyed reading it.
It’s interesting that you highlight meta as being a dangerous failure mode—I actually strongly agree, which is why the aesthetic is tuned toward stuff like “just exercise” and “housemates should produce visible work.” My sense is that, in practice, a strategy of just doing stuff outstrips a strategy of thinking really hard until you find the ideal move, especially when you take into account how many iterations you can get in if you’re churning hard.
Hilariously, though, I’m further inside the rationalist bubble than I thought, because I accept your overall summation even though the intent was to be THE OBJECT LEVEL HOUSE (or at least, the house that does stuff even if it goes meta on norms). I still think we’re set up to be relatively ahead, but as you point out, that’s not necessarily a sufficient bar.
However, I’m much more concerned with:
That rings very true to me, and has been an active concern of mine for the past couple of weeks. It seems like there are something like a hundred activities/experiments/norms/projects that are worthy of including in this, and something like 1.3 slots per week (and thus not even room for half), and I’m not at all certain how to best pick and choose and prioritize and optimize for success. In part, I’m hoping that if we just throw ourselves in and iterate (see above) we’ll do better than if we agonize, but yeah, there are a lot of moving parts, and I wouldn’t be surprised if we ended up trying to drastically simplify in like our fifth week house meeting.
If I had to really zero in on basics, I think they are:
Never give up on an experiment until its predetermined end date
Spend ~20 hours a week actually interacting in the same physical space as housemates (at least a subset)
… those, I think, are the iron core of the project.
I’m curious why this is so important to you, unless it’s just something to try out. I currently live alone and I like it that way, and I see no reason why spending more time with other people would be such a great thing.
You seem really rigid about excuses though. I think the tendency will be that people will come up with an excuse which one finds unpleasant or difficult to dispute. For example, when I was in the data science bootcamp in Berkeley, people would very frequently say, “I’m sick and I will be working from home today.” Now a lot of people were in fact sick precisely because of so much physical proximity. But it was very obvious in many cases that the basic reason they were staying home was that they were tired of all the company and felt the need to get away. They did not however feel comfortable saying, “I just feel the need to get away.”
The same thing was true when I lived in a monastery. You could not say “I just feel like sleeping in this morning,” so people said “I didn’t come this morning because I didn’t feel well.” We all knew that this simply meant they were tired and felt like sleeping in. But no one is comfortable confronting someone with the fact that they’re not really sick if they say they are.
The focus on physical presence is a combination of research showing that it matters (there’s some stuff I’ve collected from Dunbar, for example) and strong personal intuition from past experience. In many ways, it’s the core of the thing being tested out, but I have a lot of weight on “it turns out to matter more than just about anything else.”
re: excuses, the intention of the house is Not To Do The Stupid Thing.
Clearly, “mental health” days are a real phenomenon—I’ve taken some myself. And on a larger scale, psych blockers/motivational issues are also real. So it’d be stupid to a) pretend they don’t happen, and b) push directly against them all the time, and never look at undercutting them or working around them. This plan pushes directly against them some, with commitments to just show up anyway, but that’s not the only tool—one of the things I hope to do is increase the candor of all housemates, at least within the context of the house. This will take some practice and reinforcement, but I much prefer a norm of “Huh. I notice I just really didn’t want to show up today” --> figure out what’s going on and address it systematically, to a norm of “little white lie that nobody calls out.”
It’s also worth noting that the house has a pretty high introvert quotient, so there will be a lot of us (myself included) who are motivated to safeguard systems giving one the ability to get away from people for a while.
Thank you for writing that! It’s great to see the “too meta” problem spelled out so clearly. It’s similar to the situation in programming that has long puzzled me. Many people and companies have accumulated processes that they swear by (code review, type systems, continuous integration, agile and whatnot) but at the same time lots of people do amazing work with very little process.
It seems like meta stuff has a way of self-justifying and growing, like a bureaucracy. It’s useful if you’re stuck and nothing works, but if you’re making any progress at all, it’s better to steer with the engine so to speak. Radical meta proposals sound attractive to people who have fought their minds to a standstill, but even for such people I think a better idea is starting one small object-level thing on a strict schedule (gym is a good choice), making the mind more mobile for other things in turn.
Are there people external to the project who are going to keep an eye on this? I think it would be sensible for each participant to have a buddy outside the house who checks in with them regularly. And for each buddy to know who the other buddies are.
I’ve come around somewhat to the outside buddy idea below; I dunno about the buddies knowing each other. That seems to introduce a whole new layer of difficulty, unless you’re just talking about, like, an email list.
Cool. Yes, a mailing list sounds even better than the low-tech solution I had in mind, which was “every buddy learns 80% of the names of the other buddies through the grapevine, and they happen to be one or two hops away on the social network”.
This seems extreme. Do you not expect that each participant will already have at least one friend outside the house they can talk to about the house if things go poorly, without this needing to be an explicit policy? Or do you worry that things will go so poorly that this won’t work for some reason? If so, can you share a more detailed model?
I think there’s a difference between a friend that one could talk to (if they decide to), and a friend tasked with the specific responsibility of checking in and intervening if things seem to be going badly.
Sure, but what I’d like to know is why Nisan thinks that difference is important in this case.
Parts of the house setup pattern-match to a cult, cult members aren’t good at realizing when they need to leave, but their friends can probably tell much more easily.
(I don’t mean the above as negatively as it sounds connotatively, but it’s the most straightforward way to say what I think is the reason to want external people. I also think this reasoning degrades gracefully with the amount of cultishness.)
Yep, this is why I’m in favor of the “outside friend” norm. In particular, despite not planning to make a bad cult, if I accidentally do, I’m in favor of it being noticed as soon as possible, so it can either be fixed or dismantled.
I’m not proposing a house policy here. I’m suggesting that a Dragon would do well to have regular followups with someone outside the house, and I’m proposing that some members of the wider community offer to be those someones.
In the past I’ve had regular video calls with a couple people who were doing long-term experiments with their lifestyle; I think it was helpful. I believe such an arrangement was part of the Leverage polyphasic sleep experiment.
Jacob is right: There’s a difference between a friend one can reach out to if one needs to, and a friend one is scheduled to talk to once a week. Personally, I struggle to keep up with friends without scheduled meetings, and it sounds like the Dragon Army will be very busy.
Also, there is a difference between reaching out to a friend when things have gone very wrong and one needs to get out; and bringing up a less drastic problem during a weekly check-in. In the first case, you need a couch to crash on and maybe a lawyer. In the second case, you need someone who will listen to you and bring an outside perspective, and maybe refer you to other resources.
Partially, I’m afraid that if this doesn’t go well, our community will lose a cohort of promising people. It would be a shame if that happened because we failed to pay attention to how they were doing.
But also, if the experiment goes very well, this arrangement would be a means by which the wider community can learn from what went right.
I really don’t know what you mean by “lose” here (and I’m worried that others will have varying interpretations as well). Do you mean they’ll become less promising? Not promising? Leave the community? Go crazy? Die?
Anyway, this seems sensible, but I still want to nudge you and everyone else in the direction of sharing more explicit models of what you think could actually go wrong.
Sorry, I was imagining a scenario where a person has an unpleasant experience and then leaves the community because for the last several months all their close contacts in the community were in the context of an unpleasant living situation. That’s bad for the person, and unfortunate for the community as well.
I see a possible failure mode where a member of a participant’s family not into any rationalist community sees the Dragon Army rules and pattern-matches the rules and behavior into ‘cult’ (not arguing whether that pattern match is correct here, just saying that it might happen).
A family member concerned that their loved one might be involved in a dangerous cult might take extraordinary measures to remove that person from the situation, which might get very ugly.
I’m not sure that a nonparticipating buddy is sufficient to mitigate the risk of ‘rescue’.
This is a neat idea!
I expect it to fail. And I kind of wish you wouldn’t try: I give maybe a 1⁄4 chance this fails sufficiently dramatically and publicly that I become less willing to be associated with the community because people start associating it with that failure.
In particular, here is what I expect to happen (~60% confidence it goes down something like this):
Someone will start regularly defecting within the first three months. Maybe they don’t keep up with their chores, maybe they skip meetings, maybe they fail to get along with someone and they fight, maybe they persist in doing something they’ve been asked repeatedly not to do, maybe they chafe under your leadership and start practicing malicious compliance. I don’t expect intentional defection so much as executive dysfunction, to be clear, but it has the same effect either way.
You, personally, will lack the force of character or charisma to fix it. (I haven’t met you in person, so this might be way off; I’m just going off your writing and those of your pictures on Facebook I can see. But it takes an extraordinarily good manager to deal with this problem, and there’s nothing in your bio which implies you are one.) You also, not being legally their military superior, won’t have any actually worthwhile carrots or sticks to offer—this is the core problem, as I see it, that you lack the legal authority to properly enforce anything. Also, rationalists are weird, and often don’t respond that well to the usual incentives.
The rest of the house will lose confidence in your leadership as a consequence.
Bad things. I don’t actually know what happens at this step—people move out, or just stop playing by your rules and it reverts to a standard if unusually dysfunctional group house, or what.
Unfortunately I don’t have fixes to offer you here, other than “try to figure out an enforcement mechanism which will work even on rationalists and which you can legally carry out”. I can’t think of such an enforcement mechanism, but haven’t even put a full five minutes into it. Maybe you already have one in mind and I’ve missed it. To be clear, I don’t think “ostracism” will be remotely sufficient, because of the aforementioned weirdness and the fact that people will have other friends to fall back on. (I guess you could only invite people without other friends, or require them to cut off contact with said friends, but that is a terrible idea.) I also want to say that I’ve seen a number of other communities either fail or struggle due to lack of an explicitly specified and actually effective enforcement mechanism for their rules.
Tiny side note: I think it’s very important that members have regular one-on-one meetings with someone other than you, in case their problems are problems with you which they aren’t willing to bring up to your face.
Thanks for this detailed model. I had a sense of this as a failure mode, but I like the specific way you’ve expressed it.
I do actually have a fair bit of managerial skill. I dunno if it’s better than 1⁄100, but it’s at least in that range. I also completely agree about regular one-on-one meetings with other people; in part, that’s what the “pair debugging/rapport building” time commitment is. I wonder if you think it’s important that they be with a specific other person, or if you think just fostering lots of one-on-one communication hits the thing you’re gesturing toward?
A specific other person intuitively sounds better to me, but that might just be because that’s how it has been done in organizations I’ve been in. (Though it sounds hard to schedule otherwise if it’s not a specific person, and it’s important that this be a regular thing with the specific topic of “talk about how things are going”, not just general time spent together.) Maybe your second in command, maybe a different person from the command structure—I assume there are going to be people other than you with roles like “general household management” (I am thinking of office managers, if you’re familiar).
I don’t think the pair time accomplishes quite this. Having a specific time set aside for one-on-one meetings specifically as the regular opportunity to bring up issues means issues which might otherwise have stayed at the back of the mind get brought up more. Generic time spent together does not accomplish this. It’s approximately the same reason you want scheduled one-on-one meetings with everyone in the house despite presumably spending a lot of time with the people in the house in other contexts.
Hmmm. It might be good to install as a house norm that everyone has an outside advisor that they commit to checking in with, either once a week or biweekly. Like, someone not directly affiliated with Dragon Army in any way.
That’s only useful if the outside advisor has some level of veto power. I’d suggest something like allowing them to trigger a discussion meeting /outside of Dragon Army Territory/ with the advised, optionally including the Commander and/or other members, and also at the option of the advisor including legal counsel or a medical practitioner.
Not because I expect anyone to need the safeguards involved, but because making those explicitly part of the Expectations makes it harder to coerce somebody into not getting help. Making coercion of the type “You’re fine, no need to waste time and leaving your ingroup to try to explain to some /outsider/ what’s going on, they won’t understand anyway” ring red alarm bell flags is a feature.
upvote
I am open to being an outside advisor / buddy / contact etc to individuals within this and/or with the project as a whole.
Me too!
Can I get contact info from you? I already have Malcolm’s; if there’s an email address you can use to send a message to TK17Studios at gmail dot com, I can then offer that address to anyone without an obvious check-in.
Sent.
Throwing in with Malcolm as interested in being an outside sanity check.
Can I get contact info from you? I already have Malcolm’s; if there’s an email address you can use to send a message to TK17Studios at gmail dot com, I can then offer that address to anyone without an obvious check-in.
Have you ever lived under obedience? This is often considered a prerequisite for holding command of e.g. a monastery.
Would anyone who has lived under obedience write such an astoundingly self-unaware post?
The answer to both questions is no.
No, I haven’t. I’ve participated in a variety of commitment contexts, none of which were at the level of monastic seriousness.
I would guess it’s more about the ability to accurately model what it’s like to be a subordinate (as opposed to being about commitment).
Praise: The focus on actually doing a thing is great.
Criticism: Most of this post was about methods the house will have, why these are OK, etc. Comparatively little was about what the house is going to be used to accomplish outside itself. This seems worth putting much more up-front thought into given how much of the point is to make a house that can actually do a thing. Probably your methods and selection criteria are not very well-calibrated for whatever project will turn out to be best—human coordination is much easier when you’re coordinating about something in particular.
Obviously you will not know everything perfectly in advance no matter how much planning you do—but planning to accomplish a particular thing is very qualitatively different from planning to accomplish things in general.
Praise: A lot of the details on how to live together well (group exercise, food, time explicitly set aside for checking in) seem really good. If step 1 is just “learn to live well together,” that is itself a respectable project, and one most of the Rationalists have failed at. Probably most attempts at this fail; we only observe the old communes that didn’t fall apart.
I like both your praise and your criticism. re: the criticism, one of the reasons I’ve held off a bit is a suspicion that I can’t actually well-model the sorts of things the house will accomplish once fully formed (that it will be stranger/more surprising than I think). I had some thoughts, like running a talk series at prestigious universities, publishing a book or a movie, creating an org to teach rationality to middle- or high-schoolers and then doing it, building a robot car, trying to develop Veritaserum, etc. but they were all over the map.
My best guess is that having a highly specific plan that includes steering/replanning capacity and then totally abandoning it when the wheels hit the road because it turns out to be the wrong thing is way better than having a generic plan.
I’d love to see how you’d design a house specifically for any one of these goals. Robot car is the one that I think would give you the most feedback from your internal models during the planning stage, followed by publishing a book or movie. “Create an org” is a bit recursive, and a talk series is probably either too easy or too vague. Not sure what you mean by develop Veritaserum but it seems to strongly overlap with some of Leverage’s most plausibly successful research.
I claim with moderate confidence that simply walking through how the house as currently planned might go about building a robot car would substantially improve not just your plans for particular object-level capacity, but general capacity. “How will this organization change its mind?” might be a lot harder to cash out usefully than “How will this organization change its mind about valve design for the fuel injector?”.
re: your best guess, that makes sense. It’s possible I should just choose one of those plans above (many of which actually have lots of fairly detailed planning behind them already) and run with it for now.
Eli Tyre strongly agrees with your last paragraph, and is (correctly, and appreciated-ly) pushing for the first large-scale project to be determined sooner rather than later.
Hmm.
Thing that sticks out to me: you mentioned the value of doing something as a house as opposed to as a company. Some of these seem like the sorts of things one does at-a-company-in-particular (and seem like they’d require the amount of time commitment that a job requires). Is there something that distinguishes doing this as a house vs doing this as a particularly intensive company?
Note that those are deliberately not in the charter itself, because I doubt they’re sufficient.
Two things distinguish it—one, starting a company is harder than starting a house, and two, a major part of this is to bind people in a society, and everyone around me already seems to have separate buckets for “my job” and “my life.” I think it’s important to start leveling up people and getting people moving in the “my life” bucket, and that the “my job” bucket already has plenty of forward momentum and pressure.
[I don’t want to be here, but this is important].
To Duncan: I am not going to say you are trying to start a cult group, like some other folks did in this thread. However, I am going to suggest some background readings on cults if you are interested. Cults are a hobby of mine. My favorite cults are Scientology, unofficial Scientology derivatives who kept most parts of the belief system (yes they exist), and the Fellowship of Friends and other Gurdjieff-offshoot cults. Also Carlos Castaneda’s group is a fun one. Those are the fun ones to read about.
To people Duncan is talking to: you are a human being, not a space monkey. The space monkey road is not a good road, I speak from personal painful experience. The space monkey road is going to abstract personal growth issues in a way that will be counterproductive for you in the long run, imo.
Ilya: if you recommend your top 2-5 sources, I’ll commit to reading at least 30,000 words in the next two weeks. (I ask for more than one source in case you propose things I’ve already read.)
Scientology: http://www.xenu.net/ (clambake.org). Lots of interesting links there, including about offshoots.
Castaneda: https://www.amazon.com/Sorcerers-Apprentice-Life-Carlos-Castaneda/dp/1583942068. Also some other stuff online, easy to google.
Live stuff on Robert Burton’s Fellowship of Friends: http://robertearlburton.blogspot.com/. Also some exposes are googleable. Also some stuff on wikileaks. I have personal second hand info on this cult (was never in it, but know people who were). The Fellowship of Friends has their main base (Apollo, in Yuba County) in California and preys on educated, high salary types.
There are a ton of Gurdjieff offshoots in various states of virulence/danger. One thing I learned about the concept “cult” is it’s a fairly fuzzy concept and sort of dissipates around the edges into fairly benign reading groups/clubs and so on. Probably has to do with how charismatic the main person (almost always male) is. So discussions of whether something is “culty” or not are, to me, kind of silly. If the question is raised at all, probably yes a bit culty.
I like reading lots of heterogeneous sources and personal accounts to try to piece together what’s happening in places like that, rather than books.
Thanks! Half of these are brand-new to me; commitment made.
My favorite cult to read about is Rajneeshism. It’s very recent, the head guy was almost supernaturally charismatic by all accounts, and the story is hilarious! From the collection of 93 Rolls-Royces to a bioterror attack by poisoning salad bars in an Oregon town with salmonella (yes).
BTW, Scott of slatestarcodex has also chimed in against the OP’s proposal:
Slatestar: “Also, Duncan’s taking the wrong strategy by denying it’s a cult. His pitch should be “Hey, cults seem pretty good at controlling their members, let’s get together a bunch of people who are interested in using cult techniques to become the best people they can be by their own values, and see if we can make it work.””
And the circle is complete.
I agree with Scott on this. When proposing that we should return to well-explored territory found to be dangerous (which is what I claim cults are), we should at least be honest about the fact that we’re returning to old territory, and perhaps argue that it was in fact not as well-explored as we thought and there might be good things to be found there.
But instead, Duncan appears to be arguing that, according to the Pendulum model, we have moved so far past the “old way of doing things” that we skipped over the optimum and are now in another poor solution. He suggests his proposal is a gentle nudge towards the optimum, but this doesn’t seem to square with the fact that the “cult” model is the “old way of doing things” that we were previously stuck in. So to me it seems more like “swing even harder in the opposite direction!” when the pendulum should actually be slowing down, moving towards the optimum with less momentum than it had previously.
I disagree with Scott that this qualifies as a cult. Outside post that I think sums up the relevant difference.
I’m also opposed to calling it a cult just because a lot of people took one glance at it and leapt to the most uncharitable stereotype possible.
I agree that “cult” is a loaded and derogatory word and probably should be abandoned in favor of more information-carrying terminology. It might be better described as the centralized authority model. I stand by my claim that the centralized authority model is a return to old territory, though, and this meshes well with Scott’s model of the formation of the bi-modal distribution of peoples’ priors about this (marginalized groups have probably been exposed more to the centralized authority model than privileged Westerners).
I don’t know about that. There are a lot of organizations with highly centralized authority which are not cults (by any definition). For example, the military.
I would probably define “cult” as an entity which, when faced with the question “Who are you going to believe, me or your lying eyes?” strongly encourages the answer “You, of course you!” In more abstract terms, a cult depends on controlling the information flow to its members, both through isolation and through inculcating high trust for “internal” claims and low trust for all “external” claims.
Cults are not good at getting members to fulfill their own values. Consider the number of cults that valued sexual purity and ended up with a whole lot of rape and child molestation.
BTW, Scott of slatestarcodex has updated his post with an “on fourth thought” (in addition to his excellent theory on the dynamic motivating disagreement) that states he’s moving away from concern (though not necessarily all the way to “unconcerned”). I’m hoping you would’ve posted this yourself—having sort of implicitly committed to using Scott’s opinion as an advisory authority—if I hadn’t done so myself first. Not just trusting him when he’s on your side, and so forth.
Also, if we are going to keep bringing in questionable outside blogging as source material, there’s this, which I feel fairly treated by and comes from an author with actual relevant life experience.
Also, if we are going to keep bringing in questionable outside blogging as source material, there’s this, which I feel fairly treated by and includes people with actual life experience rather than those talking out of their butts.
EDIT: Scott of slatestarcodex has updated his post with an “on fourth thought” that states he’s moving away from concern (though not necessarily all the way to “unconcerned”).
Note: I’ve also reached out to Scott directly myself.
I think most people can do well by joining the kinds of relationships that are time-tested (marriage, friendship, work, school, gym, army, church...) From how much trouble it took society to get these halfway working and find decent boundaries, you should be skeptical of inventing new ones that will work in your lifetime. Especially if they look suspiciously similar to cults which we already know don’t work.
And I’m not even sure why you need to invent new relationships! You might feel like you have huge problems that require one huge hammer to solve, but that feeling is deceptive. Mitigating the problems one by one, with boring well-known fixes, is easier and works better. If you want to get fit, join a gym. If you want to learn something, go to school. These will give you the right amount of structure and your daily dose of socialization, without regimenting your life like a boot camp, and you’ll be guided by competent people instead of fumbling your way as a crowd of amateurs.
I think there are a fair number of wrong (or at least underjustified/unfounded) claims in the above. e.g. “cults don’t work.”
This is largely not a new invention, and is instead largely a return to structures and values that have been known to work in the past, and have been loosened/undermined in the past few decades.
My opinion of CFAR just fell from “neutral” to “mildly harmful” because they hired someone who’s willing to say the above. On old LW (where Eliezer wrote a sequence on avoiding cults and I was contributing decision theory math) this would’ve been unbelievable. Or maybe I’ve been missing the signs, not being in the Bay Area.
You’re not thinking or arguing clearly, and are instead leaping to conclusions and pulling from stereotypes.
If you lose respect for CFAR over that, it’s the result of your own confusion, and the loss of your endorsement is not one I’d lose sleep over.
One can say “guns are indeed effective” and not be advocating for wanton gun violence. It’s a statement about objective reality—guns do things—not a statement about normative values. Similarly, I can argue with your claim “cults don’t work” (which is clearly, demonstrably false on at least some axes; cults were in fact successful enough to cause large damage to a lot of people’s lives at the very least) without saying “HECK YEAH, GO CULTS.”
I’ll continue to engage, or not, based on whether or not you respond reasonably to the above. Sorry for the impatience, but I’ve written thousands upon thousands of words in this thread by now, and I’m not at all in the mood to let people strawman me at this point (even if they want to try to pull a sneaky status move by claiming seniority-on-the-forum and trying to shame a certain kind of statement without any model behind the shaming).
(I also note that you didn’t bother to respond AT ALL to my claim that you’re making unfounded leaps, nor to my claim that this is in fact a return to previous proven systems rather than an attempt to invent a new one, which makes me think that in addition to smushing together unrelated things in your arguments, you’re not actually here to discuss, i.e. swap statements back and forth on a topic and in fact interact with what the other person is saying, and are instead here to just score points or confirm (rather than falsify) your own models.)
If you took my original comment to mean that cults are harmless, that’s a bit bizarre.
As for previous proven systems, I’m not sure which ones you mean. The closest analogue is religious or socialist communes, which turn bad too often for my taste. The happiest exception is kibbutzim which weren’t nearly as authoritarian as your idea. Then you have the army, which exists today just fine and we know what it’s good for, not sure why we need another one. Then there are boarding schools, sport camps etc. but these are based on learning from professionals which you don’t have.
sigh.
I took your original comment to be saying “cults don’t work.”
Then, when I said “they do, though,” I took your second comment to be pearl-clutching and saying “well, now I think CFAR must be (slightly) evil or stupid for hiring someone who is willing to say out loud that cults work (gasp).”
You cannot possibly have drawn out of my statements above “Duncan thinks cousin_it thinks cults are harmless.”
I’m going to disengage because it’s not easy to have discourse with you (say things clearly, stick to a topic, expose reasoning, actually make progress toward truth or convergence). I don’t understand how your reasoning process works. I’m finding this subthread frustrating and low-value, and thus far the specific points I have been able to tease out of what you’re saying, I generally disagree with (and trust my domain knowledge and expertise more than I trust your skepticism-without-any-concrete-evidence-backing-it-up-from-someone-who’s-already-demonstrated-willingness-to-make-unfounded-leaps).
The Army works just fine, and has goals that aren’t ours. Why not steal much of their model /which works and has been proven to work/?
Especially if the problematic aspects of Army culture can be avoided by seeing the skulls on the ground.
Militaries have a pretty big stick. You can go to prison for insubordination or disobeying orders; in wartime you might well just be shot for that. The Dragon Army… will give you a stern talking-to?
… will banish you from the tribe.
The only person I heard of going to the brig was one who broke into barracks and stole personal property. Falsifying official records or running off to run a side job as a real estate broker was more of a ’30 days restriction, 30 days extra duty, reduction in rate to the next inferior rate, forfeiture of 1⁄2 month’s base pay for 2 months’ thing.
This is why we need downvotes.
Actually I agree. It feels weird to see that one person upvoted my comment without knowing how many would have downvoted it. The same might apply to Duncan’s post, from the comments it seems like it was really polarizing, but the score only shows the 28 upvotes. If I may be allowed another reference to old LW, Eliezer used to advocate that people downvote more, ideally without replying. I think he saw it as a defense against noise and then left when the noise became too much.
You can get a clearer-if-still-imperfect sense from contrasting upvotes on parallel, opposing comments, e.g. it has 28 upvotes and 1029823904812309481320948blargltroll has 10. I highly doubt this would have ever received sufficient mass of downvotes to become invisible.
I’m fairly certain that P(disagrees with blargtroll | disagrees with your proposal) >> P(agrees with blargtroll | disagrees with your proposal), simply because blargtroll’s counterargument is weak and its followups reveal some anger management issues.
For example, I would downvote both your proposal and blargtroll’s counterargument if I could—and by the Typical Mind heuristic so would everyone else :)
That said, I think you’re right in that this would not have received sufficiently many downvotes to become invisible.
First time I’ve heard it referred to as a heuristic. +1 =P
This is a little ambiguous, and it would be more helpful to be concrete.
It’s an appeal to authority and someone shitting on an organization based on one line of a lesswrong comment by one member of that organization, with no request for clarification or depth.
I don’t think there’s a good argument that a Western Church works that much better than a Yoga Ashram, and the setup of the Dragon Army is relatively similar to a Yoga Ashram.
When comparing kids with decent parents, schooled children don’t do much better than unschooled children.
When I was at university learning computer programming I quite often used the not-time-tested StackOverflow over the time-tested method of asking the tutor.
Churches don’t have cohabitation, they’re more like clubs, so the risk is lower. And in an ashram you hopefully get taught by a yoga professional, not just bossed around. I don’t see the value of OP’s proposal compared to either.
I thought homeschooled kids were usually taught by parents? Though I agree that you can learn stuff on your own. The problem is learning a broad range of ideas in a manageable time, not just picking a narrow path through topics that catch your interest, and for that many adults find universities useful. Not to mention you meet many smart people and pick up good memes without realizing it. Somewhere on Tumblr I saw a proposal for improving MIRI’s research that said simply “everyone gets a PhD”; I thought there was a lot of truth to that.
I note that you continue to assume/argue as if there will be zero relevant professional expertise, despite the fact that the professional expertise CITED IN THE MAIN POST is far from the only professional expertise that will be brought to bear during the experiment. In our very first dry run outing, we hired professional instruction to learn a new skill—you are factually incorrect in your assertions.
You are doing your level best to make sure to interpret everything here in the strawest, most negative possible light. “just bossed around.” I’m starting to assume you literally haven’t read the post, because it’s rapidly becoming the only possible explanation for your conclusions.
You’re not setting up a school with yourself as teacher, though. You’re setting up a commune with yourself as boss, with rules above and beyond what schools usually require, and which would lead to student rebellion if they were imposed in a university. So if learning is the point, I’d like to understand how your thing is better than a school.
To which I reply, if you’d actually like to understand, a good place to start would be “read the post.” At least the troll did that much.
Also, you miiiiiiiiiight try not leaning so hard on the typical mind fallacy. West Point is a university that people self-select into with far, far, far stricter rules than this, and they don’t rebel. Ditto many sorts of monasteries and temples and martial arts dojos and retreat centers (including many that are non-religious), ditto all sorts of invasive practices by organizations attempting to turn around lives-gone-astray or turbocharge the already-successful (e.g. regimens put into place by life coaches or intensive business training groups).
You’re confusing “I don’t like this” with “this is objectively bad” (or “I would rebel against this” with “no sane person would fail to rebel against this”) which—to quote you—on Old LW would have been unbelievable.
Once you make even a single good faith attempt to pass my ideological Turing test (my attempt to pass yours is the multithousand word post above), I’ll start taking your criticisms as seriously as I’ve taken everyone else’s.
“Life coaches”, bullshido dojos and religious brainwashing houses aren’t a good group to be in. It seems to me that such places are fine at teaching authority, but not the best way to teach anything else. I wouldn’t go to West Point to learn math or even history, I’d go somewhere that focuses on math or history instead. And even for fitness, dojos lose to compartmentalized workouts like lifting or BJJ.
Maybe my mistake is misunderstanding the rationalist community. I know that they are a slightly weird bunch, but it’d take a lot to convince me that a boot camp environment would suit them. In the Russian Army such folks tended to be miserable, whereas in relaxed civilian jobs they thrived. That’s part of why I’m replying to you, I feel that nerdy types are vulnerable to proposals like yours but ultimately don’t benefit from them. They already have a lot of tension and random windmills going in their minds, putting them in a pressure container makes it worse, compared to doing casual normal stuff.
Your mistake isn’t misunderstanding the rationalist community, it’s strawmanning and stereotyping and typical minding. If you stopped for one second to think, huh, maybe somebody who’s clearly not an idiot and whose models so strongly disagree with mine might see something I don’t, and approached this thing with curiosity instead of blunt assertions about how it’s terrible and you know better, you could’ve, I dunno, asked questions about places where you’re confused, and I would have answered them, and like many, many other places in this thread, there would’ve been a process of mutual updating and convergence as I showed you cool conclusions from thinking I’d already done, and you helped me find holes and flaws and make fixes, and both of us came out of the interaction with a clearer view of reality and a stronger ability to do good in the world.
Even now, like a dozen comments in, you refuse to stop it—putting scare quotes around life coaches and attaching the word bullshit to the word dojos and adding brainwashing to the phrase “religious houses.” You are not here in good faith; you’ve got a negative model you’re in love with and you’re confirmation biasing all over the place. You’re every bit as much a troll as the anonymous person was—you’re just more subtle about it.
Oh, well.
Besides being pure ad hominem, you seem to understand “good faith” as trying to help you. Let me point out that no one has any obligations to help you or to cooperate with you—refusal to do so is not bad faith. Pointing out that your endeavour is misguided and doomed to failure (assuming that’s a point of view honestly held) is not in bad faith either, even if you do not accept the arguments made.
You are perfectly free to not cooperate with people who won’t cooperate with you, but that lack of cooperation on their part is neither malice nor trolling.
You got a lot more defensive over the past few days.
I disagree that the summary is ad hominem—I think it is a concrete description of my highest-probability model explanation of cousin_it.
I don’t interpret good faith as trying to help me. I do interpret it as trying to help us, where I define “us” as “all of the people on LW and in the rationalist community” specifically, and more broadly as “all humans.”
I don’t see cousin_it as doing any kind of truth-seeking or curious investigation, nor do I see them as taking a principled stance against something that is actively dangerous (the way the troll did). Instead, they’re just throwing out straw criticisms without actually bothering to put in the work to engage with the actual topic at hand. It smacks of either careless antagonism or an attempt to score cheap points, whereas many of the people who are openly and unrepentantly opposed to this project still seem, to me, to be acting in good faith.
Buzzword compliance aside, this is precisely what ad hominem is: “a … description of … ”. The subject is your proposal for a commune—not your beliefs about cousin_it.
That sounds to me like pious crap. I don’t see you as different from the 99.9+% of people who are not qualified to judge who is trying to help “all humans” and who is not—and that’s even besides the oft-made observation that the road to hell is never in need of repair.
Let me remind you again—we are discussing your proposal for a commune, not whose intentions are pure.
As I said, you are free to cooperate or not, but focusing on what you see as personal shortcomings of people who disagree with you seems like a road that leads to bad places. Especially given that you put forward yourself as the Dear Leader of this potential commune.
Right. The problem is, only some of us are actually discussing.
In point of fact, most of us are actually discussing, but threeish people have just dropped in to lecture with no even hypothetical willingness to change their minds (or at least none credibly demonstrated, as I claim I’ve credibly demonstrated mine).
EDIT: Also, on reflection, I still think you’re either misusing the term ad hominem or mischaracterizing the critique I’m making of cousin_it. I’m not trying to make claims about them as a whole person (e.g. they’re bad in general or they lack the ability to engage in good faith in general), which is I think what is required for it to be ad hominem—I have to be making some fundamental attribution, and I’m not. I’m saying that the words they’ve typed in this thread are inconsistent with someone acting in good faith, which is a claim about observations and causality, and not about character.
You have unreasonable expectations for an internet discussion :-P
I thought Less Wrong was special. I actually did.
It is. Imagine what would happen if you were to put your proposal onto, say, Reddit. However LW, thankfully, is not a hive mind.
I assume you have noted this, because you’re perceptive, but just to say it here—I have repeatedly expressed credible gratitude for the presence of countervailing models and criticisms and so forth, and done at least some significant updating in plain sight. I don’t think it would be fair for people to round me off to “was looking for a hive mind.”
The point here is merely to what degree LW is special and what you can expect from it. I neither said nor implied that you went looking for a hive mind.
Yeah, I want to similarly underscore/perhaps redundantly state that you have demonstrated extremely high and consistent credibility when it comes to productively engaging in discourse. With the comment above, I was underscoring a thing that plausibly could’ve just gone unstated.
I agree I got a lot more defensive over the past 36 hours, but you’ll note it’s confined almost entirely to two specific cases where I feel people are approaching with unjustified confidence in extremely uncharitable models, after all of the other discussion that’s gone on (which I feel should’ve earned me some credibility).
From your point of view, maybe—but it’s not the only one.
You seem to be welcoming comments about which parts of your plan to slightly bend, adjust, and repaint, but you are visibly hostile to the idea that your proposal is flawed at its core and cannot be saved regardless of tinkering with its details.
Yes—that’s because the proposal is not flawed at its core, and I’m not going to pretend that it is to satisfy pearl-clutchers and lecturers. (More accurately: I have greater than 95% confidence that this experiment, conditioned on it meeting existing criteria for launch, does not cause great harm to people on the scale of six months.)
I note that I am willing to engage with my real, extant uncertainty with people who don’t approach from a holier-than-thou know-it-all condescending lecturing position. For instance, it’s not even clear that the house will actually happen, because it’s not clear that there will be enough people who think that it’s a good idea. I’m not trying to convince any of the potential members—instead, I’m simply revealing, revealing, revealing the models, shining as much light on them as possible, so people can neutrally evaluate, and I still have ~33% credence on “there won’t be enough justified faith to do it.”
If someone were to say “Hmmm. I’m reasonably confident that this proposal is flawed at its core and can’t work; here are my objections and here are my questions,” I’d engage with them (and this is a credible claim if you look back through this thread). What I won’t engage with is people who don’t even know me who are trying to pull status moves to put themselves above me (and therefore in a position to judge) from the get-go.
As another way to state my point, I’m credibly offering good faith and charity to the vast majority of critics (all but the bottom 3%). But the people who are coming in with deontologically hostile models are not offering me any good faith and charity in return. And you’re right that no one owes me that, but similarly I don’t owe them any response other than “yeah, screw you, too.”
And how do you know that?
Or, let’s put it this way: which evidence short of actually attempting to implement this would persuade you that the proposal is flawed?
So, how much do you care about status? Why is it a big deal?
True. But you are offering them a response. This response illustrates how you react to what you believe is unjustified criticism—and it is not “I disagree. Tap.”
The confidence regarding it not being flawed at its core comes from related past experience; confidence in the individuals involved; direct evidence of the positive value of norms stolen from Dreamship and Event Horizon; faith in the safety valves of Circling, pair debugging, internal and external check-ins, and commitment to iteration; and the results of having run a trial version that went quite well.
There was evidence I could have gathered from the experimental weekend that would have persuaded me the proposal was flawed, and there were similarly potentially unknown arguments that people here on LW might have offered up that would have been persuasive, too, but at this point, I can’t outline concrete predictable evidence that would cause me to not run this (not actually all that ambitious) experiment. It’s like the pants ending up in Washington DC—there probably exists evidence that would convince me, but I can’t reasonably guess what it might be.
In response to both the status question and the owed-response question, I do believe that people need to adopt a policy of loudly objecting to moves they want to be considered outside the Overton window, especially if those people have some social capital to spend (because they’re doing it not only for themselves but also on behalf of the disenfranchised who can’t afford to push back). In other words, in part, I’m fighting the two people I think are Doing It Wrong because I want to be publicly seen fighting on behalf of not that. I think that it overall increases rather than decreases my credibility on axes that I think are relevant.
You are either grandstanding or misusing terms. People’s objections to your proposal (including both form and content) are firmly within the Overton Window and are nowhere near its boundaries. I have trouble believing that you actually want as tiny an Overton Window as you imply.
If I may make a suggestion? Stop digging. The narrower you make the range of acceptable thought/speech, the less adequate you look. The more you attack and denigrate people who fundamentally disagree with you, the less credibility you have as a leader.
Note again that we are on Less Wrong and within the rationalist community, both of which are very much built around norms of reasoning and discourse; I’m not suggesting a tiny Overton window for the world at large or even one that’s this constricted on all axes.
But yes—I think both Less Wrong and the rationalist community would be far, far closer to the ideal versions of themselves if they doubled or tripled their callouts-of and refusal-to-engage-with sloppy and biased and inappropriate discourse. Overton window being “things a politician can say on TV”—I want “styles of discourse that a high-status rationality community member can publicly endorse” to not include the stuff cousin_it and handoflixue were doing. My concerns are almost entirely about form, because I think correct form leads to improved content. I could take any of the objections that cousin_it or handoflixue or 128bargl had and recast them into (e.g.) “the sort of sentences Julia Galef or Rob Bensinger would say,” and they’d be worth fully engaging with, but in their current form, I claim there’s more long-term civilizational value to rejecting them.
I’m entirely okay with losing credibility with people who don’t value the above. Those people shouldn’t hold me in high esteem—we have at least partially opposing goalsets, and will at least occasionally be actual antagonists relative to one another; I’m actually taking some mild encouragement from how violently people I fundamentally disagree with are disagreeing with this project, because it’s weak circumstantial evidence that I’m moving in the correct direction. (i.e. how adequate I look to you is not necessarily an appropriate measure; Sanders and Clinton both frequently made moves that made them look less adequate to some people.)
And I again disagree with your characterization that I’m attacking and denigrating people who fundamentally disagree with me, and I’m surprised that you’re rounding things off that carelessly. If you want to see personal attacks and denigration, look at (e.g.) the blog post that cousin_it cited to Kaj. Nothing I’ve done here comes anywhere close to that—I’m attacking and denigrating specific forms of argument, and specific modes of reasoning. For example, if you look at the time where handoflixue asked a clear and cogent question without any unfounded critical leaps, I gave a multiparagraph answer with lots of concrete detail. I grumbled at them a bit for their other interactions with me, but I didn’t treat their point or question any differently because they’d bugged me elsewhere. I have no problem with specific people; it’s just that at some point my prior on the VOI of engaging with them drops too low. It’s Bayes—one of my fundamental moral principles is that you should trust in revealed preferences, and barring credible reasons to believe someone’s made a major personality shift, you should evaluate them as the sum of their actions.
(Also, I think it’s not grandstanding if I’m literally practicing what I’m preaching in real time? Like, I’m doing exactly what I claim a person ought to do, not just moralizing with no action behind it.)
I don’t think so. I think they would be dead or sufficiently engrossed in navel-gazing to be functionally dead.
So, grandstanding.
It’s perfectly reasonable to hold one’s enemies in high esteem and in fact one of the traditional measures of success is the caliber of enemies you’ve acquired along the way. For non-fatal competitions you actually want the best, highest-esteem enemies you could find—they will push you to become better (as opposed to nuisance pests who will only encourage you to stay irritated and smug).
That’s the classic “reverse stupidity” argument.
As Alicorn pointed out, the situation is not symmetric. Writing a Tumblr rant is a very different thing from asking multiple people to surrender not insignificant amounts of autonomy to you, as well as become emotionally and financially entangled in a project of yours.
No, you don’t. You actually tend to oscillate between ad hominem attacks and replying to specific criticisms.
Or maybe you don’t think of “you think wrong thoughts expressed in the wrong way and you should be ashamed of yourself” as an attack? Let me assure you that it is.
If that were so, you would stop engaging with them. But you don’t.
ETA:
That’s not how it works. If you loudly proclaim that, say, the use of mis-gendered pronouns is a major human rights violation akin to torture (or that letting trans people use the bathrooms they want is the end of Western civilization), you are grandstanding even if you literally throw a temper tantrum in real life.
I’m now feeling deliberately misunderstood, and if you’re doing that on purpose, I ask you to stop.
We disagree about Overton windows; that’s good, and cruxy.
According to the definition of grandstanding that Google throws up when you type in the word, you’re misusing it (particularly, the word requires you to make claims about my internal state and purpose, i.e. what I’m doing X for, and your best source of data there is my self-report). It’s not grandstanding, and I note it’s far easier for you to name-call than to actually make a specific critique stick.
It’s perfectly reasonable to hold some of your enemies in high esteem—for instance, I note we’re disagreeing pretty heavily here, and I have a great deal of respect for you. But it’s unfounded to jump from some to all. Many of the people opposed to this idea are not high-caliber thinkers and reasoners, whatever other value they have as human beings.
I was extremely careful to say what I actually meant, and then you were extremely careful to strawman me by quoting only part of my words, as if I didn’t say “weak circumstantial” right in the same sentence.
Operationalize your claims that I’m making ad hominem attacks, and I’ll address them one by one. I predict you’ll definitely be able to find 1-3 examples of me sticking a foot across the line, and that they’ll be outweighed by a factor of at least five by me doing the thing I claimed I was doing. I predict you will find no examples that are anywhere near as gross as the ones put forth by cousin_it and handoflixue. I’d be willing to monetize this as a bet.
I’ve stopped engaging with them for their own sake. I have previously explained to you that I think it’s important to be seen openly defending good norms, and thus continue to engage with them for myself and everyone else. I think it was pretty lame of you to just … pretend I hadn’t said that, and again strawman by criticizing me for the thing I’m not really doing.
I am losing respect for you in this subthread, but right now it’s something like “I had you at 957 points, and I’m worried you’re going to drop to 949.” Hopefully this is just some combination of a little bit of triggering and the fact that both of us care about getting this right, and not that you endorse the tack you’re taking overall any more than I’d endorse the worst 10% of my own reactions on this post.
My working definition of grandstanding is basically “declaring that one’s words or actions have outstanding significance or impact”. Case in point: you being concerned with “long-term civilizational value”. I strongly suspect that your cluefulness about long-term civilizational values is… limited.
It doesn’t help you. Weak circumstantial evidence is still evidence and under reverse stupidity you just don’t have any.
I have no interest in fisking your comments. I offered you an outside view—if you think it’s wrong, there is no reason for me to try to convince you.
Pick one, will ya? X-)
Maybe, but when you say stuff like “I deny your right to judge and interrogate me” you sound like an idiot. The fact that you were capable of typing that sentence and pressing “Send” is not a good sign.
I appreciate your concern, but I think I’ll be fine. Really, I will :-P
I’m glad, because you just lost a lot more. I do, indeed, think your outside view is deeply flawed, and I’ve just lost an illusion about how you in particular are likely to go about engaging in discourse. As an example, you just pulled a fifth-grader-bully trick in the quote that was purposefully thickheaded in ignoring the whole point of that paragraph.
I didn’t think you would troll/deliberately mischaracterize, endorsedly, when not triggered-in-the-moment. That was firmly outside of my model of you. Now I know something new about you, and it will be useful to me in the future.
A funny thing about you: the more you talk, the worse you look. You started by presenting a very reasonable image—you listened and you expressed willingness to take into account people’s concerns. A bit more than a week passed and you’re already screaming at people IN ALL CAPS, calling them “a jerk” and dropping dark hints about knowledge that “will be useful to [you] in the future”. How is your stress tolerance? You are not performing well when people disagree with you.
You also try to be manipulative—not very successfully, mind you—by dispensing praise and criticism in order to gain the results you want. Since we are being all frank’n’all, my opinion of your adequacy as a leader went down a lot during this week—mostly because you wouldn’t shut up. I sincerely reiterate my advice to stop digging.
I don’t mind this whole “the more you talk, the worse you look” thing, because a) it’s symmetrical, and b) I’m entirely comfortable being seen for having exactly the preferences and principles I do have.
I’ve responded sharply, at this point, to exactly four people: a universally acknowledged troll, two people who started out clearly strawmanning me and being heavily anchored on negative opinions without justification, and now you, as you abandon standards in pursuit of scoring points.
I have not willfully misrepresented people, or immediately leapt to unfounded conclusions about their deep character, or engaged in cheap-trick point-scoring tactics against people who didn’t shoot first (with one exception that Alicorn called me out on, and I edited), or any of the other behaviors that I don’t reflectively endorse. I have certainly pulled none of the subpar junk that you’ve pulled in this subthread, and I’m proud to have opposed you as you’ve done it.
As I’ve noted elsewhere—I don’t much care about irrelevant opinions, and as people have demonstrated themselves to be below the bar of what I expect from a LWer and a rationalist, I correspondingly cease to mind what their overall judgment of me is. I generally try to judge how likely a person’s opinion is to closely correlate with truth and useful perspective, and while I hold my disregard with skepticism on the meta level, so as to not unfairly write people off, ultimately evidence is evidence. There are some people who simply demonstrate, fairly conclusively, that they aren’t going to play fair, think straight, update on evidence, etc., and are literally not worth listening to, in a VOI sense (though they may still be worth opposing in public).
I state again that something like 97% of the participants in this thread do seem like their opinions are likely to closely correlate with truth and provide useful perspective, and I’m grateful for the hours that total strangers have poured into helping me dodge mistakes. This project is something like 50% less likely to fail and 30% more likely to be really successful (relative to where it was a week ago) thanks to those contributions.
And sure—probably most of the neutral parties are shaking their heads somewhat—thinking things like “Duncan’s being too aggressive here” or “Duncan’s fighting fights not worth fighting” or “I wish he hadn’t posted X.” But that’s coin I’m spending deliberately, in open defense of things I think are worth defending. There’s no point in social capital if all you do is hoard it—at some point, people who’ve accrued it ought to take risks holding lines that others can’t afford to defend. If I lose 5% of the respect that I’ve gained, but also meaningfully embolden others who were too hesitant to defend themselves against bullies by giving them the sense they’re not the only ones bothered by poor discourse, that’s a purchase I endorse. Freedom from trolls isn’t free—turns out even Lumifer will occasionally use Trump-style tactics, if they dislike you enough.
LOL. You smell SJW-ish. A white knight selflessly spending his social capital to defend the weak against the bullies. Against “Trump-style tactics” even! And, of course, you will not be denied for your cause is just.
You are clearly incapable of shutting up so this will be amusing.
So tell me more about things you think are worth defending—especially from the likes of me. Are we still talking about the mere forms of expression of which you disapprove, or is there some deeper ideology involved? Do you see me as lacking honor, or empathy, or proper morals, or the desire to remake the world, or something else?
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve at least temporarily ceased replying to Lumifer and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
Oh, good! So I can point out things to you and you won’t be able to talk back? :-D
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve at least temporarily ceased replying to Lumifer and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
Once more, please :-)
It’s been a while since the last time I was officially added to the list of the Enemies of the People and… ritually cast out, I guess? This time there is even a list of high crimes I’m guilty of—“reasons surrounding norms”. Woe is me!
I was hoping you’d show how your community will be better than current authoritarian communities, which I deeply dislike. Instead you insist that current authoritarian communities are fine and we need more of them. Hopefully you see why that’s unlikely to change my mind, imperfect as it is. Heck, my dislike for cults was clear from the first comment, which makes your jumping onto it even more weird. A master of soft skills would’ve chosen literally anything else as an opening. Even now in the middlegame you can still turn it around, though I can understand if you’re frustrated and don’t want to. My own goal in this conversation is mostly achieved, I can go on or not, up to you.
Please. Actually. Read. The. Available. Information. Above.
I’ve read your post, it’s nothing but red flags. You’re literally proposing that DA members greet each other with a salute and trust you more than themselves. The few upsides you mention (participants are smart, time is limited, etc) come across as excuses why you should get power now. Check out nostalgebraist’s tumblr for more folks who got the same vibe. Your comments make things worse, you clearly like authoritarian communities in your heart rather than consider them a necessary evil.
I use the phrase unschooling and not homeschooling, but even if a child gets taught by their parents, that still suggests that the average teacher is not skilled enough to provide value to their students that allows them to outperform students taught by lay-people.
The same arguments could be made for why the Dragon Army is a good idea.
Let’s take a random skill from the proposed curriculum, like welding. You could try externally motivated self-study at OP’s group house, or you could go to a community college and ask how long they’ll take to make you a certified welder. It seems to me that even without the authoritarian LARPing, the first option is a weird hybrid, like a flying submarine. It’s more costly than either full self-study (if you can do it) or fully spoon-fed learning at a traditional place for a set term.
The OP’s proposal is to dial motivation to 11 and hope that it leads to effective learning. Even if that doesn’t backfire, at most it lets you see the next bottleneck, and you don’t know how many there are. Traditional schools have solved all of them, and can teach people predictably without requiring much motivation (except for showing up). For well understood skills, I think they are better than rationalist groups in every way.
Traditional schools know how to teach welding, but when it comes to teaching introspection, or teaching the skills of teaching and tutoring themselves, it’s less clear.
Teachers who have a master’s degree aren’t better than their colleagues. As far as we know, those two years spent at university learning to teach better are worthless for teaching skill.
I would also doubt that it’s easier to learn programming via a community college course than by living together with people who can program well and who are willing to tutor you a bit.
I’m sorry to say, but teaching introspection, rationality, or other skills we don’t have reliable tests for is a scam. The fact that more than half of the OP’s curriculum consists of such skills is a big red flag. And learning programming doesn’t require any of the measures described in the OP; I know it, you know it.
Yes, but you make the argument that traditional institutions of learning are superior. For programming, I don’t think that’s the case.
Do you believe that liberal arts colleges that claim to teach critical thinking are also scams? From my perspective, they are a lot more scammy, because they actually have the money and time to research whether their claims are true.
I think a person who tries a new project where they have a goal that they can’t measure well is a lot less scammy than big institutions like a liberal arts college.
100% agree that formal education for programming sucks today.
Yeah, pretty much. They take your money and then you can get a job doing something else if you’re good at that thing.
I think we’re mostly in agreement?
I don’t think the goal of the OP’s proposal is to learn any particular skill. To me it mostly looks like trying to build a tightly-knit group so that each member can use the others as external motivators and close friends to discuss life plans and ideas in detail not really possible between modern colleagues and friends. I.e. the goal is not learning a skill, it’s building a mutual support group that actually works.
I couldn’t comment on the linked Medium article, so I’d like to say that, for many students, particularly middle and high school students, it is simply not true that they are in class voluntarily. I was routinely threatened with dire consequences if I didn’t go to school, and attempts to remain at home and refuse to go were met with physical force—I was literally pulled out of my bed and taken to the car or bus. School is about as voluntary as the military draft.
You missed the entire point.
Edit: my original response was unnecessarily brusque and rude, and I apologize. I can elaborate further, but in the meantime, you might squint at the doc again, because it was a particular message about agency aimed at people in exactly your kind of situation.
The end result of my experiment in school refusal was being put on psychiatric medication. (Which actually did help, if you consider changing my preferences to something more socially acceptable to be helping.)
In hindsight, my best strategy might have been seeking a diagnosis of delayed sleep phase syndrome and requesting accommodations under the Americans with Disabilities Act. (The trigger for all this was that the school changed its starting time from 8:10 AM to 7:40 AM and I was not willing to deal with getting up any earlier.)
I was in a special education school from third to seventh grade, and I was absolutely forced to be physically present at that school as much as any prison inmate was forced to be physically present in prison. They couldn’t force me to do schoolwork, and there were times I accepted a loss of privileges as the consequence for not participating, but any attempt to leave would be met by physical force. (The school even had a “time-out room” in which a student that became violent—a not uncommon occurrence—could be locked inside until he or she had calmed down.)
Participation was indeed a choice. Being physically present was not.
Going to class was not voluntary for me either. The consequences of not going to class included: parents screaming at me, parents kicking my ass (tiger parent style; we didn’t do “grounding” in my household), truancies going onto my “permanent record”, a full day of detention on a Saturday, etc. Things that people call “voluntary” don’t usually result in physical and emotional damage if you don’t do them.
Nonetheless, I skipped class a few times in middle school, and I suffered the consequences as a result. Were the consequences worth the glorious days of freedom that I spent skateboarding near the beach, sitting in a local comic book store marathoning manga, etc.? Maybe; maybe not.
But whether I go to class is a choice that I alone have the freedom to make. My parents and the school can set the consequences, and they can apply a lot of pressure to make particular options more or less appealing, but they can never take away my ability to choose.
So far! Security mindset.
On the positive side, I think an experiment in a more centrally managed model makes sense, and group activity that has become integrated into routine is an incredibly good commitment device for getting the activity done- the kind of social technology used in workplaces everywhere that people struggle to apply to their other projects and self-improvement efforts. Collaborative self-improvement is good; it was a big part of what I was interested in for the Accelerator Project before that became defunct.
On the skulls side, though, I think the big risk factor that comes to mind for me for any authoritarian project wasn’t addressed directly. You’ve done a lot of review of failed projects and successful projects, but I don’t get the impression you’ve done much of a review of abusive projects. The big common element I’ve seen in abusive projects is that unreasonable demands were made that any sensible person should have ‘defected’ on: they were asked things, or placed under demands, which, from the outside and in retrospect, were in no way worth meeting for the sake of staying in the group. And people didn’t defect. They stayed in the abusive situation.
A lot of abusive relationships involve people trading off their work performance and prospects, and their outside relationship prospects, in order to live up to commitments made within those relationships, when they should have walked. They concede arguments when they can’t find a reason that will be accepted because the other person rejects everything they say, rather than deciding to defect on the personhood norm of use of reasons. I see people who have been in abusive relationships in the past anxiously worrying about how they will find a way to justify themselves in circumstances where I would have been willing to bite the bullet and say “No, I’m afraid not; I have reasons but I can’t really talk about them,” because the option of simply putting their foot down without reasons (a costly last resort, but an option) is mentally unavailable to them.
What I draw from the case studies of abusive situations I’ve encountered is that humans have false negatives as well as false positives about ‘defection’; that is, people maintain commitments when they should have defected as well as defecting when they should have maintained commitments. Some of us are more prone to the former, and others are more prone to the latter. The people prone to the former are often impressively bad at boundaries, at knowing when to say no, at making a continually updated cost/benefit analysis of their continued presence in an environment, at protecting themselves. Making self-protection a mantra indicates that you’ve kind of seen a part of it, but the overall model being “humans defect on commitments too much” rather than “humans are lousy at knowing when to commit and when not to” seems like it will often miss considering what various ideas will do to the people prone to false negatives.
The rationalist community as a whole probably is mostly people with relatively few false negatives and mostly false positives. Most of us know when to walk and are independent enough to be keeping an eye on the door when things get worrying, and have no trouble saying “you seem to be under the mistaken impression I need to give you a reason” if people try to reject our reasons. So I can understand failures the other way not being the most salient thing. But the rationalist community as a whole is mostly people who won’t be part of this project.
When you select out the minority who are interested in this project, I think you will get a considerably higher rate of people who fail in the direction of backing down if they can’t find a reason that (they think) others will accept, in the direction of not having good boundaries, and more generally in the direction of not ‘defecting’ enough to protect themselves. And I’ve met enough of them in rationalist-adjacent spaces that I know they’re nearby, they’re smart, they’re helpful, some are reliable, and they’re kind of vulnerable.
I think as leader you need to do more than say “protect yourself”. I think you need to expect that some people you are leading will /not/ say no when they should, and that you won’t successfully filter all of them out before starting, any more than you’ll filter out all the people who will fail in any other way. And you need to take responsibility for protecting them, rather than delegating it exclusively to them to handle. To be a bit rough, “protect yourself” seems like trying to avoid a part of the leadership role that isn’t actually optional: if you fail in the wrong way you will hurt people, and you as leader are responsible for not failing in that way, and 95% isn’t good enough. The drill instructor persona, with its unidirectional emphasis on committing more, does not come off as the sort of person who would do that, and I think that is part of why people who don’t know you personally find it kinda alarming in this context.
(The military, of course, from which the stereotype originates, deals with this by simply not giving two shits about causing psychological harm, and is fine either severely hurting people to turn them into what it needs or severely hurting them before spitting them out if they are people who are harmed by what it does.)
On the somewhat more object level, the exit plan discussed seems wildly inadequate, and very likely to be a strong barrier against anyone who isn’t one of our exceptional libertines leaving when they should. This isn’t a normal house share, and it is significantly more important than a regular house share that people are not prevented from leaving by financial constraints or inability to find a replacement who’s interested. The harsh terms typical of an SF house share are not suitable, I think.
The finding a replacement person part seems especially impractical, given most people trend towards an average of their friends and so if their friends on one side are DA people, and they’re unsuited to DA, their other friends are probably even more unsuited to DA on average. I would strongly suggest taking only financial recompense on someone leaving for up to a limited number of months of rent if a replacement is not secured, and either permitting that recompense to be paid back at a later date after immediate departure, or requiring it as an upfront deposit, to guarantee safety of exit.
If there are financial costs involved with ensuring exit is readily available, there are enough people who think that this is valuable that it should be possible to secure capital for use in that scenario.
Strong approval of all of this. The short answer is, I’ve spent tens of hours working more closely with the people who will actually be involved, looking at all of the issues you raise here. We’re all aware of things like the potential for emotional abuse and financial entrapment, and are putting possible solutions into place, and I simply didn’t feel the need to lengthen the post by another third to include stuff that’s only half-in-progress and also largely too detailed/irrelevant to outsiders.
(As a single bite-sized example: the “protect yourself” mantra is there to lay the baseline, but thus far we’re also including a) explicit “non-conformity” training in bowing out of activities, coupled with strong norms of socially supporting people who “rule #1” themselves out, and clear ways to resolve anxiety or embarrassment and save face, b) weekly open-ended retrospectives that include room for anonymous feedback as well as public, c) two one-on-ones per week with me in which the number one focus is “how are you, can you be supported in any way,” d) outside check-ins with someone completely unrelated to the house, to provide a fresh perspective and safe outlet, and e) regular Circling and pair debugging so that everyone knows “where everyone is” and has a cheap Schelling point for “I need help with X.”)
This is tangentially related at best, but if you have some high quality non-conformity training I would love to borrow it for my local purposes. I’ve got some, but still feel like it’s the largest weakness in the rationality training I’ve been doing.
I would be much more inclined to believe you if you would actually discuss those solutions, instead of simply insisting we should “just trust you”.
How can you read the parenthetical above and dismiss it as “not discussion” and still claim to be anything other than deontologically hostile?
Because basically every cult has a 30 second boilerplate that looks exactly like that?
When I say “discuss safety”, I’m looking for a standard of discussion that is above that provided by actual, known-dangerous cults. Cults routinely use exactly the “check-ins” you’re describing, as a way to emotionally manipulate members. And the “group” check-ins turn into peer pressure. So the only actual safety valve ANYWHERE in there is (D).
You’re proposing starting something that looks like a cult. I’m asking you for evidence that you are not, in fact, a cult leader. Thus far, almost all the evidence you’ve provided has been perfectly in line with “you are a cult leader”.
If you feel this is an unfair standard of discussion, then this is probably not the correct community for you.
Also, this is very important: You’re asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you’ve refused to address it.
I’m not interested in entering into a discussion where the standard is “Duncan must overcome an assumption that he’s a cult leader, and bears all the burden of proof.” That’s deeply fucked up, and inappropriate given that I willingly created a multithousand word explanation for transparency and critique, and have positively engaged with all but the bottom 3% of commentary (of which I claim you are firmly a part).
I think you’re flat-out wrong in claiming that “almost all evidence you’ve provided has been perfectly in line with ‘you are a cult leader.’” The whole original post provided all kinds of models and caveats that distinguish it from the (correctly feared and fought-against) standard cult model. You are engaged in confirmation bias and motivated cognition and stereotyping and strawmanning, and you are the one who is failing to rise to the standard of discussion of this community, and I will not back off from saying it however much people might glare at me for it.
While I agree that a lot of the criticism towards you has been hostile or at least pretty uncharitable, I would only point out that I suspect the default tendency most people have is to automatically reject anything that shows even the most minor outward signs of cultishness, and that these heavy prior beliefs will be difficult to overcome. So, it seems more likely that the standard is “outward signs of cultishness indicate a cult, and cults are really bad” rather than “Duncan is a cult leader.” (This is sort of similar to the criticisms of the rationality community in general).
I think there are a lot of reasons why people have such heavy priors here, and that they aren’t completely unjustified. I myself have them, because I feel that in most cases where I have observed outward signs of cultishness, it turned out these signals were correct in indicating an unhealthy or dangerous situation. I don’t think it’s necessary to go into detail about them because it would take a huge amount of space and we could potentially get into an endless debate about whether these details bear any similarity to the set-up you are proposing.
So it generally seems that your responses to the people who have these very heavy priors against what you are doing are along the lines of “You can’t just come in here with your heavy priors and expect that they alone constitute valid evidence that my proposal is a bad idea”, and in that regard your rebuttal is valid. However, I do personally feel that, when someone does show up in an argument with very confident prior belief in something, the charitable principle is to assume at least initially that they have a possibly valid chain of evidence and reasoning that led them to that belief.
It could be that there is some social collective knowledge (like a history of shared experiences and reasoning) that led up to this belief, and therefore it is generally expected that we shouldn’t have to back-track through that reasoning chain (therefore allowing us to make confident statements in arguments without producing the evidence). I think that “cults” are a fairly good example of this kind of knowledge—things people almost universally consider bad, except for cult members themselves, so much so that saying otherwise could be considered taboo.
And this is definitely not to claim that every taboo is a justified taboo. It’s also not to say that you haven’t argued well or presented your arguments well. I’m only arguing that it’s going to be an uphill battle against the naysayers, and that to convince them they are wrong would probably require back-tracking through their chain of reasoning that led to their prior belief. In addition, if you find yourself becoming frustrated with them, just keep the above in mind.
For essentially the above reasons, my model predicts that most of the people who decide to participate in this endeavor will be those who trust you and know you very well, and possibly people who know and trust people who know and trust you very well. Secondly, my model also predicts that most of the participants will have done something similar to this already (the military, bootcamps, martial arts dojos, etc.) and successfully made it through them without burning out or getting distressed about the situation. Thus it predicts that people who don’t know you very well or who have never done anything similar to this before are unlikely to participate and are also unlikely to be swayed by the arguments given in favor of it. And even more unfortunately, due to the predicted composition of the participants, we may not be able to learn much about how successful the project will be for people who wouldn’t normally be inclined to participate, and so even if the outcome on the first run is successful, it will still be unlikely to sway those people.
I don’t place much weight on this model right now and I currently expect something like a 30% chance I will need to update it drastically. For example, you might already be receiving a ton of support from people who have never tried this and who don’t know you very well, and that would force me to update right away.
Also, even though I don’t know you personally, I generally feel positively towards the rationality community and feel safe in the knowledge that this whole thing is happening within it, because it means that this project is not too disconnected from the wider community and that you have sufficient disincentives against actually becoming a cult leader.
In short: Don’t let the negativity you are facing become too much of a burden, just keep in mind that it’s possible that many of the most negative critics (besides obvious trolls) are not acting in bad faith, and that it could require more work than is feasible to engage with all of it sufficiently.
I like everything you’ve said here, including the gentle pointers of places where I myself have been uncharitable or naive.
Also, this is very important: You’re asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you’ve refused to address it.
I would be vastly reassured if you could stop dodging that one single point. I think it is a very valid point, no matter how unfair the rest of my approach may or may not be.
This post puts me maybe 50% of the way from my previous position to thinking this is a good idea.
My largest qualm about this is well-represented by a pattern you seem to show, which starts with saying “Taking care of yourself always comes first, respect yourself”, then getting people to actually act on that in simple, low-risk low-involvement contexts, and assuming that means they’ll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully implying that they’ll use it when push comes to shove. Think about how people act when actual conflicts with large fight/flight/freeze responses interact with self-care norms. I suspect some typical-mind, as my model of you is better at that than most people. I think it depends on what “running on spite” cashes out to. This is kind of a known skull, but I think the proposed solution of check-ins is probably insufficient.
My other big concern is what comments like your reply to Peter here imply about your models and implicit relationship to the project. In this comment, you say you’ll revise something, but I pretty strongly anticipate you still wanting people to do the thing the original wording implied. This seems to defuse criticism in dangerous ways, by giving other people the impression that you’re updating not just the charter, but your aesthetics. Frankly, you don’t seem at all likely to revise your aesthetics. And those, ultimately, determine the true rules.
To summarize the nature of my issues here in a few words: aesthetic intuitions have huge amounts of inertia and can’t be treated like normal policy positions, and people’s self-care abilities (and stress-noticing abilities) cannot be trusted in high-stress environments, even under light to moderate testing.
-Olivia
I’m unlikely to revise the aesthetics, but a) the particular operationalization/expression of those aesthetics, and b) the boundary/balance between both the aesthetics and other people’s agency are fully open to debate, iteration, and consensus.
The whole point is to test out the aesthetic as it exists, to see whether it produces a better life for people, so it’s important not to compromise it until some actual testing has taken place. But imagine e.g. a constructed social norm is approved of, proves to be problematic twice, and has one week left before its originally established “re-evaluate” point—I posit you get much better data out of seeing what happens if you keep the norm firmly in place, see the fallout for a week, watch people grumble and adjust, and then re-evaluate on schedule, than if you just constantly say “NOPE, DIDN’T WORK, SCREW THAT.”
I think there’s a strong instinct to buck norms and update in the moment, and that this is a pendulum swing thing—it’s good that we do this a lot more than we did two decades ago, but it’s bad that we do it as much as we do. There’s value in learning to live with rules that don’t change, or rules that are slightly stupid, and by setting rules firmly in place for e.g. three weeks at a time, I think you capture some of that value, at a low price in terms of loss of the flexibility thing.
Does that seem coherent/a valid response to your qualm?
Another way to say this is that I think the bar for “discard this norm” should be raised one notch higher from (straw description) “it bothered one of us once” to “it bothered several of us several times.” If you keep it past the former, I think you see interesting effects in how people shape themselves around one another, and I think there’s some valuable effect from transferring some sovereignty back from the individual to the social fabric (i.e. everybody’s not just quittable at all times).
Evaluating whether to change a thing at the moment when it is maximally annoying (as would be the case in ad-hoc votes) will have different results from evaluating it at a predetermined time.
I’d suggest evaluating the policy of ‘demand that an approved norm stay in place until the scheduled vote’ at the first scheduled vote following each scheduled vote in which a norm was dropped that people had wanted to drop mid-cycle but couldn’t because of the policy.
Your suggestion makes sense for an experiment, but misses the whole point of this experiment. This, to me, seems like exactly the unpleasant valley dynamic. “We tried holding ourselves to a standard of ‘we finish the experiments that we start,’ but we got a couple of experiments in and we didn’t like it. Let’s stop.”
“Last fortnight, we canceled [Idea which appeared to be horrible seconds after implementing it], which we continued for an entire fortnight because of our policy. Today we look at all available evidence and must decide if the meta-experiment generates benefits greater than the costs.”
If you have no norm for evaluating that rule explicitly, it doesn’t mean that you won’t evaluate it. Maybe evaluating it every time it applies is excessive, but pretending that you won’t quickly learn to put exit clauses in experiments that are likely to need them ‘notwithstanding any other provision’ is failing to accurately predict.
I think you miss the point that Duncan wants to train the ability to operate outside one’s comfort zone by following through on goals that are set. A norm being very annoying wouldn’t be a reason to drop it before the scheduled vote. The norm would have to actually create substantial harm.
I read that “this is causing substantial harm” would be insufficient to cancel a norm, but expect that “this is creating a physical hazard” would be enough to reject the norm mid-cycle. The problem is that every edge has edge cases, and if there’s a false negative in a midterm evaluation of danger...
Maybe I’m concluding that the paramilitary aesthetic will be more /thing/ than others are. In my observation, authoritarian paramilitary-styled groups are much more /thing/ than other people expect them to be. (My own expectations, OTOH, are expected to be accurate because subjectivity.)
Duncan’s rule one is “A Dragon will protect itself”.
I don’t think whether something is physical would be the prime distinction but whether the harm is substantial. If following a norm would likely result in someone losing his job, that isn’t physical harm but substantial harm that likely warrants violating the norm.
“roughly 90 hours a month (~1.5hr/day plus occasional weekend activities)” My math says those weekend activities have to total as much as the 1.5 hours every day does: roughly 10 additional hours every weekend.
“Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of “keep paying until you’ve found your replacement.” ”
It seems counterproductive to have people who have left the experiment living in the same house until they are replaced. Exit terms such as ‘two months notice, or less if a suitable replacement can be found or otherwise agreed’ are less coercive.
Yeah, your exit norm is more what I was looking for. Thanks for the rework/reword … I’ll update it to something more like that soon.
The actual number we’re shooting for is 30h/week, but not 30 hours every week. More like 20 hours most weeks and 40 or 50 every now and then.
21 hours most weeks is 3 hours per day, or 2 hours during each weekday and ~10 for the weekend. Just making sure that your daily and weekly estimates don’t contain math errors, not saying anything about the sufficiency of those numbers.
Oh, goodness, you’re actually completely right. I just dumbbrained. The goal is 21 hours per week, on average, but with most weeks having more like 12 hours and some having more like 40.
The numbers are somewhat higher in the beginning both a) because it’s easier to relax expectations than to tighten them, and b) I do suspect we want to frontload the togetherness and do more individual stuff after norming and bonding.
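For concreteness, here is a quick back-of-envelope check of the two figures being reconciled above; the four-weekends-per-month split is an assumption for illustration only, not something specified in the thread.

```python
# Back-of-envelope check of the time-commitment figures discussed above.
# Assumption for illustration: 30 days and 4 weekends per month.

daily_hours = 1.5 * 30                 # ~1.5 hr/day over a month -> 45 hr
monthly_target = 90                    # "roughly 90 hours a month"
weekend_hours = monthly_target - daily_hours
print(weekend_hours / 4)               # ~11 hr per weekend needed to reach 90 hr/month

weekly_average = 21                    # corrected goal: ~21 hr/week on average
print(weekly_average * 52 / 12)        # ~91 hr/month, consistent with the original 90 hr figure
```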
For the record: not for me. At all.
I am Jack’s complete lack of surprise.
I’m curious whether the not for me is “there are different kinds of people and different kinds of brains and different kinds of personalities, and they actually sometimes need different nutrients, and this one is bad for Lumifers,” or whether it’s “there’s something fundamentally broken here that I’m particularly sensitive to but others are more able to tolerate.”
If the latter, I’d love it if you ever manage to put it into words. The goal is to avoid as many of the Stupid Things as possible.
So, I have actually lived in a semi-authoritarian culture, and have a sort of unique experience of seeing high rates of autism function under that culture (and let’s not deny the high rates of autism in this subculture). While this doesn’t sound like “cult” to me, I can think of a couple ways gratuitous harm could occur even if everyone is operating in good faith.
1. Person A harms Person B. Person B realizes that their violation threshold is much lower than they thought when they signed on, and they want to bring it up for discussion, but you and Person A have a much better rapport than you and Person B. And Person B was uniquely attracted to this because they need their self-care to largely be outsourced to a group structure. So they don’t actually have the skills they need to be agenty outside of group expectations, and simply continue to be harmed while being unable to bring it to anyone’s attention until it’s much too late to repair the relationships. I’d like to present myself as someone who has gotten feedback along the lines of “you’re competent and mature” and who still does this sort of thing. It’s not something that’s easily predicted by the person or by people observing them.
2. As mentioned in (1), simply outsourcing functionality to a group structure can leave people helpless when they have to act against the group or act without the group. I don’t see much thought put towards transition plans for people when they leave DAB. Relating back to the childhood and adolescent experiences I claimed gave me insight into this, I have seen a lot of people flail once their version of the role you’re taking here is gone. And they get hurt. This applies even more to people who’ve required extra structure to function, as in the case of autism (and I am one of those autistic kids that flailed). You might say that people are accepting that they will get no transition help once they leave the immersive, structured environment you’re creating, but it seems naive to not at least prep them for the struggles they might have.
2a. Transition is even more important given that this is a necessarily isolating endeavor. The things you’re proposing take a ton of time! People will be making a lot of interpersonal sacrifices to participate, and that will degrade whatever safety net they think they’ll have if they leave.
Personally, I’m trying really really hard to separate criticisms from an aesthetic distaste and the fact that this looks like things I have been actively harmed by, when the people in charge were people who loved me and had my best interests at heart. So, apologies, because this comment is definitely biased by that.
As far as “there are different kinds of people and this is bad for helldalgos” goes, this is bad because I would do something like this if I tried to participate: outsource most of my functionality to group norms, overstate my ability to be transparent enough to function in a high trust environment like this, end up hiding rule violations, feel guilty, become dishonest, and have periodic emotional and mental breakdowns where I burn all of my relationships in the house to the ground. The fact that I behave like this under authoritarian structures might be a problem, but it’s not one that’s fixed all at once by starting an immersive group project where someone is in charge of me. I said a few hours ago to someone else that I would definitely participate if I didn’t have so many roots where I live now and if I could actually stand living in the Bay, but upon reflection, I think not.
This is outstanding, and I appreciate you taking the time to write it up.
I think 1) is an interesting and important dynamic that I had not previously thought about, and I’m curious if you have concrete thoughts as to how to repair it. I think that simply acknowledging it, and committing to accede to opinions-other-than-my-own in evaluating whether it’s going on, is an important first step but only gets like 15% of the way there. Similarly, I think norms of regular retrospectives and Circling-type environments will make it marginally more likely that people can bring this stuff forward and get it addressed, but not entirely because anxiety, status, etc.
My first brainstorm there produces things like anonymous feedback structures, “interrupting” norms where people are free to call things to a halt, requests-to-speak-privately and requests-for-third-party-mediation as strong guaranteed “yesses,” and maybe something like a norm that people can call for discussion or mediation to halt until their ideological Turing test has been passed? e.g. I can’t just brush past your claim of harm; you have an absolute right to stop things from moving forward until you are satisfied that I at least understand the magnitude of your internal experience, even if I disagree with your description of what happened externally.
As for 2), it’s an ongoing conversation; back-and-forth in these comments has already produced a lot of clarity on both non-defecty, structured ways of leaving, and also urgent, ejector-seat methods. (I’ve been a little slow to post concrete details because I claim the person clamoring for them loudest isn’t engaging in good faith, but I’d be happy to PM). My current sense, though, is that these structures, while they should be put in place as soon as possible, should also be discussed with the group, rather than emerging entirely under my models.
Thanks again, particularly for your separating criticisms from aesthetic distaste—I feel you absolutely succeeded at that goal, and I felt that your comment was both a) actually valuable and b) entirely constructive.
I’m not sure how to solve it except to avoid authoritarian structures, which is obviously counterproductive for your project. I would recommend taking any opportunity you have to exhibit through actions that fairness can be expected despite your existing rapport with someone. The things you suggested are helpful but not comprehensive. You could screen for anxiety, but this behavior can be found in people who wouldn’t otherwise consider themselves anxious. And it’s not entirely fueled by anxiety, either.
I like the “interrupting” norm idea; I can see it becoming prone to a weaponized-victimhood sort of abuse but that would be easier to see and stop from the outside than the dynamic it could solve. And if someone is constantly claiming that they’ve been harmed, that’s probably a good sign that DAB isn’t a healthy environment for them anyways.
I would be louder about insisting on plans for various types of leaving if I had more of a stake in this project. If I were planning to participate or someone I cared about was, I would be insisting on it with a similar degree of intensity as the other comments you’re referencing. That’s a major part of what will keep it from being what some people are calling abusive, but that I think belongs under the wider umbrella of “dysfunctional.” You’re right that it should be collaborative, and I don’t expect graceful exit plans to leap fully formed from your skull, but yeah. I endorse that level of intensity in terms of expressing just how important exit plans are.
I should admit that as of a few hours ago I have an ongoing bet about the year-long success of this project, and I took the pessimistic view (for reasons other than abuse or harm). I was also incredibly worried about people getting hurt when I read through the document and some other comments. But, having talked to some other people that know you and reading other things you’ve said, I am definitely less worried about harm through intent or negligence than I was. I am still pretty concerned about harm through ignorance or dynamics inherent in the interaction between the people and the system.
Also, excellent post from Slatestarscratchpad that sums up (I think) something like 85% of the fundamental disagreement:
Yeah, I saw that earlier. In my case, I’m not panicked (or at least, I quickly became not panicked) about rampant abuse, and I also have not been directly exposed to a lot of abuse. My concerns are more about ways I’ve been directly exposed to harm by authoritarianism with good intentions. It’s no coincidence that that is what I was inclined to bring up. Since I’m probably not unique, there’s probably something worth taking seriously in every complaint. But everyone is probably weighting their own concerns the most. So that summarizes to something like:
-abuse is often perpetuated in structures that share significant characteristics with DAB, and you should think about specific plans to avoid abusing people
-there are unique systemic issues with authoritarian structures that facilitate unsustainable dysfunction even when no individual person is deviating much from normal behavior
-sex and romance will cause problems and it might be worth restricting behavior inside the house
-etc (I have not read every single complaint)
+1 to all this. In particular, if my pendulum swing model is correct, the new position of the pendulum (extreme aversion to the risk of abuse) is a result of the pendulum’s previous stuck point being “a lot of people suffering abuse in these kinds of environments.”
I’m proposing swinging back toward the old norm and trying not to cross the ideal point, and I agree it’s a hard problem. Posts like yours are excellent for improving models and reducing risk as a result.
I think it’s okay for people to bet against it; we’re going to have a large betting norm within the house. If nobody bet against, I wouldn’t have anybody to bet with!
Exit plans are now #1 on the “to finalize” list, and have had multiple eyes on. I strongly endorse the way that LW has focused me toward that part of things, which I was underweighting. However, I also note that some people SHOULD ultimately still be dissatisfied with the exit norms, and therefore choose not to participate. Like, that’s part of the definition of high-stakes, high-commitment—it’s not for everybody, and in fact if everybody were on board with the exit norms being sufficient it … wouldn’t be much of anything?
The key, in my opinion, is being clear clear clear clear clear, and that particular part of it was not clear enough to the potential participants, and it will be now.
Thanks again for your willingness to write things up.
Mostly the former. I am an individualist and dislike collectivism. As befits a proper individualist :-) I also recognize that people are different and what’s good for me is not necessarily good for thee. I can survive and function in collectivist environments like you propose, but I don’t like them and don’t see a good reason for me to be there.
As to the latter, it’s hard to do a pre-mortem on something that’s still in flux. Communes of different kinds—from monasteries to kibbutzim to hippie communes—have been around for many centuries and clearly some people like them and find them useful. There’s enough history (which I’m not all that familiar with) to learn where the common pitfalls lie and what the major trade-offs are that you would be facing. I can’t recommend a book, but I’m sure there are a few.
Generally speaking, I would expect the most likely mode of failure to be the way power dynamics develop. Authority and power are complicated and deadly—tightly-knit communities can go very bad quickly this way (consult your favourite cult group horror story). Adding sex to the mix generally makes things… more volatile. The rationalist community doesn’t strike me as being particularly capable of managing power issues.
I agree with Lumifer.
Emotionally, the whole proposal strikes me as cultlike in a bad way. I can’t defend that as a factual claim since I only skimmed the post (precisely because it is not relevant to me), but I am pretty sure that living in such a situation even for a short while would make me feel very, very bad.
Same question posed to you—to the best of your ability to tell, is this a bug in the system, a bug in you personally, or a simple instance of diff’rent strokes for diff’rent folks? And if a bug in the system, can you point straight at it?
Speaking entirely for myself: You are proposing a dangerous venture. The path is littered with skulls. Despite this, you have not provided any concrete discussion of safety. When people have brought the subject up, you’ve deflected.
I suspect you haven’t actually poked around in all of the comments; if you spend five minutes looking and still can’t find it, I can point to multiple places where I’ve provided concrete discussion of safety.
The biggest concern/red flag for me is one aspect of the authoritarian nature of the project. I would be perfectly fine with fully outsourcing decisions (giving higher intellectual status) but not with being a subordinate in full generality. What I’m trying to point at is the difference between “What should I do? He said to do ‘x’ and I trust his expertise, so this is my best option, and I’m going to make myself do it even if it’s unpleasant” and someone forcing me to do the thing.
Which of the two would be my intuitive reaction depends mostly on your character and attitude, and that is something completely missing from the discussion so far. Hopefully that’s because people know you and are sure it wouldn’t be a problem, but your comments here only show competence; they don’t exclude arrogance, or enjoying power too much and beginning to boss people around. I found the comparisons to military bootcamps and the talk of tyrants concerning, as they paint an image of “someone shouting at people to do stuff,” which I expect to have severe negative effects and to build up resentment quickly. In other words, it seems to me that constraining your image strictly to the person who decides what is to be done, as opposed to someone who also enforces the execution, would reduce the risk of the experiment failing. Enforcing by regulating incentives should be fine, as it won’t speak to System 1 and provoke the low-level “Who are you to tell me what to do?” reaction.
Maybe it’s an obvious point that having a nice and respectful leader is better than a powerful tyrant, but I’m not sure how far I can generalize from my own preferences, so I decided to share anyway. Apologies if this doesn’t make sense or wastes your time; I’m new to posting here.
This is a clear and cogent point, and thanks for posting it.
I suspect the authoritarian stuff is a necessary catalyst, to get the group cohered together and working, and after an initial period it becomes less and less useful. For instance, I think a major part of the thing is getting everyone to be in the same room at the same times, and that happens fastest and becomes ingrained easiest if someone’s just dictating the time (after reasonably accounting for everyone’s constraints and preferences).
But once everyone’s all in the same room, I don’t think it makes too much sense for an authoritarian to dictate what happens. Like, I think the useful thing is something along the lines of “well, if you all can’t decide where we’re going to eat, then we’re getting pizza”—my plan is to set a minimum bar of “this is a useful thing to be doing,” and to demand that we do at least that, but to in no way restrict people from coming up with something better/more effective/more worthwhile.
So, we start off by having morning exercise and weekly dinner, and then over time, people who are chafing at the morning exercise get to say, “Hey, you know what would be a better use of this slot of togetherness that is taken as a given? Doing X or Y or Z.” The authoritarianism is there to support the scaffold, but is not there to say what grows on it, except in the most general sense of “let’s try to improve” and “let’s lean toward important stuff rather than trivial.”
I also note that I’m somewhat overemphasizing the authoritarian bit, because I expect it’s the most difficult piece to swallow, and I want to really really really really really make sure that I don’t undersell how strict things will end up being. It seems way worse to lose a couple of people who would’ve liked it because I made it sound too restrictive than to include people who are going to be trapped and unhappy because I didn’t give them enough warning.
I would like everyone posting criticism, especially heated criticism, to keep very firmly in mind that Duncan did not have to write this. Whatever your opinion of him, at least make sure you’ve factored in the evidence that he wrote this whole, weird thing, complete with references to Ender’s Game, Fight Club, etc. instead of writing either 1) nothing or 2) something much more reassuring.
There are critics who think Duncan is incompetent and overconfident, and about this hypothesis I can say at least that it is consistent with Duncan having written this post. Then there are critics who think Duncan is, I dunno, evil or power-hungry or something, and I think those people are mostly failing to see what is in front of them.
The whole point of him posting this was to acknowledge that he is doing something dangerous, and that we have a responsibility to speak up. To quote him exactly: “good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked”.
His refusal to address basic safety concerns simply because he was put off by my tone is very strong evidence to me that people are indeed being hoodwinked. I don’t care if the danger to them is because he’s incompetent, overconfident, evil, or power-hungry. I care that people might get hurt.
(I would actually favor the hypothesis that he is incompetent/overconfident. Evil people have more sensible targets to go after)
I think you’re confusing “refusal to address basic safety concerns to handoflixue directly” with “refusal to address basic safety concerns at all.” I deny your right to judge and interrogate me, because of your failure to exhibit clear thinking and good discourse. I’ve engaged with those very same points in many other comment threads, though—there are literally only three people in this entire thread for whom I’ve determined that the EV of digging into their perspective is not worth it.
I note that there’s a bet waiting in the wings to lend your harsh words credibility. You could charitably offer to donate your winnings to salving the pain of the people you claim to care about.
Duncan,
I think you’re dramatically underestimating how your responses are being read by third parties. Your style of response to handoflixue specifically has made at least one person I’ve spoken to decide to avoid giving you well thought out criticism out of fear of you yelling at them and being very confrontational.
shrug
If you stumble upon a schoolyard fight, and immediately assume that the person you see punching is fundamentally violent and has high odds of attacking you, I think you’re skipping an important step of checking to see whether they’re the bully or whether they’re defending themselves. Most of us have had the experience (either direct or vicarious) of being absolutely infuriated by the people who try to pretend like there’s a perfect symmetry between the punch thrown by the aggressor and the punch thrown by the defender—it’s not hypocritical to both support “not starting fights” and “being willing to end them.”
I am aware of the risk of losing people around the edges, yeah. But I can’t do anything except point to the scores and scores of other responses (it might be over a hundred by now) in which I’ve thanked people for critique, responded in depth, updated visibly in real time, etc.
People get anxious, and maybe they disengage. But anyone who’s not going to be openly and unjustifiably uncharitable has nothing to fear from me in particular. I’m not going to not stand up for myself against bullies and trolls, even if it costs me some quiet whispers that would’ve contained good content.
Everything is tradeoffs. To put it another way: The person who’s refusing to give me their well-thought-out criticism is a) unable because of costs/time constraints to look further and see that my claim they have nothing to fear is credible, b) themselves jumping to unfounded conclusions based on less data than they have available to them, or c) someone who has looked at the whole exchange and still judges me to be at fault.
If a), then fair play—this is nobody’s first priority except mine, and I don’t feel entitled to everyone’s opinions; it’s perfectly reasonable to have a policy of not spending a lot of time if your first impression is strongly negative.
If b), and they have time to look but are choosing not to and running with a strawman without questioning their own conclusions, then … well … it probably wouldn’t have gone well anyway.
If c), they’ve followed the whole chain in chronological order and they still think I’m at fault, then that just means we have strongly differing priors on right and wrong/acceptable and unacceptable, and once you get down to values on that level, I don’t know how well we’d be able to pass one another’s ITTs anyway.
To the best of my ability to judge, handoflixue’s earlier comments (e.g. above and below this comment) were absolutely dripping with assumption-of-evil-intent, outright insults, unfounded leaps to Harsh Judgments of my fundamental character, poor logic, fallacious smears, and so on and so forth. They dropped into the thread after there were already over a hundred comments, including many where I’d demonstrated credible evidence of good faith and willingness to change my mind, which they completely ignored. They continued to ask loaded, unfair questions and set up strawmen over and over and over, with at least a dozen posts containing both deontological hostility and bad epistemics. They then offered a single token apology, conditional on an “if” (if their tone had been too harsh), rather than just saying “sorry, I crossed the line,” as I myself have done in these comments at least twice, and dropped the overtly hostile tone while continuing to subtly insinuate that I’m a bad actor in every post.
(I note that in the places where they didn’t do this, I answered them in the same way I was answering everyone else, up until deciding to disengage on a policy level.)
Given that my stated role model is Ender Wiggin, if somebody thinks handoflixue’s approach is okay, or thinks that I shouldn’t have defended myself, then it shouldn’t be surprising that I claim, as my personal opinion, that their moral compass is drastically askew. There’s a different question about whether I’ve marginally erred, e.g. by being 15% too defensive, but that shouldn’t trigger someone who’s not going to be hostile in the first place to be afraid.
Just pondering this passage. Interesting.
Fine. Reply to my OP with links to where you addressed other people with those concerns. Stop wasting time blustering and insulting me—either you’re willing to commit publicly to safety protocols, or you’re a danger to the community.
If nothing else, the precedent of letting anyone recruit for their cult as long as they write a couple thousand words and paint it up in geek aesthetics is one I think actively harms the community.
But, you know what? I’m not the only one shouting “THIS IS DANGEROUS. PLEASE FOR THE LOVE OF GOD RECONSIDER WHAT YOU’RE DOING.” Go find one of them, and actually hold a conversation with someone who thinks this is a bad idea.
I just desperately want you to pause and seriously consider that you might be wrong. I don’t give a shit if you engage with me.
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
Also: If you refuse to give someone evidence of your safety, you really don’t have the high ground to cry when that person refuses to trust you.
Scott just chimed in against:
One particularly dangerous failure mode is that people may lose the capacity to recognize when the situation is toxic, unhealthy or counter-productive. The sunk cost fallacy is a powerful thing, as are the effects of strong emotional attachment. You may want to consider having a mandatory short vacation period from the house. This will allow people to take some space to get perspective on the house.
You also may want to mandate external social supports such as therapy, external friend groups, etc.
Agree. I think there are ways to do this without making it seem scary or unnatural, like “everyone visits family for a week around Thanksgiving”.
This seems entirely sensible. We’ve already decided to institute the outside friend check-in, and occasional vacation is an obvious extension of that.
I find this project very interesting! I can imagine an alternate-universe version of me being super excited to join it. I think it’s even possible that the this-universe version of me could benefit a lot from joining it. (I would see most of the benefit for myself in solving Problem 2, I think.)
But… I think there is not more than an 80% chance I would make it 6 months in such an environment without hitting the eject button to preserve my own sense of (physical or psychological) safety. (That is, a chance of at least 20% that I would hit the eject button.) I do think it’s great that Code of Conduct rule #1 encourages people to protect their own safety even at the cost of leaving the project. (Although for people of limited economic means this might be hard to execute, given the need to find a replacement, so probably “has the means to deal with needing to leave if the project doesn’t work out” is a screening factor.)
It’s possible this is just a fact about me, more than about the project. But I don’t have the sense that a lot of other members of the rationalosphere would well tolerate, say, an actual military boot camp environment, which feels a lot like the direction this is aimed. It’s possible I’m misunderstanding the degree of control you / the project expects to exert over the lives of the participants. But I know that I got happier when I adopted the rule that adulthood means never letting anybody force me to do anything that feels unsafe, even if refusing has significant costs. (For comparison, my largest concern about going to a CFAR workshop was that being subjected to a “comfort zone expansion” exercise, while in remote woods, with complete strangers, on a sunk cost of thousands of dollars, would be a high-stakes problem if I didn’t like how it went. Pete Michaud correctly disabused me of this concern during the interview.) Again, perhaps this just means that Dragon Army is not for me. But I’m curious what you think about it. It seems hard to imagine I could go 6 months of committing to try to perfectly execute all the stated rules plus one experimental norm per week without ending up in at least one situation where following the rules felt unsafe.
Separately, I’m interested in whether you think Problem 4 could be tackled separately from an all-consuming project like Dragon Army. I feel like I have seen the “desperately hoping nobody will bail after the third meeting” thing a lot before, but usually the context is “a bunch of people vaguely want to get a thing done but nobody has really committed to it”, in which context bailing after the third meeting is not violating any norms or agreements. Without making any new norms, one already has the option of actually asking for explicit commitments, rather than just seeing who shows up, and I think this option is not used often enough. I guess the failure mode of trying to solve Problem 4 alone is, if you ask for explicit commitments, you discover that people just won’t give them in the first place. Dragon Army seems like a big hammer to solve this but maybe it’s the only way?
I think the main issue here is culture. Like, I agree with you that I think most members of the rationalsphere wouldn’t do well in a military bootcamp, and I think this suggests a failing of the rationalist community—a pendulum that swung too far, and has weakened people in a way that’s probably better than the previous/alternative weakness, but still isn’t great and shouldn’t be lauded. I, at least, would do fine in a military bootcamp. So, I suspect, would the rationalists I actually admire (Nate S, Anna S, Eli T, Alex R, etc). I suspect Eliezer wouldn’t join a military bootcamp, but conditional on him having chosen to do so, I suspect he’d do quite well, also. There’s something in there about being able to draw on a bank of strength/go negative temporarily/have meta-level trust that you can pull through/not confuse pain with damage/not be cut off from the whole hemisphere of strategies that require some amount of battering.
It makes sense to me that our community’s allergic to it—many people entered into such contexts before they were ready, or with too little information, or under circumstances where the damage was real and extreme. But I think “AVOID AT ALL COSTS! RED FLAG! DEONTOLOGICAL REJECTION!” is the wrong lesson to take from it, and I think our community is closer to that than it is to a healthy, carefully considered balance.
Similarly, I think the people-being-unreliable thing is a bullshit side effect/artifact of people correctly identifying flexibility and sensitivity-to-fluctuating-motivation as things worth prioritizing, but incorrectly weighting the actual costs of making them the TOP priorities. I think the current state of the rationalist community is one that fetishizes freedom of movement and sacrifices all sorts of long-term, increasing-marginal-returns sorts of gains, and that a few years from now, the pendulum will swing again and people will be doing it less wrong and will be slightly embarrassed about this phase.
(I’m quite emphatic about this one. Of all the things rationalists do, this one smacks the most of a sort of self-serving, short-sighted immaturity, the exact reason why we have the phrase “letting the perfect be the enemy of the good.”)
I do think Problem 4 can probably be solved incrementally/with a smaller intervention, but when I was considering founding a house, one of my thoughts was “Okay, good—in addition to all the other reasons to do this, it’ll give me a context to really turn a bazooka on that one pet peeve.”
Eliezer wasn’t able to complete high school, for what I suspect are related reasons. (The sleep thing may have contributed, but I think it was overdetermined.)
I think I would have been extremely miserable if I had gone through boot camp at 18; I think I would have been able to bear going through it by ~25.
I think a relatively tight analogy can be made between attitudes towards the authoritarianism of a military bootcamp and attitudes towards romantic relationships. Like, if you go through a string of really bad relationships with partners who consistently abused you, you might update that there’s something inherently abusive about relationships and that you just shouldn’t be in one again, ever, because your autonomy is too important. On the other hand there is such a thing as a healthy relationship, even a healthy relationship in which you have less than perfect autonomy because you’ve made some commitments that you’re following through on, and you might be lucky enough to find yourself in one in the future if you’re open to the possibility and search carefully for someone to commit to.
I think I disagree that the pendulum will swing back in the future though. The rationality community being the way it is now, prioritizing flexibility the way it does now, probably has the property that it attracts people who are prioritizing flexibility and turns off people who are looking for reliability. So if anything I expect the problem to get worse over time unless someone makes a deliberate effort to attract looking-for-reliability sorts of people—hopefully Dragon Army can do this.
I don’t get the analogy. So, if you go through a string of really bad military bootcamps? But you need to stay open to the possibility of a really good bootcamp that you can and should commit to?
Yes, but using “military bootcamp” as a symbol of broader kinds of authorities you could submit to, e.g. schools, employers, governments, and keeping in mind that people are learning about how authorities work based on others’ experiences and not just their own.
As someone who’s done the whole military thing (am I alone?), I agree with your view that most members of the rationalsphere would struggle immensely in bootcamp, both in terms of physicality and culture (I’m referring mostly to the Army and Marines here, which focus on actual combat training vs. the Air Force and Navy that don’t).
I totally agree that you would have 0 problems (other than patience with the stupid parts) as you have a high degree of physical ability, emotional resilience, and general cognitive ability. You would very likely excel. I could say the same of Val and Pete, and I’m sure Eli would do well (I don’t know the others you listed well enough to venture a guess).
I have never met Eliezer. However, I suspect he would struggle a great deal and be unlikely to succeed from what I’ve read and been told. I can’t imagine Eliezer playing say football well either. My model of him just says he’s simply not optimized for that kind of environment where his intellectual strengths would be limited and his weaknesses amplified. It’s just not a remotely optimal environment for someone who is (according to my model of him) built like a race car, extreme performance within strict parameters (flat track, maintenance, etc.).
And that’s okay. The military enlisted system at least typically focuses on taking both physical and intellectual generalists and training them to perform a specific job. It’s all about the averages. The cockpit is decidedly not adjusted for individual needs or specialized performance for the vast majority of military personnel.
I do hope you’re at least somewhat right about the long-term, increasing-marginal-returns sorts of gains, since that’s my current strategy for achieving high impact on important matters.
You may wish to consider that this community has a very high frequency of disabilities which render one non-consensually unreliable.
You may wish to consider that your stance is especially insulting towards those members of our community.
You may wish to reconsider making uncharitable comments about those members of our community. In case it is unclear: “this one smacks the most of a sort of self-serving, short-sighted immaturity” is not a charitable statement.
Oh, I missed this one in the shuffle. Note that you chose to quote less than half a sentence, because if you quoted the whole sentence you’d have a heck of a time setting up the strawman you wanted to knock down.
Hi Duncan, I’m a relative newcomer (this is my first LW thread, though I’ve participated in rationalsphere discussions elsewhere), so this may not carry much weight, but I want to somewhat agree with handoflixue here.
One of my stronger reactions to your post is “this is an impossible set of expectations for me and a lot of others”. Which is fine, obviously you can have expectations that some people can’t live up to, and of course it is very good that you are making these expectations very clear.
But I sort of get the sense that you are a person who is fundamentally capable of being reliable and regularly making good life choices pretty easily, and that you sort of don’t get that for a lot of people these things are really hard even if they understand what the right choice is and are legitimately trying their best to do that.
This is based only partly on your post and somewhat more on a mini-talk which (IIRC) you gave at a CFAR community night where you posed the question “does it even make sense for people to seek out advanced rationality techniques such as the ones discussed here when they’re not displaying basic rationality such as eating a reasonable diet and sleeping enough?”. Even then, this question struck me as dangerously wrong-headed, and now that you are proposing to be in charge of people, this seems to take on more importance.
Advanced rationality techniques, at least when applied to one’s self-conception and life choices, are basically therapy. “Failures of basic rationality” are often better described as “mental health issues”. Therapy is how you deal with mental health issues. People with mental health issues need more therapy/advanced rationality, not less! I’ve seen it hypothesized that one reason we have so many mentally ill rationalists is because people with mental health issues must learn rationality in order to function, at least to some degree that is more than most people need.
I don’t actually know you, so my information is pretty incomplete, but my impression is that if someone fails to act in a way you (and they!) think is reasonable, you’re likely to become baffled and frustrated and try to deal with the problem by imposing stricter expectations & consequences. This might work for some people, but for many, it will just make them miserable and less productive because they will be angry at themselves for failing at things that they “should” be able to do.
I think it’s likely that your way of dealing with this is basically to screen out the people who are likely to react poorly to your approach, in addition to causing others like me to self-select out. That’s fine, I guess, though I would still be on the lookout for this sort of issue as a possible failure mode, and maybe also just demonstrate more compassionate awareness that things like reliability are actually almost impossible for some people, and maybe not attribute all of this to having the wrong culture or mindset.
(My general opinion of your project is “this sounds scary and I want to stay very far away from it, and this makes me somewhat wary of the people involved, and I wouldn’t recommend participation to people I know, at the same time I am really curious about how this will go so selfishly I’m a little glad it’s happening so I can gain information from it”.)
Thanks for the long comment. I really appreciate your candor and perspective—I do think I get the fact that other minds don’t work like mine, but you’re right in sniffing out that a lot of that knowledge is top-down and parts of me are still instinctively typical-minding a lot. I work hard to remind myself, e.g. I have triggers on certain words or feelings that cause me to review memories of specific times when my assumptions about what was going on in someone else’s head were blindingly false.
I think I generally agree with you that there’s a large overlap between rationality and therapy, and I’m intrigued by the hypothesis re: mentally ill rationalists; it seems to be pretty plausible.
Here’s my actual plan if someone fails to act in a way that seems reasonable. Note that this is the “everything but the kitchen sink” option, including aaaaaallll of the steps one might take, and that for smaller disagreements, this can be done as a speed run or stepwise.
Determine whether to follow up in the moment or later based on the needs of the activity, determine whether to follow up in private, in group, or via delegation based on the apparent needs of the person.
Start by asking. What did they think was going on? What were their thought processes? Assume from the outset that people act in consistent, coherent ways, and that basically everyone is trying to make the world a better place.
Try to pass their ideological Turing test. In other words, try to reflect back to them the priorities they were holding and the goals they were attempting to achieve, and keep falsifying my hypotheses until they give a clear endorsement of my summary.
Ask them to model me, in return (note: one important subthread of how the house will run is a check-in along the lines of “is Duncan clear, consistent, and model-able?”). See if they can predict what my priorities were, and if they have a sense of what I’m reacting to. Do not make this some kind of sick high-pressure quiz dynamic … if they shrug and say “dunno,” I’ll just explain.
Try to lay out, from as birds’-eye as possible a perspective, the conflicting goalsets. Point at the causal chains that brought them into conflict, and highlight my model of where things are broken. Ask them if they have a different model/let them update my picture with a better sense.
Form a new plan for the future; explicitly discuss weighing the goals against one another, and how they ought to stack up. Possibly include other people in the discussion at this point, particularly if the defection seemed to have externalities.
Assume that plan failed. Come up with a plausible explanation for why; try to patch the first or second obvious holes. Form an intention going forward.
Check whether reparations need to be made. Hopefully, there’s a standard formula (as in the pushups example). If not, do a similar process of attempting to converge on a good face-saving/balance-restoring action. If there isn’t a clear satisfactory solution, default to a compromise and schedule a future check-in.
Through all of this, run things by others if either party thinks that’d be beneficial. Also consider things like anxiety/introversion, and have the conversation at a deliberate time rather than forcing it if it’s not urgent.
So yeah, in a sense, this might result in stricter expectations and consequences, but not in a blind, top-down way. In situations where there needs to be an immediate response, I’ll take an action/give an order and expect it to work, but I’ll want to revisit any such quick authoritarian moves after the fact, to explain my thinking and confirm absence of undue harm (and apologize/make amends of my own if necessary).
Overall, though, the idea is to build a high trust environment, and trust goes both ways and is easier to lose than to gain. The thing I want people in the house to actually be justified in believing is “Duncan always has good intentions and is making decisions from some kind of a model. He’ll explain when he can, and if he doesn’t, it’s because he has another model saying why he can’t, and he’ll instead explain both models once the thing is over.”
The idea being that I prove trustworthiness in situations 1-8, and people grant me a little leeway in situation 9. But 1-8 definitely have to come first.
This reminds me of Romeo’s comment over here:
http://lesswrong.com/lw/oym/how_id_introduce_lesswrong_to_an_outsider/dryk
Um. Quick reply before I go further—I’m really really confident that the community talk night thing you’re remembering either wasn’t me or that the quote doesn’t resemble what I said. I strongly agree with you that that’s a dangerously wrong-headed way to try carving up the world.
Oh, sorry for that mistake, then! Probably it was someone else. feels mildly embarrassed
I’m glad to hear you agree with my assessment of that way of thinking. In that case not very much of my comment actually stands.
Thank you for your thoughtful response!
Doesn’t Eliezer delete comments on Facebook that suggest exercise as a means of weight loss?
That’s not because he didn’t do the exercise. Bootcamp doesn’t care if you lose weight; they only care whether you execute the weight loss program. If you don’t meet the body proportion standards, you just have to perform extra exercise.
Bootcamp (i.e. the military) cares very much about both losing sufficient weight to meet the standard as well as the ability to perform at a basic level of physical fitness. The different U.S. military services have differing standards, but the general requirements are all comparable.
In an environment where the food supply is tightly controlled and there is constant movement, people tend to lose a lot of weight quite rapidly.
However, if you don’t meet the body proportion standards after a certain time, you will be separated from the military.
Part of the program is separating people who don’t lose weight. That doesn’t mean they care about the height/weight, only that the next box is ‘process for separation’.
There’s not a lot other than adherence to procedure that most of the military actually does care about.
I’m not sure if I’m totally missing your point, or if you’re making a point that’s a distinction without a difference.
In Army basic training, there are two standards one must meet:
-height/weight, adjusted for age and gender
-PT test, which consists of push-ups, sit-ups, and a 2-mile run, with scoring adjusted for age and gender
Failing either one will get you chaptered out of the Army within certain timeframes. There is a lot of fine print for specific situations (basic training has some extra cushion), but that’s the ground truth. These same principles apply to the military at large, but the standards and fine print differ.
I don’t know how that squares with: “That doesn’t mean they care about the height/weight.”
In an organization so devoted to adherence to procedure, what the procedures are set up to be is often a pretty strong indicator of what the organization cares about...
No individual cares about anything other than the procedures. Thus, the organization as a whole cares only about the procedures. With the procedures that exist, the behavior looks similar to caring about fitness, but there is also a procedure to change procedures.
If the organization cared about fitness, the procedure to change the height/weight standards would be based on fitness. As it is, it is more based on politics. Therefore I conclude that the Army cares more about politics and procedures than fitness, and any behavior that looks like caring about fitness is incidental to their actual values.
With respect to power dynamics point one and two, there is another person known to the community who is perhaps more qualified and already running something which is similar in several respects—Geoff Anders of Leverage Research. So I don’t think this is precisely the only group making an attempt to hit this sort of thing, though I still find it novel and interesting.
(disclaimer: I was at the test weekend for this house and am likely to participate)
Yeah, Geoff and Leverage have a lot I would love to look at and emulate, but I haven’t been running under the assumption that I’d just … be allowed to. I’m beginning some conversations that are exciting and promising.
That being said, I do think that the overall goals are somewhat different. Leverage (as far as I can tell) is building a permanent superteam to actually do stuff. I think Dragon Army is building a temporary superteam that will do stuff in the short and medium term, but is more focused on individual leveling up and sending superhero graduates out into the world to do lots and lots of exploring and tackle a wide number of strategies. My model of Leverage is looking for the right thing to exploit on, whereas I’m looking for how to create competent people, and while there’s a lot of overlap those are not the same Polaris.
I similarly think Geoff is highly competent and certainly outstrips me in some ways (and possibly is net more qualified), but I’d posit I outstrip him in a roughly similar number of ways, and that he’s better matched for what Leverage is doing and I’m better matched for what DA is doing (sort of tautologically, since we’re each carving out mountains the way we think makes the most sense). I think the best of all would be if Geoff and I end up in positions of mutual respect and are able to swap models and resources, but I acknowledge he’s a good five years my senior and has no reason to treat me as an equal yet.
EDIT: Also note that Geoff is disqualified by virtue of already being busy, and as for “just join Leverage,” well … they’ve never really expressed interest in me up to this point, so I figured I wouldn’t bother them unless I was no longer employed day-to-day.
What do you think are the key advantages & disadvantages of your Polaris vs Leverage’s? How does this relate to methods?
I dunno about “key.” Open-ended brainstorm, keeping in mind that my models of Leverage are vague and straw and NO insult is intended if I get things wrong …
Leverage advantages—provides a discriminator that lets you tell more accurately who fits and who doesn’t, sounds better if your goal is to accrue funding, is better if your goal is to return money to an investor, provides your participants with a strong mission that they can write in their hearts rather than a vague one that might be hard to grasp, gives you a narrowing principle that helps you discard certain kinds of growth as irrelevant/boon-doggle with reasonably high confidence
Leverage disadvantages—seems (from my limited outside vantage point) to require people to more closely conform to the shape of the leader/take on a singular mission rather than allowing for different colors in the spectrum, seems to fall prey to the intellectual property and get-there-first problems that encourage isolation from the broader network of allies, (maybe) requires you to somewhat distort what you’re doing to please investors, (maybe) requires you to strike the balance between deciding-too-soon and being-decision-paralyzed because you have to cohere around a smaller number of goals at a time
Dragon Army advantages—adheres (slightly) more closely to what the average rationalist wants and thus opens you up to a (slightly) wider range of participants, causes members to gain leadership and facilitation skills of necessity rather than accidentally/luckily, (somewhat more) forces people to confront the question what do you really want instead of giving them an easy out by handing them a distracting answer, doesn’t require as much funding, biases toward action rather than running the risk of spiraling up into the meta
Dragon Army disadvantages—more vulnerable to strawmanning and skepticism because it is less coherent and clear, much more vulnerable to confusion or failure if I get hit by a bus because the models all live in my head and aren’t yet interactable, runs the risk of losing people who are impatient and feel like they’re lost in triviality, is less viscerally rewarding (jack of all tradesing, that is) than getting gold medals as a master, needs a longer runway/longer start time because it’s more explicitly about culture building and less about objective checkpoints that you can meet on the fly
(incomplete)
Note that I CANNOT STRESS ENOUGH that my models of Leverage are VAGUE AND PROBABLY WRONG and also note that I’m sleep-deprived and I am aware that this may not really answer your question.
Oh, also: AFAIK, Leverage is actually fairly low on precommitment, i.e. if someone were to want everyone to get together in the same room at the same time on a regular basis, they would have to go around and win the argument something like forty times, and at any time someone who’d previously been convinced could just say, “actually, never mind, I changed my mind and I have better things to do again,” and there aren’t any … initially consensual, eventually coercive? … structures in place.
Nothing, in short, to get people across the unpleasant valley except their own epistemics and willpower … no firm, unyielding scaffold that can be built such that others can rely on it. So, Leverage has the advantage of not having the failures of such a system (e.g. people getting trapped and wasting time), and Dragon Army has the advantages of having the benefits of such a system (Actual Reliability that doesn’t require inordinate upfront costs, the ability to develop an ecology of affordances over time upon a Schelling point of togetherness).
An excellent post from Slatestarscratchpad that sums up (I think) something like 85% of the fundamental disagreement that’s fueling the more heated clashes:
I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is “does this behavior seem normal or like a predictive red flag?”. In those cases, your lived experience directly influences your perception. Someone’s actions can seem perfectly fine to most people. But if some others experience spooky hair-raising flashes of their questionably abusive father or a bad ex, that’s evidence. The people who didn’t think anything was weird brush off the others as oversensitive, risk averse, or paranoid. Then those raising alarms think of everyone else as callous, imperceptive, or malicious. It’s not just people who don’t alieve the correct base rates. Certainly those people exist, though they’re much more plentiful on Tumblr than in person or on LW. It’s very non-obvious whether a strong reaction is correct.
Neither side can truly accept the other’s arguments. It’s a bad situation when both sides consider the other’s reasoning compromised beyond repair. That brings politics and accusations of bad faith on all sides. But there is a fact of the matter, and the truth is actually unclear. Anyone thinking at enough of a distance from the issue should have honest uncertainty. I suspect you’re particularly prone to refusing to let the conflicting experience of others be seen by your deep internal world-models, to strongly underestimating the validity and reliability of that type of evidence. That would cause what you say to be parsed as bad faith, which other people then respond to in kind. That would cause a positive feedback loop where your prior shifts even further away from them having useful things to say. Then you’d end up a frog boiled in a pot of drama nobody else is experiencing. I’m not sure this is what’s happening, but it looks plausible.
Strong endorsement of all of this.
I like this, and am curious if it caused anyone who was embroiled in the more intense discussions to change their mind or actions?
First, you seem to think that “Getting Useful Things Done” and “Be 99.99% Reliable” heavily correlate. The military is infamous for bloated budgets, coordination issues, and high rates of sexual abuse and suicide. High-pressure startups largely fail, and are well known for burning people out. There is a very obvious failure state to this sort of rigid, high pressure environment and… you seem unaware of it.
Second, you seem really unaware of alternate organizational systems that actually DO get things done. The open source community is largely a loose model of “80% reliable” components, and yet great things get built by these collaborations. Rome wasn’t built in a day, and neither was Linux.
Third, and most bluntly: I don’t think you have the slightest knowledge of Fault Tolerant Design, or how to handle Error Cases, if you would say something like this. I write software that can rely on its inputs working maybe 80% of the time. This is accounting software, so it is NOT allowed to fuck up on corner cases. And I do it just fine. 80% is perfectly sufficient, if you know how to build a system that fails safely.
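To make “fails safely” concrete, here’s a minimal sketch in Python (the row format and function names are invented purely for illustration, not taken from any real codebase): validate every input, quarantine anything that doesn’t parse instead of guessing, and post only the clean rows.

```python
from decimal import Decimal, InvalidOperation

def parse_amount(raw):
    """Parse a raw ledger field into a Decimal, or return None if it's unusable."""
    try:
        return Decimal(str(raw).strip())
    except InvalidOperation:
        return None

def post_entries(raw_rows):
    """Post only the rows that validate; quarantine the rest for human review."""
    posted, quarantined = [], []
    for row in raw_rows:
        amount = parse_amount(row.get("amount"))
        if amount is None or not row.get("account"):
            quarantined.append(row)  # fail safe: never guess at money
        else:
            posted.append({"account": row["account"], "amount": amount})
    return posted, quarantined

# Even if ~20% of rows are garbage, the books stay correct:
posted, quarantined = post_entries([
    {"account": "revenue", "amount": "125.00"},
    {"account": "revenue", "amount": "oops"},  # bad input is quarantined, not posted
])
```

The point isn’t this particular sketch; it’s that the failure path is designed up front, so an 80%-reliable input stream costs you throughput, never correctness.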
I think this makes you a uniquely bad candidate for this sort of endeavor, because the first iteration of this experiment is going to be running at maybe 80% reliability. You’re going to have a ton of bugs to iron out, and the first run needs someone who can work with 80%. And you’ve been pretty blunt that you’re inept in that area.
Fourth, your thresholds for success are all nebulous. I’d really expect testable predictions, ideally ones that are easy for the community to evaluate independent of your own opinions. It seems like the goal of this exercise should be to produce data, more than results.
All that said, I do value the focus on iteration. I think you will be prone to making more mistakes, and inflicting more unnecessary suffering on participants, but I do not think you have any sort of malicious intent. And with no one else really stepping up to run this sort of experiment… well, if people are willing to make that sacrifice, I’m happy to learn from them?
But I think you dramatically over-estimate your ability, and you’re selling short how badly the first version is going to go. There are going to be bugs. You are going to need to learn to deal with the 80% that you get.
And on top of that, well, the consequences for failure are actually worse than being homeless, since you’re also responsible for finding a replacement. That’s a really huge risk to ask people to take, when you yourself have absolutely nothing at stake.
I think your heart may well be in the right place, but the idea as currently conceived is actively harmful, and desperately needs to build in much better safety protocols. It also needs to be much clearer that this is an initial draft, that it will go badly as people try to figure this out, and that initial participants are going to be suffering through an unoptimized process.
Finally: You don’t have a fail safe for if the whole idea proves non-viable. As it stands right now, you kick everyone out but leave them on the hook for rent until they’ve run 3 replacement candidates by you. In the meantime, you enjoy a rent free house.
It really feels like it needs an “ABORT” button where the participants can pull the plug if things get out of control; if you turn out power mad; or if it just turns out a significant number of participants badly estimated how this would go.
The fact that you have nothing on the line, and no fail-safe / abort clause… really, really worries me?
TL;DR: Your plan is dangerous and you haven’t given nearly enough thought to keeping people safe. Scrap what you have and rebuild it from the ground up with the notion of this being a safe experiment (and I want to emphasize both the word “safe” and the word “experiment”—you should be expecting the initial version of this to fail at producing results, and instead largely produce data on how to do this better in the future).
Nah.
(Having exchanged half a dozen comments with cousin_it, I now recognize the pattern of a) you’re defaulting to the least charitable interpretation at every possible split point, b) many of your claims and conclusions are flat-out false, c) you’re incredibly confident that you’re correct about all of your assumptions and are including zero nuance or uncertainty, and therefore d) this thread will produce nothing of value. I feel no need to convince people who a, b, and c, especially those who are unable to distinguish object level standards from meta level ones. Contrast your post with jbeshir’s, for instance, which is also highly critical but in an entirely constructive way that doesn’t make the same mistakes.)
Yes, we have noticed the skulls. (http://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/)
Datapoint: I thought handoflixue’s comment was much more reasonable and less uncharitable than cousin_it’s opening comment was; in particular, the points about needing an explicit abort procedure sounded very reasonable and it makes me slightly worried to see you making a comment that implies you’re just disregarding them. (only slightly because of my personal trust in you and your abilities; I expect that people who don’t know you, will get much more worried)
EDIT: I wrote this comment before reading your reply to jbeshir’s comment; your response there further reduces my worry.
Kaj, I’m surprised. What do you think of this? Especially Ctrl+F “self-insert” and “horns effect”.
Not knowing the author, I can’t say much more than “someone freaked out”? I see mostly a strong emotional reaction, which looks to me similar to a bunch of other strong emotional reactions that people have had when they’ve pattern-matched things in the rationalist community to their stereotype of a cult, without actually understanding the community (or necessarily cults either).
Ah, now I see why some smart folks were okay with Duncan’s idea. They pattern-matched criticisms of it to criticisms of the rationalist community! That’s sneaky, even Scott fell prey to it, though he came around quickly (check his tumblr).
It seems like the only way “weird” groups can defend against such radicalization over time is by adopting “normie” ideas. I’ve been advocating that for a while, but I know it’s a hard sell here because many rationalists feel hurt by normies.
Well, what else can you say to a criticism that’s mostly an emotional outburst? That post was using every opportunity it could to interpret Duncan’s post in a maximally uncharitable light and turn stuff into ad hominems, such as “yes, dude, I too had a job in college”. I searched for the “self-insert” phrase like you asked me to, and it brought up a line where the author expressed not liking Duncan’s writing. What substantive point am I supposed to take out of someone’s literary preferences? (also the author mischaracterizes “A Weekend with the Legion”—to the extent that it’s a self-insert fic, it’s one of joining a rationalist group house not founding one, and I’m not sure where the “mary sue” thing came from)
For me personally, a lot of what Duncan wrote resonated with me a lot, in that I’ve long wished to live in a society arranged kind of like he described Dragon Army, and it seemed clear that he’d seen the same things and worked off a similar model. Whereas few of the criticisms seemed to understand the intuitions/emotional needs that I presume we’re both operating out of, so they ended up missing the mark. E.g. I’m totally willing to buy it when he says that he doesn’t actually want to be the leader, both because I’ve met him, and also because not wanting to be the leader is a major part of why I’m not trying to create a similar project myself now that I’ve read his post (that, and because it would be too difficult to explain to people without them pattern-matching it into cults).
It feels weird saying this to you, and please don’t take it too seriously, but if you feel an emotional need to live in a commune with salutes, push-up punishments and restrictions on criticism, have you considered that your emotions might be wrong (from an outside perspective)? For example, many of my emotions are wrong, that’s why I don’t text my exes while drunk.
No offense taken.
The things you mentioned seem to me more like incidental than essential features of the commune; also, I’m not saying that I would agree with Duncan on exactly everything regarding the design—for one, I thought Ender’s Game was an okay book but didn’t see what all the fuss about it was. :) But then again, it’s his project, and I’m sure that my ideal aesthetics wouldn’t be his ideal aesthetics either.
The core things that do appeal to me are… well, this is a little hard to verbalize, since like him I’m operating more off a system-1, pattern-matching basis rather than any explicit first principles. But things like agreement with the sense that the pendulum of modern society has swung a little too far with regard to individualism and commitment; a sense that there is genuine value in being part of a group where everyone is genuinely, entirely committed to the project and to each other’s welfare (“One for all, all for one”), where people are willing to try whatever weird things might work without needing to worry about what outsiders think; and generally having a strong supportive social structure that offers you help when you’re struggling, pushes you to become the best possible version of yourself when you might otherwise slack off, and provides frequent feedback on how you’re doing regardless.
I think I’d be much happier in a situation like that, rather than the current situation where it feels like I mostly have to figure out everything myself, and it’s a constant struggle to find allies for any project that would make things better and which I can’t pull off just by myself.
But sure, I’m open to the possibility that I’m wrong in this and such an environment wouldn’t actually be good for me, or that I’m reading too much into Duncan’s post and that the intuitions he’s operating out of are actually substantially different from the ones I’m having.
If the problem is lack of supporting structure in modern life, surely the answer is joining traditional institutions, not more costly and risky social experiments?
I think this depends on how much alignment you can expect to have with traditional institutions. Quakers let in gays and atheists, but the politics of the typical member grated; joining the Mormons would involve celibacy until God calls up the prophet and tells them that being gay is okay (which I cautiously expect in less than ten years) and lying about beliefs in the supernatural. Joining the military involves participating in ‘wars’ that I disagree with strenuously, and when I was the right age to do it “don’t ask don’t tell” was still official policy (and, I later learned from an acquaintance who did go to the Academy I would’ve gone to, being openly atheistic was seen as an invitation for hazing by some of the instructors).
I’m not inviting people to join the Mormons. The OP’s curriculum would be better covered by joining a gym, meditation group, public speaking club or graphic design course, which don’t have the problems you mention.
I brought up the Mormons because I seriously considered joining them (and rejected it for the above reasons).
I think you’re fundamentally misunderstanding the nutrient being sought out if you think that the list of four things you mention (individually or all together) would actually satisfy the relevant hunger.
I thought the point was learning skills and interacting with people. If the real point is filling a tribe shaped hole in your soul, I can only repeat my question to Kaj. Are you sure that yearning for a tribe is an emotion that serves your interests?
Given how yearning for a tribe is a “powerful, fundamental, and extremely pervasive motivation” (old paper, but later research has only served to further confirm the general notion), I would guess yes; for me personally, “being in a tribe” seems very much like the strongest unmet terminal goal that I have.
That seems like proving too much, since I don’t yearn for a tribe. Are you sure you aren’t confusing your social needs for a specific dream of fulfilling them?
A motivation can be “extremely pervasive” without being universal. (very few things in psychology are truly universal) You may not share the yearning, but I’ve certainly run into plenty of people who do.
That is possible, and I have made that kind of a mistake before, but if there’s an alternative way of fulfilling them I haven’t found it.
It seems to me like there are flavors of ‘interacting with people’ that require tribe-mates.
Having a tribe is one of my interests.
I think you misunderstand the point. The goal is not to develop skills, the goal is to create an emotional web of support that comes from being a bona fide member of a tightly-knit tribe. You don’t (normally) get that at a gym or a public speaking group.
Possibly excluding some religious communities, which I wouldn’t want to join because I’m not religious, I don’t know of any traditional institutions that would provide general life support. Schools have some support structures in place that are aimed at helping you do better at school, martial arts training supports you in becoming better at martial arts, etc. Which traditional institution is one that you can just join, and which is aimed at making all of its members become the best versions of themselves in all respects?
(By the way, I forgot to reply to this in the earlier comment, but I think that interpreting “start from the assumption of good faith when interacting with other members of the house” as “no criticizing the leader” is… not a particularly charitable interpretation.)
When deciding who to put in power and how much power to give them, the principle of charity is harmful.
It seems to me that institutions that claim to make you better in every way are always scams. The fact that a school will teach you only welding, and will give you a welder certificate in a certain number of weeks if you keep showing up, is a feature. If you join two or three institutions according to your interests, you’ll be fully booked in both self-improvement and social interaction, and it’s still less costly or risky than joining an authoritarian commune.
There’s healthy skepticism and then there’s twisting words wildly beyond any reasonable interpretation...
Also, the level of skepticism should be proportionate to the level of authority requested; it makes sense to be more skeptical the more power someone wants. But my reading of the original post agrees with Sinal’s, which compares the level of authoritarianism to that of a Boy Scout troop leader. The original post has stuff like the first rule of conduct for a Dragon being to protect themselves; it mentions that people can “hard veto” proposed experimental norms; people are free to leave the experiment if they wish. Duncan’s authority seems to be limited to upholding policies that were agreed upon by group consensus and running them for a limited time; he has mentioned in the comments that he can be removed from power using the kind of procedures one would expect, e.g. a majority vote. The specific examples of his “tyrannical” powers that were given were things like deciding that a specific meeting will be held on Tuesdays even though not everyone wants the meeting to be on a Tuesday.
The Boy Scout troop leader probably has more power over his scouts than Duncan has in the house, and I doubt we’d consider people obviously unsuitable to be scout leaders for the sin of suggesting that scouts should assume good intent in their dealings with each other.
You’re talking like joining this commune would be a huge, enormous risk, and I just don’t see that. Sure, there’s a risk, but it’s on the same order as joining any other commune or moving in with other roommates—you risk having a miserable time for a while if it turns out you’re not a good fit for each other, and then things may be inconvenient for a while as you look for a new place to live.
Personally I made the mistake of moving in with some wildly incompatible roommates at least once, and have also on other occasions lived together with other people who I’d strongly have preferred not to live together with. Yes, it sucked a lot and made me much more miserable than I probably would have been otherwise. But then I moved out and don’t think I’ve suffered any lasting consequences, and despite the unpleasantness I still don’t consider it a risk on the order of “has to absolutely be avoided”.
Agreed that this is a feature: sometimes one really does only want to learn welding. But if you want to learn dancing and everyone’s only teaching welding, with all the places that claim to teach dancing actually being scams… then that’s a major problem for you, and suggests that you’d get a lot out of it if someone did found a dancing school that actually taught dancing and wasn’t a scam.
I think claiming to teach skills that aren’t taught by any traditional institutions is fishy. (This isn’t an isolated demand; I’ve argued with CFAR folks that they should prioritize research into testing rationality, instead of jumping head-first into teaching it.)
Duncan’s project isn’t really about teaching skills, though.
Yeah, when we want to learn things beyond the expertise of a house member (such as when we learned to use firearms during the weekend experiment) we bring in professional help.
The post says it will help you achieve three goals, of which self-improvement is the most important, and gives a list of 15 skills it will help you learn (many of which are fishy by my standard above).
I think what you’re referring to is something like the Holy Grail of institutions. So if someone claims that they’ve found the global optimum of institutions, the right reaction should be one of heavy skepticism. It’s not wrong to seek the global optimum, but when someone proposes that it exists in some well-explored territory based on a somewhat simple model, the argument they should present for it would probably look something like 1) We overlooked some seemingly trivial, but serious details that would have fixed the major issues we had previously and/or 2) Iterating on this idea for a while will not result in diminishing gains for a considerable time.
What we have in society right now is a bunch of local optima for specific needs. I think we should be prepared for the scenario in which the global optimum looks weird, and is composed of a hodgepodge of various fixes and hacks and specific set-ups to meet different requirements for different people. And I know this looks ugly, but that’s typically what solutions look like when they’re the output of optimization processes. I consider a single hierarchical institution to be a simple model, and therefore consider it unlikely that such an ambitious goal will be reached using such a solution.
So based on my above model of institutions, I place low probability on a solution that consists of a simple, already well-explored model, or one without a considerable amount of detail tacked on through consistent iteration and optimization. Right now I think this experiment will have to be run with significant fail-safe mechanisms in place, and with outside observation, so that this process can actually take place.
Isn’t starting from a simple model and then iterating and optimizing (i.e. exactly what Duncan is proposing) the only way to get to that point?
It’s not obvious to me that Duncan is proposing that. See my comment here. To me, it seems more like iterating and optimizing towards the optimum would get you something far from both the extremes of the libertarian egalitarian model and the one-person-in-charge-of-everything model.
I mentioned in another comment that Duncan’s role seems to be “upholding policies that were agreed upon by group consensus and running them for a limited time”; this does seem like it’s pretty distant from both rampant individualism and one-person-in-charge-of-everything to me.
I’m not sure of how to interpret your referenced comment; you seem to be talking about the “old model” being “cults”, but I don’t know what you mean by cults—I interpret a “cult” to be something like “a small group rallied around a charismatic leader with absolute authority”, but I don’t think that has been the predominant mode of social organization at any point in history?
I interpret “cult” as applicable to both small and large groups and not dependent on whether the leader has charisma or not (It could also refer to small tribes with chieftains, dictatorships, absolute monarchies, etc.). And I think in this regard it has been the predominant mode of social organization throughout history.
But after seeing Scott’s “on fourth thought” I have been more convinced that Duncan has been moving in the direction of placing limits on his power and making sure the appropriate safe-guards are in place, which has updated me away from seeing the pendulum as swinging too far in the opposite direction. I think the question remains whether or not continued updates and iterations will involve further limitations on his authority.
Sure. You are describing a church group, or maybe an entire sect/denomination (see e.g. pretty much all early Protestant movements).
Is it a good idea? As usual, it depends :-/ Sometimes it works out and sometimes it doesn’t. Sometimes you spend a safe and content life doing good work, and sometimes you find yourself killing evil abominations like Catholics.
Besides, such groups evolve and usually not in a good direction. Becoming bureaucratic and ossified is relatively harmless, but being taken over by sociopaths (as per ribbonfarm) can be much worse.
Ok. If you don’t mind, I’ll use you as an interpreter for Duncan, since he doesn’t answer questions much. Can you explain why the idea of a group house with salutes, push-up punishments, restrictions on criticism etc. appeals to you? Is there any evidence that it would help learn skills more effectively, compared to taking a class? Why do you feel that the obvious dangers aren’t dangers, apart from knowing Duncan personally (many real world tyrants were reportedly charming in person) and seeing the list of excuses that’s identical to that of every other cult?
I resisted playing the fallacy game with Duncan because he’s clearly just parroting stuff, but I expected better from you. Okay, let’s go. “You’re being emotional” and “you’re pattern matching” are examples of the bulverism fallacy. Your turn.
OK. I’m even more surprised about you now but let’s drop this.
This person’s post, while containing some overlap with the more true and useful criticism here, is also not the sort of thing I expect people to cite on LW and not, I think, a useful entry in the back and forth here.
On the other hand, the difference in our levels of endorsement of it explains a lot about why our interaction went south in a hurry.
Quoting Qiaochu:
I was tentatively willing to give you some benefit of the doubt even though I don’t know you but I’m really disappointed that you feel the need to score points against a rationalist-adjacent posting to her Tumblr about how your post looks to her from her outside vantage point. I brought a similar-amount-of-adjacent friend to the seder and it freaked her out. Rationalist shit looks bizarre from a couple steps away. You do not have to slam my friend for not being impressed with you.
That’s kind of unfair, considering the sheer amount of point-scoring going on in the original post.
Fair point. I will edit the above to remove point-scoring criticism; if this person wanted to be exposed to it, they would’ve posted here directly. I’ll ask you to leave your comment so it’s clear what originally occurred.
That being said, they certainly have no qualms about tearing into me. Like, my response to them was not a response to “I am unimpressed” or “I have a negative reaction to this,” and I think it’s a little disingenuous or unfair of you to summarize their content thusly. It’s … an asymmetric expectation of charity? Holding a double standard? Or something like that. I’d hope you’d offer feedback to them similar to what you said to me here, to see how they respond.
I know her and she has earned some charity from me. You’re a stranger soliciting a line of credit. Also, her task is “opine on Tumblr” and yours is “benevolent dictatorship”. If you want me to convey to her that your feelings were hurt I could do that for you, I suppose.
It’s less that my feelings were hurt (they were, a little, but I’ve developed a pretty thick skin around “strangers are wrong about me”), and more that you’re saying, to me, “hey, please don’t be uncharitable or overly critical or focus on point-scoring,” and I think the point-scoring exhibited in that post would cause me, in your shoes, to make a symmetric point to my friend. It’s a consistency thing, of supporting the norms I want to see in all places, ignoring partisan or loyalty lines (being willing to call out my allies as much as I’m willing to call out a stranger or an enemy).
I guess if I were to ask you to convey a message, it would be “this person thinks you’ve jumped to unfounded conclusions, and wonders what odds you’d put on ‘I might be wrong.’”
I don’t really see the situations as symmetrical or calling for identical norms.
Thanks. As Lumifer has pointed out, I have become more defensive in the past 36 hours, but I claim it’s almost entirely limited to the two individuals who have shown themselves to be deontologically hostile and extremely overconfident in their models. There’s obviously wiggle room in there to say “Eh, even given that, Duncan, I think you’re overreacting,” but if so, it’s because I feel that after a hundred comments and a multithousand word post (that I didn’t have to make at all, in the first place) I deserve some credit à la I’ve clearly demonstrated willingness to engage positively with criticism and update publicly and admit wrong and so on and so forth (and therefore don’t like comments that presuppose me not being all those things).
I have absolutely no confidence that I’m correct in my assertions. In fact, I was rather expecting your response to address these things. Your original post read as a sketch, with a lot of details withheld to keep things brief.
The whole point of discussion is for us to identify weak points, and then you go into more detail to reassure us that this has been well addressed (and open those solutions up to critique where we might identify further weak points). If you can’t provide more detail right now, you could say “that’s in progress, but it’s definitely something we will address in the Second Draft” and then actually do that.
I’ve said “that’s in progress, but it’s definitely something we will address in the Second Draft” all over these comments. You jumped into the discussion two days in and just … didn’t bother to read? I feel defensive and upset over this, because a big part of doing this whole thing out in the public view was to build credibility as a good actor who listens and updates, and I feel like you just completely ignored all the evidence of that as you started to write your critique.
And in that critique, you used a bunch of phrases like “I don’t think you have the slightest knowledge of Fault Tolerant Design” and “you haven’t given nearly enough thought to keeping people safe” and “you yourself have absolutely nothing at stake” and “you seem really unaware of X” and “you’re a uniquely bad candidate” and “the idea as conceived is actively harmful” and on and on and on. You cannot pretend that this language does not indicate strong confidence. Words have meaning.
And most of those things presuppose stuff about my internal state, or my experience, or actions I have or have not taken, and assert those things as fact or extremely likely probability, rather than putting in any kind of hedge or owning “I could be wrong about this” or whatever. You take all sorts of things that you cannot possibly know, and instead of asking about them, build up a structure in which they’re taken as given and Everything Is Bad. You do say “it seems to me” a few times, so some credit is due there, but overall, your post was overwhelmingly assertive and aggressive and lecturing/condescending, in stark contrast to the vast majority of the critical feedback (and in stark resemblance to the few comments I’ve responded to with hostility).
You did not come across as trying to identify weak points and then find out what I thought about them; you came across as trying to tell me that I’m bad/dumb/evil.
For the record: all of your points are under consideration, many of them have been completed to satisfaction within the group, and those which remain are either a) highlighted elsewhere in the comments here by me saying “Yeah, that’s a solid point, we should do something about that,” or b) have, on reflection, been ranked as low-priority.
In the absence of a sound rebuttal to the concerns that I brought up, you’re correct: I’m quite confident that you are acting in a way that is dangerous to the community.
I had, however, expected you to have the fortitude to actually respond to my criticisms.
In the absence of a rebuttal, I would hope you have the ability to update on this being more dangerous than you originally assumed.
Bluntly: After reading your responses, I don’t think you have the emotional maturity necessary for this level of authority. You apparently can’t handle a few paragraphs of criticism from an online stranger with no investment in the situation. Why should I possibly expect you to be more mature when dealing with an angry participant whose housing depends on your good will?
On the off chance that you’re actually open to feedback, and not just grandstanding to look good...
1) I apologize if my tone was too harsh. You are attempting something very dangerous, on a path littered with skulls. I had expected you were prepared for criticism.
2) Commit to posting a second draft or addendum, which addresses the criticisms raised here.
3) Reply to my original post, point by point. Linking me to other places in the thread is fine.
Screw you; it’s not “on the off chance,” it’s been overwhelmingly demonstrated and backed up by multiple people in this thread. You’re attempting to highlight “emotional maturity” in a way that means “I want you to let me be socially dominant over you, despite the fact that I’m violating norms of good faith and discourse.”
In fact, what I have is sufficient emotional maturity to notice when I’m being bullied, and not roll over, even if it’s somewhat socially frowned upon for the bullied to fight back openly. i.e. I reflectively endorse both the calmness and openness with which I’ve reacted to the majority of commenters, and the degree to which I have risen to and matched your hostility rather than just letting you punch unfairly.
I’ll do 3) if and only if you rewrite your original post to include a generally normal amount of epistemic uncertainty/humility for claims made on LessWrong about a person you don’t know well, after that person’s demonstrated willingness to be transparent and to update.
And just to be clear: I don’t give a shit about social dominance. I’m not trying to bully you. I’m just blunt and skeptical. I wouldn’t be offended in the least if you mirrored my tone. What does offend me is the fact that you’ve spent all this time blustering about my tone, instead of addressing the actual content.
(I emphasize “me” because I do acknowledge that you have offered a substantial reply to other posters)
I don’t want to mirror your tone because I think your tone is both socially corrosive and epistemically unsound. I’ve at least in part been fighting you so hard because I want to publicly defend a stance that the way you’ve acted in this thread is unacceptable. Saying “I’m just blunt and skeptical” is not a complete description of the posts you’ve made; others in this thread have been blunt and skeptical without jumping to conclusions, lecturing, and being wildly overconfident that their map is accurate enough to justify throwing excrement around.
I think you’ve fallen far short of the standard of a place like LW in this thread, and I want that opinion known to anyone trying to model me.
You seem to feel that publicly shaming me is important. Should participants in your group also expect to be publicly shamed if they fall short of your standards / upset you?
With the caveat that I’m attempting to shame the way you’re going about engaging in discourse much more than I’m shaming the core of you as a person (really, you’re the one operating on the level of the fundamental attribution error within this particular thread; look in a mirror)—yes, absolutely. Part of having standards is making it socially unacceptable to fall grossly short of them.
That’s modified by things like the “saving face” section above, and the clear intention for all of us to grow and improve, me included—none of us are getting it right on the first try, and you have to scaffold growth and reward, with gentle affirmation, the people who are willing to try to change for the better.
It’s further modified by the fact that people who don’t like these standards can simply not join, and I’ve spent now well in excess of 100 hours making my models crystal clear to those who are considering opting in (so that their decision can be fully informed).
But yeah—anybody who’s falling as far short as you absolutely deserves to be called out for it, and given a choice between “do these concrete things differently” or “lose social points.” Since you’ve repeatedly refused to stop jumping to conclusions and ignoring evidence that I’m acting in good faith and am not an idiot—since you’ve refused to do concrete things differently—yeah, I wholeheartedly endorse you losing social points, and people updating the way they assume interactions with you will go as a result.
I’ve changed my tone and apologized.
You’ve continued to dismiss and ridicule me.
You’ve even conceded to others that I’m a cut above the “other trolls” here, and have input from others that I’m trying to raise concerns in good faith.
What more do you want?
Alright. As a test of epistemic uncertainty:
I notice that you didn’t mention a way for participants to end the experiment, if it turns out abusive / cult-like. How do you plan to address that?
I think the problem here is the same as the problem of enforcing repayment of loans. If someone borrows a bunch of money, and then later has no money to repay, how should society respond?
Obviously, the thing is not simply “demand money.” Similarly, though, there can’t be no standard of requiring recompense, because that sets up a really bad incentive.
So my current plan is (in addition to really heavily highlighting that people need to think this through/talk with their advisors/visualize failure/ensure they have a buffer sufficient for likely amounts of damage) to set up something like the following norms:
If you conclusively determine that you need to drop from the experiment, no one is allowed to argue or convince; this is referred to as “rule-one-ing out,” and is a thing that we will explicitly practice in small doses in the hope that this will transfer over to larger spaces.
If dropped, you retain full access to kitchen, bathrooms, lawn, living room, etc. but agree to physically avoid house activities (and those house activities will e.g. change to not use shared rooms that you live in). You’re also welcome to leave, but maintain the same sort of “normal” financial obligation that people have when they suddenly vanish, i.e. you’re still paying for your slot for a little while.
“A little while” means that you agree to put forth good-faith effort to find a viable replacement. I said “three potential replacements” as an initial guess to point toward “it’s harder to replace yourself here than in normal houses; there should be some limit to your obligation if we say ‘no’ to your first two choices; you’re definitely not on the hook forever.” It’s possible that the answer should be “two” or something else.
In the event that this fails, something like “you’re on the hook, financially, for rent payments in the 2-6 week window from the time you drop,” which seems like a non-Draconian and fairly boilerplate norm (“this month, and next month too if ‘this month’ ends really soon”).
In the event that this fails, I was planning to just … secretly and quietly absorb the blow? This is made worse by your demand that it be explicit (some things are better as back pocket options), but whatever—few people will see this part. The idea is that OBVIOUSLY (unless you’re starting from the presumption that Duncan is evil) you have to make accommodations for a person who is (by the time they reach this step) both emotionally and financially exhausted/compromised, and part of the whole point of having a large community is that it creates flexibility to absorb blows like that (the damage is spread out enough that it becomes manageable on an individual level).
So at that point, yeah—people could just straight-up defect on the house, and the idea was NOT to blare that from the rooftops, because now there’s a clear incentive for defectors to just defect and impose costs on everyone else. That would’ve been better left as an obvious implicit norm that’s universal among decent people.
On a broader, whole-house level, we’re having open retrospectives every week, with opportunities for both nonymous and anonymous feedback and discussion. I put the odds of this going that far south in under six months at far less than 1%, but in the event that a majority of people decide the thing is bad, it’ll be at most six days before they have a chance to all say so, at the obvious Schelling point for coordination, at which point there’ll be a clearly decisive mass of consensus and I’ll just—be overruled. This is further made more likely to happen if-it-needs-to-happen by the fact that elsewhere in the thread I’ve committed to instituting a requirement that people check in weekly with outside advisors, and by the fact that there are multiple strong/iconoclastic/independent/healthily self-protective personalities in the mix who would have little to no fear in openly opposing me if they needed to, and by the fact that there’s a known second-in-command who’s a good coordinator in the event that things need to happen without me being looped in (noble coup).
In short, the obvious stuff.
I notice I am very confused as to why you keep reiterating actual talking points from actual known-dangerous cults in service of “providing evidence that you’re not a cult.”
For instance, most cults have a charismatic (“well known”) second-in-command who could take over should there be some scandal involving the initial leader. Most cults have written thousands of words about how they’re different from other cults. Most cults get very indignant when you accuse them of being cults.
On the object level: Why do you think people will be reassured by these statements, when they fail to differentiate you from existing cults?
Stepping up a level: how much have you read about cults and abusive group dynamics?
On the object level: because a plurality if not a majority of actual, real humans have indeed been reassured by them, including some who were open critics and said things like “I traveled 50% of the distance toward ‘this is a good idea’ [just from this post].” It’s worth noting that I’m not going to refrain from saying true things that cults have also said; reversed stupidity is not intelligence and the thrust of this post was never “differentiate myself from cults,” it was “here’s a thing I want to try.”
On the discourse level: still jumping to conclusions left and right. “When Duncan said well known, he must have meant charismatic, obviously.” False—Eli Tyre is many, many good things, but “charismatic” is not usually a compliment given to him. Furthermore, I note that you decided to ignore all of the other object-level content in favor of picking one nit (based on false assumptions), so I’m taking that as “you had nothing good to criticize in that other stuff, and so you decided not to say anything at all,” i.e. you’re unable to say “good point” and update incrementally.
Stepping up a level: since you’re inclined to view everything I say in the worst possible light and to leap uncharitably to conclusions, I claim that I’m justified in theorizing that literally no answer would’ve satisfied you (had I said 10 hours, you’d have been smugly dismissive of my lack of research; had I said 1000, you’d have said ‘well, you obviously weren’t paying attention’), and that it was a bullshit question to begin with.
We’re done; I anticipate that other skeptics in this thread (like decius and lumifer and deluks and taygetea, for example) will provide me with the overwhelming majority of the value you might offer, and at a fraction of the cost in you’re-doing-a-bunch-of-the-things-the-sequences-exist-to-warn-against.
Also, as far as “we’re done” goes: I agreed to rewrite my original post—not exactly a small time commitment, still working on it in fact. Are you seriously reneging on your original agreement to address it?
See, now you’re the one leaping to conclusions. I didn’t say that all of your talking points are actual talking points from actual cults. I am confused why even some of them are.
If you can point me to someone who felt “I wrote thousands of words” is, in and of itself, a solid argument for you being trustworthy, please link me to it. I need to do them an epistemic favor.
I was using “charismatic” in the sense of having enough of it to hold the group together. If he doesn’t have enough charisma to do that, then he’s kinda worthless as a commanding officer, neh?
Your claim is false. I wanted to know at what level to hold this conversation. I legitimately can’t tell if you’re waving a bunch of “this is a cult” red flags because you’re trying to be honest about the risks here, because you don’t realize they’re red flags, or because you’re playing N-Dimensional chess and these red flags are somehow all part of your plan.
Can you elaborate on the notion that you can be overruled? Your original post largely described a top-down Authoritarian model, with you being Supreme Ruler.
How would you handle it if someone identifies the environment as abusive, and therefore refuses to suggest anyone else join such an environment?
You discuss taking a financial hit, but I’ve previously objected that you have no visible stake in this. Do you have a dedicated savings account that can reasonably cover that hit? What if the environment is found abusive, and multiple people leave?
Anyone entering your group is signing a legal contract binding them to pay rent for six months. What legal commitments are you willing to make regarding exit protocols?
I notice that you are unusually unable to notice yourself jumping to conclusions. As a challenge, can you find the conclusions you’re still jumping to, above, without curiosity or caveat? Note the plural on “conclusions.”
An excellent question whose answer I’m interested in exposing to literally anyone other than you, the troll, and cousin_it. Also, a question that has been openly and actively discussed and is not yet fully finalized, but boils down to “pretty close to the obvious stuff about voting majorities.”
I am not and have not at any point required that “people should proselytize this, and encourage others to join.” So, I wouldn’t object or find it unreasonable if someone didn’t encourage others to join.
You’ve previously talked out of your butt without ever expressing curiosity as to my visible stake in this. So, repeat my answer to 1: a fine question, which everyone is encouraged to feel curiosity about, and which I’d be motivated and eager to discuss with the potential participants and everyone except you, the troll, and cousin_it.
Similarly, an excellent question that I don’t think is any of your business, though I continue to endorse the fact that I’ve voluntarily made it the good 97% of LessWrong’s business. And I know this is giving away part of the answer, but you just assumed that people would be signing lease agreements with me rather than with the owner of whatever house we rent (and therefore that I would have some fully controlling role in determining exit protocols, rather than simply being a coordinator and a negotiator).
I used the word visible to make it clear that there might be some stake which is not visible to me. If you have made your stakes visible in this thread, I’ll admit I missed it—can you please provide a link?
Furthermore, locking all of this into place in formal language was not a thing I was going to do by myself, but rather was going to be a collaborative, consensus-based process engaged in by the group as a whole, which is obvious if you look at all the other places in this thread and in the original post where I say that we’re going to discuss and iterate and figure things out together.
Or, for example, by the fact that I chose Dragon Army as the model, and not (as has come up elsewhere) Salamander Army.
You shouldn’t quote Scott for support, because he just wrote this:
Link
First, thank you for writing the post so fully and readably—it is really impressive! And I hope you do go on to do this, in whatever way you decide upon. But even if the setup were completely safe (which I believe it is) and the results were exactly as intended, in the most useful and generally good way, I wouldn’t join.
Because I think that when people become parents, they suddenly find themselves in a world that is much more uncertain. You can’t reliably say that you will sleep through the night, for example, even when the kid mostly does. And this is already hard enough to get used to—I know from experience—and it is also hard to begin anew (though this might be less so for men). Imagine having actually trained yourself to be 100% in control of what you do, or even having let other people know that you are that kind of person. It’s just not robust.
Thanks for the comment. This is unique among perspectives given so far, and I liked seeing it a lot.
Reading the comments… well, this escalated quickly.
I can imagine this going either horribly right or horribly wrong. So I would appreciate it if a group of volunteers actually does the experiment, instead of everyone offering their preferred analogy for what should happen. Preferably with good safety mechanisms, of which I can imagine two, already mentioned in this debate:
(1) Give members a mandatory time off, once in a while, to spend with their friends outside the “Army”. Not just a weekend, but a full week, once in a while.
(2) If possible, it would be good to reduce the financial impact of leaving the group as much as possible. In a perfect world, there would be none. But of course, if you want to live in the same house, that costs money. It would be nice if the group could somehow collect extra money, as insurance, to allow people to leave without financial consequences. Perhaps make everyone pay 10% or 20% extra for the house?
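A rough sketch of how such a buffer might accumulate (all the specific numbers here—rent per slot, surcharge, number of residents—are made up purely for illustration, not taken from the proposal):

    # Hypothetical numbers, purely for illustration.
    rent_per_person = 700   # monthly rent per slot, in dollars
    surcharge = 0.15        # everyone pays 15% extra into a shared insurance pool
    residents = 10

    monthly_contribution = rent_per_person * surcharge * residents

    for month in range(1, 7):
        pool = monthly_contribution * month
        print(f"Month {month}: pool ${pool:,.0f}, "
              f"covers {pool / rent_per_person:.1f} months of one vacant slot")

Even a modest surcharge like this would cover a departing member’s slot within the first month or two, which would remove most of the financial risk of leaving.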
There is always a tension between freedom and commitment, and between individual freedom and group cooperation. It seems generally good to err on the side of freedom, because people in positions of power often have a bias in favor of less freedom (for others, of course), so this is how we balance it. On the other hand, akrasia—almost a proverbial trait of wannabe rationalists—is often an inability to follow one’s own commitments. That’s already damaging for individuals, and it makes group activity almost impossible. It would be nice to be able to overcome this, and to enter high-commitment situations (with limited scope, for limited time). Otherwise, we lose a lot of potential.
I can imagine myself benefitting from some kind of commitment enforcement, and rational life coaching in general. Of course, the devil is in the details. That’s where things can go wrong easily. But if we can create enough safeguards, I support trying this, because there is so much to win.
A possible approach could be to select in advance two or three people trusted by the rationalist community as supervisors of the project. The supervisors would not participate in the project directly, but would have regularly scheduled meetings with members, individually, outside of the project, where the members could provide their opinions, and after hearing all of them, the supervisors would post an anonymized summary report on LW.
This is all generally sensible. +1
EDIT: Except for the part about posting an anonymized summary report on LW. It’s entirely reasonable to have outside advisors and supervisors (in the sense of “well, if the thing’s as good as I say it’ll be, then I have no reason to want to hide”). However, it’s silly to pretend that the house grants LW any kind of oversight, or specifically seeks LW’s approval—I posted here because I thought LW would be a) mildly interested and b) would, in exchange for the mild interestingness be willing to provide some solid, concrete criticism, but that’s pretty much as far as it goes.
That reminds me of an event during a retreat where a cake couldn’t get baked, because the chocolate that had been brought specifically to bake the cake was consumed beforehand. It was even baking chocolate.
It seems like good cooking or baking leads to people buying specific ingredients and it’s bad if they can’t count on those ingredients not being consumed before the planned meal.
Yeah, I think notes saying “do not eat” will suffice; the key is just to get people to use that coin only when it’s for a specific plan.
You might also want a mechanism to handle “staples” that individuals want. I have a few foods / ingredients I like to keep on hand at all times, and be able to rely on having. I’d have no objections to other people eating them, but if they did I’d want them to take responsibility for never leaving the house in a state of “no X on hand”.
The food policy strikes me as one of the more trivial and unimportant parts of the proposal. I’m not saying you’re taking it too seriously—I think that shared living spaces should have clear rules about who gets to eat what. It’s just that this particular food policy seems easy to change without changing the core “authoritarian” structure of the Dragon Barracks.
Funny story by the way, I really like it.
To add to the story, the person who wanted to bake the cake had beforehand built the oven for baking it out of parts such as an old washing machine.
Concerns about your philosophy
1) You focus heavily on 99.99% reliability. That’s 1-in-10,000. If we only count weekdays, that’s 1 absence every 40 years, or about one per working lifetime. If we count weekends too, that’s 1 absence every 27 years, or about 3 per lifetime (a quick sanity check of this arithmetic appears after this list). Do you really feel like this is a reasonable standard, or are you being hyperbolic and over-correcting? If the latter, what would you consider an actual reasonable number?
2) Why does one person being 95% reliable cause CFAR workshops to fail catastrophically? Don’t you have backups / contingencies? I’m not trying to be rude, I’m just used to working with vastly less fragile, more fault-tolerant systems, and I’m noticing I am very confused when you discuss workshops failing catastrophically.
3) Numerous open source programs have been written via a web of one-shot and low-reliability contributors. In general, there’s plenty of examples of successful systems that tolerate significantly more than 0.01% defection. Could you elaborate on why you think these systems “close the loop”, or aren’t destroyed? Could you elaborate on why you think your own endeavors can’t work within those frameworks? The framing seems solidly a general purpose statement, not just a statement on your own personal preferences, but I acknowledge I could be misreading this.
4) You make a number of references to the military, and a general philosophy of “Obedience to Authority”. Given the high rate of sexual assault and pointless bureaucracy in the actual military, that seems like a really bad choice of role model for this experiment. How do you plan to avoid the well known failure states of such a model?
5) You raise a lot of interesting points about Restitution, but never actually go into details. Is that coming in a future update?
6) You seem to acknowledge that you’re making an extraordinary claim here when you say “I’ve noticed the skulls”. Do you think your original post constitutes extraordinary proof? If not, why are you so upset that some people consider you suspect, and are, as you invited them to do, grilling you and trying to protect the community from someone who might be hoodwinking members?
7) Do you feel comfortable with the precedent of allowing this sort of recruiting post from other people (i.e. me)? I realize I’m making a bit of an ask here, but if I, handoflixue, had written basically this post and was insisting you should trust me that I’m totally not running a cult… would you actually trust me? Would you be okay with the community endorsing me? I am using myself specifically as an example here, because I think you really do not trust me—but I also have the karma / seniority to claim the right to post such a thing if you can :)
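As promised in point 1, here is a minimal sanity check of the reliability arithmetic (the weekday count, day count, and lifetime lengths below are my own rough assumptions, not numbers from the original post):

    # A minimal sanity check of the 99.99%-reliability arithmetic in point 1.
    # Assumed numbers: ~260 weekdays/year, 365.25 days/year,
    # a 40-year working lifetime, an 80-year lifetime.
    failure_rate = 1 - 0.9999  # one miss per 10,000 opportunities

    years_per_miss_weekdays = 1 / (failure_rate * 260)
    years_per_miss_all_days = 1 / (failure_rate * 365.25)

    print(f"Weekdays only: one absence every {years_per_miss_weekdays:.0f} years, "
          f"~{40 / years_per_miss_weekdays:.1f} per working lifetime")
    print(f"All days: one absence every {years_per_miss_all_days:.0f} years, "
          f"~{80 / years_per_miss_all_days:.1f} per lifetime")

This roughly reproduces the 40-year and 27-year figures quoted in point 1.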
I note for others reading this comment and wondering why it hasn’t been addressed that I’ve ceased replying to handoflixue and a couple of other posters on a policy level, for reasons surrounding norms of discourse, strawmanning, epistemic humility, presence or absence of good faith, etc. It’s possible that the above contains good questions or insights; if someone else chooses to repost/re-ask/rephrase sections of this, I’ll likely respond to them.
http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
I want to publicly express my strong support for this experiment/meta-experiment.
I think that my support is particularly noteworthy as I’m presently a core member of a different taking-each-other-seriously co-living experiment that is profoundly different in its philosophy. (Mine is not in Berkeley, nor rationalist.) Therefore some people might assume that I would be opposed to Dragon Army Barracks.
Things in common between the experiment I’m part of and Dragon Army Barracks:
is “high-commitment, high-standards, high-investment”
is trying to actually make & achieve something together
is addressing the unanchored, abandoned loneliness thing
has consciously explicated commitments and assumptions
is intended to produce a high-level of consistent excellence and ability to effectively collaborate
Things that are different:
We’re very far from authoritarian or hierarchical. Although we’re also not egalitarian, consensus-based, or even democratic per se… but we have essentially zero of telling-other-people-what-to-do
Our basic collective navigating framework is [Kegan-5 / fluid mode / post-rational], rather than [Kegan-4 / systematic mode / rational] (good summary of this distinction)
Our focus is almost entirely on the meta-level of building the new cultural platform we’re building. We don’t have any expectations of each other on the levels of specific object-level projects or explicit behavioral norms (aside from ones necessary for the house’s function)
I think that these differences are core to why I am part of this project that I’m part of, and why I consider it to be the most valuable investment I could be making with my time and energy. I am, therefore, non-Berkeley-residence aside, not going to be applying to DA. As I said above though, I strongly support Dragon Army Barracks as an experiment and potentially as an ongoing resource to individual and collective growth.
Reasons why I think that DA is a good idea:
Expected value of high amounts of worthwhile object-level output. As Sebastian Marshall says, “the gains made from living more purposefully are forever—the time you’ve spent well remains well-spent even if you fall off for a while sometimes. Most people don’t even try, which is why most people don’t succeed.”
I expect it will also produce a lot of developmental progress for people involved; that if you were to be able to sort rationalists by amount of growth in a year, the Dragons would all be in the top quartile, and would occupy many of the top 10 slots. This, even if the experiment were to end after 6 months.
The DA Barracks is an intervention that is attempting to produce change on a very fundamental level of the system that is a group house. This is a powerful leverage point (see Donella Meadows’s article… I would say this is around a 2 or 3, and most group houses have only done mild experiments at the 4-6 level.)
I agree with and/or resonate with the six points that Duncan makes in Section 2 of this document.
The project-level value of learning here is also very high: this will greatly inform future experiments, whatever their leadership basis.
If I had kids, I would absolutely sign them up for any summer camps or classes Duncan was running. I think the amount of power he would have in relation to them would be similar to the amount of power he’ll have in this situation.
A final reason is this: I think that we as humanity need to rapidly make progress on being able to effectively coordinate in non-hierarchical ways, which is what the project I’m part of is about. Relatedly, humanity is currently kind of mediocre at doing this in many contexts. Therefore, if non-hierarchical projects aren’t emphatically directed towards solving that challenge itself, I expect them to be outperformed by projects that are leveraging existing understanding about how to coordinate effectively in hierarchical ways. I.e., in this case, Dragon Army Barracks.
I really, really wish Kegan levels didn’t come in an order, so a claim to be at a higher Kegan level than someone else didn’t look so starkly like a claim to superiority. It’s turning me off even trying to take them seriously, because everyone who uses them looks like they’re just self-aggrandizing to me.
I’m totally with you in wishing that Kegan levels weren’t getting socially entangled with claims to superiority!
...but that can’t be achieved in the way you describe: they would be a fundamentally different thing if they didn’t come in the order they do. It’s not a personality typing system, it’s a model of human development over time. Probably some people who are talking about them are self-aggrandizing; people are known to do that with just about everything they can get their hands on.
I suspect that your heuristics about not trusting people who brag about their Kegan levels are probably decently good heuristics, since that kind of bragging could reasonably be expected to be ineffective in just the way you’re describing here.
I first learned about the CDT model from a conversation I had with someone who used to work with Kegan, and who readily noted that he was not himself consistently operating out of stage 5. Robert Kegan has said that about himself too, which I found surprising and originally interpreted as being a failure mode in the opposite direction—false humility or something. But now it strikes me as not that unlikely. There’s a big difference between being able to recognize abstractly (or in others) what it means to be subject to one’s own interpretations & ideologies, and being able to actually not do it.
There’s an unfortunate phenomenon here, where the value of the concept gets diluted because the people who are finding the Kegan models helpful but aren’t claiming to be at higher Kegan levels than others… are harder to notice.
Anyway, I realize that I may sound like I’m making a superiority claim here myself. I will address that directly, kind of like Duncan is doing re: skulls above.
My understanding—based more on reading things like this than Kegan’s own work—is that the “fluid mode” (~=K-5) does have capabilities that the “systematic mode” (~=K-4) does not; much like multivariate calculus can be used to re-derive the equation for the volume of a sphere, but not the reverse. Is multivariate calculus superior to sphere equations? In functional senses yes, but not in a social status way. And also not in all domains! It’s certainly slower if you just need to calculate the volumes of a bunch of spheres.
I’ve spent a considerable amount of time over the past year working to develop the ability to operate in the fluid mode, and I think that that makes a lot of sense for me and many other people, but I don’t think that that’s highest priority for everyone right now. Hence my strong support for Dragon Army.
I like the paragraph “my understanding” a lot. In particular, while I think I have some limited, flickering access to K5, I notice that operations which come out of being solidly K4 often cause me to outstrip/outperform people who are entirely in K5, which seems to me to be something analogous to “I’m successfully calculating the volumes of a bunch of spheres and you’re just stuck there mired in re-derivation.”
i.e. relative strengths in different domains.
I’m not sure what it means to be entirely K5. To me the phrase sounds like Chapman’s description of the postmodernists who are at K3 and tried to skip K4 entirely and are without any real access to the ability to use a system.
Fair. “People who overwhelmingly operate from a thing where I’m comfortable applying the label K5,” where overwhelmingly means 90+% and comfortable means 90+%.
How do you filter for people who are Kegan-5 when you are seeking to accept members?
We don’t! The individual members themselves aren’t necessarily Kegan-5, but the person spearheading the project (who is in her 70s) certainly is. And so, therefore, are our models, our equivalent to a “charter”, etc.
It’s also the case that the mode of interaction that we’re training here is fluid as opposed to systematic, which shows up in the ways that we make agreements, commitments, and the general way-we-do-things-here. I was very much operating in (and committed to!) systematic mode when I first joined several years ago, and I’m still getting comfortable with this. It’s challenging but worth it, and we’re working to build a bridge to meta-rationality to make that learning process easier.
I think that Duncan’s intended context will potentially be (a) an awesome place to go from Kegan-3 to Kegan-4, and (b) an awesome place to operate in an exceedingly high-functioning Kegan-4 way. It asks that of its members. I don’t expect it to create a demand for most Dragons to operate in a Kegan-5 way, which is the core difference between it and the project I’m a part of.
Is there more information available about your project publicly? Or some information I can get non-publicly?
Not officially at this stage; we’re in a process of overhauling a lot of things, including answers to questions like “who are we?” and “what are we calling ourselves?”
That said, this category of posts on my blog has a lot of content about our philosophy, models, culture, etc.
Somewhat scattered reactions:
I am really interested to see the result of this experiment.
I think the underlying models are extremely plausible, with the next bullet point as a possible exception.
I am aesthetically very skeptical of phrases like “absolutely reliable” (in Problem 4). I don’t think it’s possible for something to be absolutely reliable, and it seems dangerous/brittle to commit to achieving something unachievable. However, this may be primarily an aesthetic issue, since I think the solution presented in Problem 3 is very sensible.
I don’t buy claim 4, “It does actually require a tyrant”. I agree that it isn’t always possible to achieve consensus. I don’t think that hierarchical authority is the only way to solve that problem. Democratic Centralism is a well-tested alternative, for instance.
I find the code of conduct worrisome, at least as presented. The rules seem likely to encourage hypocrisy and dishonesty, since they make psychologically implausible demands which in many cases are undetectable at time of infraction. This could potentially be mitigated by norms encouraging confession/absolution for sins, but otherwise I expect this to have corrosive effects.
I am totally uninterested in joining the experiment, despite my interest in its outcome. I would be likely to be interested in substantially more time-boxed activities with similar expectations.
“norms encouraging confession/absolution for sins” is a somewhat … connotation-laden … phrase, but that’s a big part of it. For instance, one of the norms I want to build is something surrounding rewarding the admission of a mistake (the cliff there is people starting to get off on making mistakes to get rewarded, but I think we can dodge it), and a MAJOR part of the regular check-ins and circles and pair debugs will be a focus on minimizing the pain and guilt of having slipped up, plus high-status people leading the way by making visible their own flaws and failings.
+1 for noticing and concern. Do you have any concrete tweaks or other suggestions that you think might mitigate?
Also: “absolute” is probably the wrong word, yeah. What I’m gesturing toward is the qualitative difference between 99% and 99.99%.
There’s definitely a qualitative shift for me when something moves from “This is very likely to happen” to “This is a fact in the future and I’ll stop wondering whether it’ll happen.”
While I think it’s good to remember that 0 and 1 are not probabilities, I also think it’s worthwhile to remember that in a human being they can be implemented as something kind of like probabilities. (Otherwise Eliezer’s post wouldn’t have been needed!) Even if in a Bayesian framework we’re just moving the probability beyond some threshold (like Duncan’s 99.99%), it feels to me like a discrete shift to dropping the question about whether it’ll happen.
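One way to make that concrete (a standard gloss, not anything original to this thread): measured in log-odds,

$$
\operatorname{logit}(p)=\log_2\frac{p}{1-p},\qquad
\operatorname{logit}(0.99)\approx 6.6\ \text{bits},\qquad
\operatorname{logit}(0.9999)\approx 13.3\ \text{bits},\qquad
\operatorname{logit}(1)=+\infty,
$$

so the jump from 99% to 99.99% is a large but finite jump in evidence, while full certainty sits infinitely far up the scale; that is roughly the sense in which 0 and 1 “aren’t probabilities.”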
I think that’s a fine time to use a word like “absolute”, even if only aesthetically.
Yeah, there’s some switch from “am maintaining uncertainty” to “am willing to be certain and absorb the cost of an unpleasant surprise.” Or from “would not be surprised by failure” to “have decided to be surprised by failure.”
Those sound like good ideas for mitigating the corrosive effects I’m worried about.
My personal aesthetic vastly prefers opportunity framings over obligation framings, so my hypothetical version of the dragon army would present things as ideals to aspire to, rather than a code that must not be violated. (Eliezer’s Twelve Virtues of Rationality might be a reasonable model.) I think this would have less chance of being corrosive in the way I’m concerned about. However, for the same reason, it would likely have less force.
Re: absolute. I agree that there can be a qualitative difference between 99% and 99.99%. However, I’m skeptical of systems that require 99.99% reliability to work. Heuristically, I expect complex systems to be stable only if they are highly fault-tolerant and degrade gracefully. (Again, this may still be just an aesthetic difference, since your proposed system does seem to have fault-tolerance and graceful degradation built in.)
On the other hand… look at what happens when you simply demand that level of reliability, put in the effort, and get it. From my engineering perspective, that difference looks huge. And it doesn’t stop at 99.99%; the next couple nines are useful too! The level of complexity and usefulness you can build from those components is breathtaking. It’s what makes the 21st century work.
I’d be really curious to see what happens when that same level of uncompromising reliability is demanded of social systems. Maybe it doesn’t work, maybe the analogy fails. But I want to see the answer!
Who exactly will be doing the demanding, and what would the price be for not delivering?
Authoritarian systems are often capable of delivering short-term reliability by demanding the head of everyone who fails (“making the trains run on time”). Of course pretty soon they are left without any competent professionals.
Do you have examples of systems that reach this kind of reliability internally?
Most high-9 systems work by taking lots of low-9 components and relying on not all of them failing at the same time. I.e., if you have ten 95% systems that fail completely independently, and you only need one of them to work, that gets you something like thirteen nines of availability (the chance of all ten failing at once is 0.05^10, roughly 10^-13).
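A quick back-of-envelope sketch of that arithmetic (illustration only; the function name is made up):

```python
# Availability of a redundant set of independent components,
# where only one of them needs to be working.
def combined_availability(per_component: float, n: int) -> float:
    """Probability that at least one of n independent components works."""
    return 1 - (1 - per_component) ** n

print(combined_availability(0.95, 10))  # ~0.9999999999999 -- roughly thirteen nines
print(combined_availability(0.95, 2))   # 0.9975 -- even two-way redundancy helps a lot
```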
Expecting a person to be 99% reliable is ridiculous. That’s like two or three sick days per year, ignoring every other possible cause of failing to complete a task. Instead you should build systems and organisations that have slack, so that one person failing at a particular point in time doesn’t make a project/org fail.
Well, in general, I’d say achieving that reliability through redundant means is totally reasonable, whether in engineering or people-based systems.
At a component level? Lots of structural components, for example. Airplane wings stay attached at fairly high reliability, and my impression is that while there is plenty of margin in the strength of the attachment, it’s not like the underlying bolts are being replaced because they failed with any regularity.
I remember an aerospace discussion about a component (a pressure switch, I think?). NASA wanted documentation for 6 9s of reliability, and expected some sort of very careful fault tree analysis and testing plan. The contractor instead used an automotive component (brake system, I think?), and produced documentation of field reliability at a level high enough to meet the requirements. Definitely an example where working to get the underlying component that reliable was probably better than building complex redundancy on top of an unreliable component.
Yeah. I’ve got a couple brilliant and highly capable friends/allies/advisors who also STRONGLY prefer opportunity framings over obligation framings. I think that’s one of the things where the pendulum has overcorrected, though—I think the rationality community as a whole is rather correctly allergic to obligation framings, because of bad experiences with badly made obligations in the past, but I think we’re missing out on an important piece of the puzzle. You can run a successful thing that’s, like, “we’ll do this every week for twelve weeks, show up as much as you like!” and you can run a successful thing that’s, like, “we’ll do this if we get enough people to commit for twelve weeks!” and I think the two styles overlap but there’s a LOT of non-overlap, and the Bay Area rationalists are missing half of that.
I actually totally buy this. There are some things where you just have to commit, and accept the obligations that come with that.
My hesitation primarily comes from the fact that the code of conduct seems intended to be pervasive. It even has requirements that happen entirely inside your own mind. These seem like bad features for an obligation-based system.
My model is that obligation-based systems work best when they’re concrete and specific, and limited to specific times and circumstances. “Commit to performing specified activities twice a week for twelve weeks” seems good, while “never have a mental lapse of type x” seems bad.
That makes sense, yeah. I’m hoping the cure comes both from the culture-of-gentleness we referenced above, and the above-board “Yep, we’re trying to restructure our thinking here” and people choosing intelligently whether to opt in or opt out.
Good place to keep an eye out for problems, though. Yellow flag.
Edit: also, it’s fair to note that the bits that go on inside someone’s head often aren’t so much “you have to think X” as they are “you can’t act on ~X if that’s what you’re thinking.” Like, the agreement that, however frustrated you might FEEL about the fact that people were keeping you up, you’re in a social contract not to VENT at them, if you didn’t first ask them to stop. Similarly, maybe you don’t have the emotional resources to take the outside view/calm down when triggered, but you’re aware that everyone else will act like you should, and that your socially-accepted options are somewhat constrained. You can still do what feels right in the moment, but it’s not endorsed on a broad scale, and may cost.
This framing does bother me less, so that is a fair clarification. However, I don’t think it applies to some of them, particularly:
True. Updated the wording on that one to reflect the real causality (notice negative model --> share it); will look at the others with this lens again soon. Thanks.
I applaud the experiment, and the writeup! Do you have a place where you’ll publish metrics (people contacted, interest level, etc. before starting, and self-reported or objective measures of your stated objectives every week)?
That’s not been formally set, but yes—that’s the biggest ask we get from outsiders interested, and it’s clearly one of the “obvious things” that we ought to do, so it’s been part of the plan for a while now. We just have to hammer out the details once the group is set.
Depending on interest, we may publish those updates here on LW, or make them available through my blog or FB, or some other option we haven’t thought of yet.
From the skeptical side, I would strongly suggest committing to a publicly visible schedule for updates, reports on transitions (e.g. out of bootcamp), and a final report. The outside world would be well served by knowing how this turns out, and having a schedule which is evidently independent of considerations such as “is this currently going well” would do a great deal to reassure us that we will know in time.
I do note that, while I’d like to collect data and make that data available to other humans trying to do cool stuff in the world, I’m not particularly concerned with assuaging all skeptics/reassuring people who, from the outside, think that it’s bad. This post is sort of my one big push to do that, after which I planned to shrug and just let people make the judgments they’re gonna make.
A schedule is still a solid structure just along the “do this properly” axis, though.
If you don’t commit to publishing negative results, I commit to refusing to trust any positive results you publish.
That’s absolutely fair. The point I’m trying to make is that it’s not about publishable results either way. Like, yes, I’d like to ship useful information to the outside world, but that’s a distant second priority to making good things happen on the ground.
What I do commit to is not making the choice to publish based on whether things are good or bad. I commit to publishing if and only if a) I have spare time and cycles, and b) there’s something useful for others to hear.
The only way there would be nothing useful to learn is if there was a complete failure due to circumstances outside of the influence of anyone involved, such as an earthquake that halted the plan. Even then a quick note to that effect would be of use.
0) This is not for me, not because of a bug in the proposed structure but because I don’t know you and don’t know any of the people recommending you. Two people immediately came to mind whom I would join up with, over most alternative situations, if they proposed this with themselves in your place, and three more whom I would probably follow into something like this over my current situation.
1) You can’t name something Dragon Army and not expect nerd pedantry, but this is pedantry with a point behind it. Dragon Army (in the book) distributed leadership down as much as possible. Each toon leader had more degrees of freedom from Ender’s plans, each toon had a second who was expected to make decisions, and soldiers were more free to question their toon leaders. I know Dragon Army (the name) has a certain positive association in rationalist circles, but what you’re describing sounds more like Salamander Army. This is meant as nerd pedantry more than disagreement with your proposed goals or metrics (Salamander was doing really well in the standings after all) but the difference between Salamander and Dragon hierarchy seems important in this context. Dragon Army won by having a dozen good commanders all thinking at once, Salamander won by having one or two good commanders and being able to expect sharp obedience from everyone under them.
2) The second highest value change (highest is brought up in point 0) would be some form of “I Told You So” and accountability. I find I am much happier to submit to doing things I think are incorrect if my dissension has been recorded and I can point at it later. Something like an internal prediction market is probably overkill and would erode confidence in leadership in a bad way, but a norm where someone could say “I’m 70% confident this treehouse won’t support enough weight if we nail it like that” and someone quickly sticks that in a google form might be fast enough not to interrupt things (a rough sketch of what I mean follows below). This may or may not help with general cohesion or be relevant to the people who are actually probably joining.
This is sort of related to how often “sure, I’ll do it the way you said, as long as I have it in writing that I think it’s dumb” has saved me by covering my rear. It also provides an important check on an incompetent leader, but mostly I’d want it because then the nagging thought “this is a bad idea” is out of my head and I can forget about it for a while. It’s sort of like how singing a song out loud sometimes stops it being stuck in your head.
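A minimal sketch of what that low-friction dissent log could look like (hypothetical names and fields, just to show how little machinery it needs; a shared spreadsheet would do equally well):

```python
# Hypothetical "I told you so" log: record dissenting predictions with a
# confidence, resolve them later, and keep a simple calibration score.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Prediction:
    who: str
    claim: str
    confidence: float                 # e.g. 0.70 for "70% confident"
    made_on: date = field(default_factory=date.today)
    came_true: Optional[bool] = None  # filled in once the outcome is known

    def brier(self) -> Optional[float]:
        """Squared error of the stated confidence, once resolved (lower is better)."""
        if self.came_true is None:
            return None
        return (self.confidence - (1.0 if self.came_true else 0.0)) ** 2

log: List[Prediction] = []
log.append(Prediction("Alex", "treehouse won't hold enough weight nailed like that", 0.70))

# ...later, once the treehouse has or hasn't collapsed:
log[0].came_true = True
print(log[0].brier())  # 0.09 -- the dissent is on record and scored
```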
3) “Internal economy trading effort for money and so on.” Can I pay someone to do my lateness-apology push-ups for me? That’s a joking example, but given the likelihood of large income discrepancies, something of that nature may come up, and it might be worth having a framework for it. In the same ballpark, intense cooperation seems like it might be odd in non-DA-associated things. Examples: what happens if one member applies for a job at a company another member works for? What happens if one member commits a crime and asks other members to be their alibi? I don’t really expect either of those examples to actually come up, but they are examples where organizations structurally similar to what you’re proposing can do very well for their members in ways that maybe aren’t good for the surrounding social structures.
4) If I knew that this general sort of setup was working well for all concerned, I wouldn’t consider it lasting indefinitely with the same leader to be a bad thing. That said, since you stated an intention to only lead it for about a year, ‘temporary’ leaders leading indefinitely is pretty strongly associated with this general sort of setup no longer working well for all concerned. If this started today, and you were still leading it in two years, I’d take that as evidence something has gone wrong. This gets lessened greatly if individual people are regularly rotating out of the group and all have wonderful praises for it.
All of the above is even more true for romantic/sexual relations between the leadership and the rank-and-file.
5) I’m strongly in favour of this being tried, and I’ll be reading any updates with great interest. Good luck!
Thanks for the detailed comment!
1) Yeah, I’m emphasizing the more authoritarian parts, because those are the more dangerous/aversive ones, but in fact Dragon Army is the source of the aesthetic. I agree with almost everything you said in 1), and that’s what the house is supposed to be like. Don’t forget, though, that while Ender distributed authority as broadly as possible, he was firmly, absolutely in command, in the end. When he spoke, they moved. The key thing was that a) he used that as rarely as possible and b) he didn’t undercut his toon leaders when he exercised central authority.
2) Yeah, absolutely. We’ve already installed a norm of making direct, one-to-one bets, and are almost certainly going to install prediction markets and “I told you so” structures. In particular, I think the people originally opposed to a given failed experiment should be given greater weight in the next decision, if their predictions about that experiment came true. It’s tough to balance this against “creating perverse incentives,” but I think we can manage it.
3) Yes. It’s tricky, because we have to work out rates-of-exchange between e.g. rich and poor participants, but an internal economy is something I hope to create with second-priority urgency (i.e. in the first couple of months).
4) I’m not committed to ceasing after a year, if all is going swimmingly, but essentially I want to open that question up to the group itself after six months.
5) Thanks!
My curiosity is satisfied by your answers to 2-4, but I want to dig a little deeper into 1 if you don’t mind.
The source of the aesthetic is Dragon Army but emphasizing Salamander since those are the pieces more likely to be found off-putting makes sense to me. If someone’s on the fence, they probably shouldn’t go forward. That said, you may have overemphasized your ideal here. Ender was not firmly, absolutely in command; his toon leaders took up body-guarding him over his direct objections in a way that they wouldn’t have for a more authoritarian commander. Would you consider such a mutiny to be a sign you’d failed, or a sign you’d succeeded? (I strongly don’t expect body-guarding to be relevant, but I can imagine similar well-intentioned disagreements.)
Also, since you are changing the emphasis like this I wonder what your plans are for any Nikolai Delphikis* or Beans** that wind up involved? “Screen or vet people carefully so we don’t have any” is noted as probably a good idea, but is also insufficient.
*By Nikolai, I mean someone who would be happy following a confident leader, but who feels out of their depth when expected to constantly adapt without sufficient direction. A potentially good Salamander member who read the Salamander description and was surprised by the Dragon direction it took. Maybe even someone who looks very Dragon-like in most situations, but finds themselves the least-improving member of what you set up. On the one hand, if you’re pulling from the rationalist population this seems an unexpected direction to find errors in; on the other hand, I have unexpectedly had the experience of finding myself the slowest and least agenty person in a group, and it was demoralizing in a way that made me empathize with the fictional Nikolai.
**By Bean, I mean someone who gets involved expecting more degrees of freedom or a higher position on the hierarchy than they wind up with. Bean put himself in Dragon Army knowing he was coming right out of launch, knowing he was small, and knowing Ender would have no reason to pay particular attention to this particular rookie, and then got upset that he wasn’t given any authority or special notice. If you have at least fifteen people not counting yourself or your second, I’d be willing to make a 1:1 bet that you are going to wind up with someone wanting more degrees of freedom or more authority than you want to give them.
I actually take the text of Ender’s Game pretty seriously as a model; I think it offers a lot of good perspective on human morality and interaction. So I actually have the example of the toon leaders bodyguarding Ender as a salient … um … parable? … in my head already, and would view that as a sign I’d succeeded.
We’ve already got a Bean; his name is Eli Tyre. His position as second-in-command didn’t exist through the whole eight months of planning; it was created about 12 hours before I posted the charter. Similarly, the more credible responsibility others can take, the more I get to step back and do less; the only block here is credibly believing that the people taking power will do the right thing on all the levels of meta, or setting up scaffolds such that damage-from-mistakes is minimized and survivable.
As for Nikolais, the first priority is the sign of the derivative (are you progressing positively), the second priority is the derivative (is your progress steep), and a distant, distant third is your actual position (are you in fact now good at X). A major part of the point of the house is to make everyone, myself included, feel a bit like Nikolai? i.e. we want everyone to be at the edge of their growth. But similarly, we want every Nikolai to have a Bean … hence the tight-knit, do-things-together, check-in one-on-one social structure.
I … think that answered your questions? Let me know if I missed something important.
I think it’s a solid proposal.
One major caveat, I think, is that it’s a structure that wouldn’t work for most people in the rationality community. Calling most of them libertines incompatible with such a strict framework wouldn’t be too far from the truth. But those are the views of a very distant outsider who doesn’t know the deeper views/feelings of the Berkeleyans you refer to, and is only familiar at a superficial glance.
But for a niche group of strongly driven baby rationalists lacking for direction/purpose who aren’t opposed to operating within a strict structure, I don’t know how this wouldn’t be an ideal framework to use.
As a former military enlisted, I think all the military comparisons made are valid. Allow me to include one more. I believe that also like the military, there will be a high turnover rate—once people get what they want out of the community, they leave. As I allude to earlier, the appeal of joining is acquiring skills in discipline/organization/direction. Once those are acquired, there is very little left to motivate people to stay. But, in both cases, this isn’t really a bad thing either. If everyone leaves after the one year commitment, but they reflect on the experience positively, then it would still be considered a success.
Yeah. In most-but-not-all of my conceptions of the house, I imagine “leaving” the post of guy-in-charge after a year, if not six months. Maybe not leaving the context as a whole, but “turning over” as far as roles are concerned.
It’s hard to go from being the boss of someone to being their subordinate, and vice versa. I think it’s more plausible to shift into an advisory, strategic, consultant, or executive role rather than swap.
Sounds awful to me. I would absolutely hate to live somewhere where I was regularly told what to do and/or expected to fit in with rituals. I tolerate this kind of thing at work because I have to.
What will you say when people come to you saying “I’m not sure this is really worth it for me”? I personally don’t think self-improvement is a very stable overall goal. In my cursory acquaintance, most cults/high-demand living situations tend to believe in “something greater”—often something quite ridiculous, but nonetheless something bigger than the individual. Perhaps it is important to have something which seems to trump feelings of personal discomfort.
Basically what I tell people (in answer to 2) is “ABSOLUTELY trust that instinct. This requires pretty high confidence that this is the right move, and DEFINITELY high confidence that if it’s the wrong move you won’t take significant damage over the six month period. If you’re unsure, the answer should be ‘no.’”