The Archipelago Model of Community Standards
Epistemic Status: My best guess. I don’t know if this will work but it seems like the obvious experiment to try more of.
Epistemic Effort: Spent several months thinking casually, 25ish minutes consolidating earlier memories and concerns, and maybe 10ish minutes thinking about potential predictions. See comment.
Building off:
Open Problems in Group Rationality [Conor Moreton]
Archipelago and Atomic Communitarianism [Scott Alexander]
Claim 1 - If you are dissatisfied with the norms/standards in a vaguely defined community, a good first step is to refactor that community into sub-groups with clearly defined goals and leadership.
Claim 2 - People have different goals, and you may be wrong about what norms are important even given a certain goal. So, also consider proactively cooperating with other people forming alternate subgroups out of the same parent group, with the goal of learning from each other.
Refactoring Into Subcommunities
Building groups that accomplish anything is hard. Building groups that prioritize independent thinking to solve novel problems is harder. But when faced with a hard problem, a useful technique is to refactor it into something simpler.
In “Open Problems in Group Rationality”, Conor lists several common tensions. I include them here for reference (although any combination of difficult group rationality problems would suffice to motivate this post).
Buy-in and retention.
Defection and discontent.
Safety versus standards.
Productivity versus relevance.
Sovereignty versus cooperation.
Moloch and the problem of distributed moral action.
These problems don’t go away when you have clearly defined goals. A corporation with a clearcut mission and strategy (e.g. maximize profit by selling widgets) still has to navigate the balance between “hold employees to high standards to increase performance” and “make sure employees feel safe enough to do good work without getting wracked with anxiety” (or just quitting).
Such a corporation might make different tradeoffs in different situations—if there’s a labor surplus, they might be less worried about employees quitting because they can just find more. If the job involves creative knowledge work, anxiety might have greater costs to productivity. Or maybe they’re not just profit-maximizing: maybe the CEO cares about employee mental health for its own sake.
But well-defined goals, with leaders who can enforce them, at least make it possible to figure out what tradeoffs to make and actually make them.
Whereas if you live in a loosely defined community where people show up and leave whenever they want, and nobody can even precisely agree on what the community is, you’ll have a lot more trouble.
People who care a lot about, say, personal sovereignty, will constantly push for norms that maximize freedom. People who care about cooperation will push for norms encouraging everyone to work harder and be more reliable, at personal freedom’s expense.
Maybe one group can win—possibly by persuading everyone they are right, or simply by being more numerous.
But,
A) You probably can’t win every cultural battle.
B) Even if you could, you’d spend a lot of time and energy fighting that might be better spent actually accomplishing whatever these norms are actually for.
So if you can manage to avoid infighting while still accomplishing your goals, all things being equal that’s preferable.
Considering Archipelago
Once this thought occurred to me, I was immediately reminded of Scott Alexander’s Archipelago concept. A quick recap:
Imagine a bunch of factions fighting for political control over a country. They’ve agreed upon the strict principle of harm (no physically hurting or stealing from each other). But they still disagree on things like “does pornography harm people”, “do cigarette ads harm people”, “does homosexuality harm the institution of marriage which in turn harms people?”, “does soda harm people”, etc.
And this is bad not just because everyone wastes all this time fighting over norms, but because the nature of their disagreement incentivizes them to fight over what harm even is.
And this in turn incentivizes them to fight over both definitions of words (distracting and time-wasting) and what counts as evidence or good reasoning through a politically motivated lens. (Which makes it harder to ever use evidence and reasoning to resolve issues, even uncontroversial ones)
Then...
Imagine someone discovers an archipelago of empty islands. And instead of continuing to fight, the people who want to live in Sciencetopia go off to found an island-state based on ideal scientific processes, and the people who want to live in Libertopia go off and found a society based on the strict principle of harm, and the people who want to live in Christiantopia go found a fundamentalist Christian commune.
They agree on an overarching set of rules, paying some taxes to a central authority that handles things like “dumping pollutants into the oceans/air that would affect other islands” and “making sure children are well educated enough to have the opportunity to understand why they might consider moving to other islands.”
Practical Applications
There’s a bunch of reasons the Archipelago concept doesn’t work as well in practice. There are no magical empty islands we can just take over. Leaving a place if you’re unhappy is harder than it sounds. Resolving the “think of the children” issue will be very contentious.
But, we don’t need perfect-idealized-archipelago to make use of the general concept. We don’t even need a broad critical mass of change.
You, personally, could just do something with it, right now.
If you have an event you’re running, or an online space that you control, or an organization you run, you can set the norms. Rather than opting-by-default into the generic average norms of your peers, you can say “This is a space specifically for X. If you want to participate, you will need to hold yourself to Y particular standard.”
Some features and considerations:
You Can Test More Interesting Ideas. If a hundred people have to agree on something, you’ll only get to try things that you can get 50+ people on board with (due to crowd inertia, regardless of whether you have a formal democracy).
But maybe you can get 10 people to try a more extreme experiment. (And if you share knowledge, both about experiments that work and ones that don’t, you can build the overall body of community-knowledge in your social world)
I would rather have a world where 100 people try 10 different experiments, even if I disagree with most of those experiments and wouldn’t want to participate myself.
You Can Simplify the Problem and Isolate Experimental Variables. “Good” science tests a single variable at a time, so you can learn more about what-causes-what.
In practice, if you’re building an organization, you may not have time to do “proper science”—you may need to get a group working ASAP, and you may need to test a few ideas at once to have a chance at success.
But, all things being equal it’s still convenient to isolate factors as much as possible. One benefit to refactoring a community into smaller pieces is you can pick more specific goals. Instead of reinventing every single wheel at once, pick a few specific axes you’re trying to learn about.
This will both make the problem easier, as well as make it easier to learn from.
You Can ‘Timeshare Islands’. Maybe you don’t have an entire space that you can control. But maybe you and some other friends have a shared space. (Say, a weekly meetup).
Instead of having the meetup be a generic thing catering to the average common denominator of members, you can collectively agree to use it for experiments (at least sometimes). Make it easier for one person to say ‘Okay, this week I’d like to run an activity that’ll require different norms than we’re used to. Please come prepared for things to be a bit different.’
This comes with some complications—one of the benefits of a recurring event is people roughly know what to expect, so it may not be good to do this all the time. But generally, giving the person running a given event the authority to try some different norms out can get you some of the benefits of the Archipelago concept.
You Can Start With Just One Meetup
Viliam in the comments made a note I wanted to include here:
It is important to notice that the “island” doesn’t have to be fully built from start. “Let’s start a new subgroup” sounds scary; too much responsibility and possibly not enough status. “Let’s have one meeting where we try the norm X and see how it works” sounds much easier; and if it works, people would be more willing to have another meeting like that, possibly leading to the creation of a new community.
Making It Through the ‘Unpleasant Valley’ of Group Experimentation
I think this graph was underappreciated in its original post. When people try new things (a new diet or exercise program, studying a new skill, etc), the new thing involves effort and challenges that in some ways make it seem worse than whatever their default behavior was.
Some experiments are just duds. But oftentimes it feels like it’ll turn out to be a dud, when you’re in the Unpleasant Valley, and in fact you just haven’t stuck with it long enough for it to bear fruit.
This is hard enough for solo experiments. For group experiments, where not just one but many people must all try a thing at once and get good at it, all it takes is a little defection to spiral into a mass exodus.
Refactoring communities into smaller groups with clear subgoals can make it possible for a group to make it through the Valley of Unpleasantness together.
Overlapping Social Spheres
Sharing Islands and Cross Pollination
In the end, I don’t think “Islands” is quite the right metaphor here. One of the things that makes social archipelago different from the canonical example is that the islands overlap. People may be a member of multiple groups and sub-groups.
A benefit of this is cross pollination—it’s easier to share information and grow if you have people who exist in multiple subcultures (sub-subcultures?) and can translate ideas between them.
How much benefit this yields depends on how mindfully people are approaching the concept, and how much of their ideas they are sharing (making both the object-level-idea and the underlying reasons accessible to others).
This post is primarily intended as reference—I have more specific ideas on what kinds of communities I want to participate in, and thoughts on “underexplored social niches” that I think others might consider experimenting with. Some of those thoughts will be on the LessWrong front page, others on my private profile or the Meta section.
But meanwhile, I hope to see more groups of people in my filter bubble self organizing, carving out spaces to try novel concepts.
Thoughts about additional ramifications of this (not optimized much for readability).
Background on Epistemic Effort:
I’m of the belief that if I’m proposing a major idea that I’m hoping people will take action on, I should think seriously about the idea for… N minutes. N varies. But the key is to look into the dark, accounting for positive bias: What are the ways an idea might not succeed? What consequences might it have that didn’t fit as prettily into the narrative I was crafting?
In my personal experience, this takes something like 30 minutes at least. In my original Epistemic Effort post I suggested 5 minutes, but I’ve found 15 minutes is barely enough to finish searching through existing thoughts already in the back of my mind. 30 minutes is how long it takes to get started trying to think multiple steps into the future and generate novel concerns.
This process is somehow very different from the process that generates my original blogpost.
I’ve noticed that now that I know it takes at least 30 minutes, I’m a lot more hesitant to even try to take 5. (I almost posted this without doing so, just flagging “didn’t think for 5 minutes about how it might fail”, but that felt embarrassing, and it seemed important enough that I went ahead and did it. This might bode poorly for the idea.)
“What About the Babyeaters?”
A failure mode of canonical Archipelago-ism is altruism + “think of the children.” If The Other Group is focused on something actively harmful, and you don’t trust people to be able to leave, or you think that the harms take root in childhood before people even have a chance to choose a civilization for themselves (say, secondhand smoke in the privacy of people’s own homes, or enforcing strict gender norms from early childhood)...
...then the “to each their own” concept advocated here doesn’t sound as persuasive.
This issue takes a different form with Social Subculture Archipelagism. Even if there are no literal children, you may worry about pernicious, harmful ideas taking root that you expect to be memetically successful even though they are dangerous.
It’s very conceivable to me that the outcome of a “successful” Archipelagism taking root would be Bad Ideas Winning, with the whole thing ending up net negative.
My current take is that some kind of “Bad Ideas Being Successful” thing is likely to happen, but that the overall Net Harm/Good will be positive. I don’t really have a justification for that. Just a feeling.
Observations of what’s happened so far....
Competing Norms
During the Hufflepuff Unconference, I ran into issues of how norms collide. I wanted people to either firmly commit to coming, or to not come. This failed in a few ways:
I held the event in a public space known to the surrounding community, which meant people with no idea of the norms ended up coming regardless.
Some people who were scrupulous and recognized that they couldn’t follow the rules chose not to come. Some people who didn’t care about respecting the rules just came anyway, creating a mild asshole filter. (I only have evidence of this affecting maybe 2-4 people total in either direction, but it was noticeable.)
People who earnestly wanted to come and follow the rules ran into an issue where other people, who weren’t firmly committing to things, prevented them from making a firm commitment. (e.g. someone had a long-lost friend visiting for the week; the friend wasn’t sure of their schedule, and Person-A definitely wanted to make time for their friend if they could, but definitely wanted to come to the Unconference if their friend would be busy.)
This last issue eventually resulted in my changing the rules to “please respond with an explicit estimate of how likely you are to come, and some of the decision-relevant things that might affect whether you come or not.” I think this worked better.
I don’t have evidence that this is that big a problem (I tried one experiment, it didn’t work as well as I wanted, I came up with a solution. System working as intended). But it implies future issues that one might not foresee.
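The explicit-probability RSVP rule lends itself to a small arithmetic sketch: if each attendee reports a probability of coming, you can sum those probabilities to forecast headcount, and (treating RSVPs as independent) get a rough uncertainty estimate. A minimal sketch, with illustrative names and numbers not drawn from the actual Unconference:

```python
# Hypothetical sketch: forecasting attendance from explicit
# probability-of-coming RSVPs, as in the revised Unconference rule.
from math import sqrt

def expected_attendance(rsvps):
    """rsvps: dict mapping attendee name -> probability of attending (0.0-1.0).

    Returns (expected headcount, standard deviation), treating each RSVP
    as an independent Bernoulli trial (mean p, variance p*(1-p)).
    """
    mean = sum(rsvps.values())
    variance = sum(p * (1 - p) for p in rsvps.values())
    return mean, sqrt(variance)

# Illustrative responses, not real data:
rsvps = {"alice": 0.9, "bob": 0.5, "carol": 0.8, "dave": 0.2}
mean, sd = expected_attendance(rsvps)
```

This gives an organizer a planning number ("expect about 2-3 people") instead of a binary yes/no list that the long-lost-friend situation breaks.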
It’s Hard To Make Spaces
I have attempted to create a few spaces (at different times in different cities). And in general, it’s harder to create a new space dedicated to a particular thing than I’d have thought (in particular, finding enough people who care about a thing to seriously try out novel norms). In New York, it was hard because there weren’t that many people. In the Bay Area, it’s been harder-than-I-expected because although there ARE enough people that I expect to flesh out subcultures, those people have more things competing for their attention.
I (currently) expect to be able to make things happen, but it won’t be as easy as hanging out a shingle. 45 people came to the Hufflepuff Unconference, but I spent 2 months and several blogposts hyping that up. (More recently, I tried to get an Epistemic Unconference happening that’d have a different set of norms, and I couldn’t get a critical mass of people interested. I didn’t try very hard; it’s Solstice Season and I need to conserve my “hey everyone let’s all do an effortful thing!” energy for that. But this clarified the degree of difficulty I’d have attracting interest in things.)
I expect to have an easier-than-average time getting people interested in things, and it to still require a couple enthusiasm-driving-blogposts per individual thing.
So with that in mind...
Predictions and Gears-Thinking
(IMO this is the hard part of a “think seriously for 30 minutes” exercise. This will be the most stream-of-consciousness-y of the sections.)
First, I guess my most obvious prediction is “doing this at all is harder than I was hoping, and barely anything happens.”
Further predictions are sort of weird, since the act of saying some of them out loud might make them come true. (It occurs to me I could secretly make predictions and see if anything happens in a year. I may do that, but am not doing it yet.)
I notice that the default way my brain is attempting to generate thoughts here feels like the Social Modesty Module running.
The second thing my brain’s doing is listing the things I hope will happen, and then seeing how my internal-surprise-o-meter feels about each.
The third thing, which I will actually record here, is listing things that seem like they might happen (that I want to happen, or am afraid might happen), not listing my particular predictions yet, but at least getting the predictable-in-theory ideas out there:
How many people will actually attempt to start a subgroup or change norms at one they already control as a result of this blogpost?
How many people will end up involved with those subgroups?
How many groups will happen secretly or privately? How many public?
How many will stick with experiments past the Unpleasant Valley?
In a year, and in 5 years, how many people will feel that those subgroups were useful?
How many novel social norms will be developed?
How many times do I expect that I’ll be surprised by something that happens as a result of this blogpost?
How many times do I expect that I’ll be confused by something that happens as a result of this blogpost?
How many attempted social norms will clash in actively bad ways?
Will I end up regretting this blogpost? (Separate questions for “will I think it turned out not to work but was still the right thing to push for at the time?” and “will I think, in principle, that I should have spent my time and social capital doing something else?”)
Will people end up more socially isolated, less isolated, or neutral, as a result of this class of experiment?
(Huh, a result of this: “generate hypotheses you can test without stressing about actually deciding on your predictions” was a surprisingly useful technique. I notice with several of the above that I have at least some intuitive sense of how they will play out, and with others, I notice I expect things to fail by default, but that I immediately see ways to make them less failure-prone, if I chose to spend the time doing so.)
It would be interesting to have a place to collect information about the subgroups. Monthly newsletters from non-secret ones on their progress and lessons learned?
I’d happily write about my group creation leanings.
Cool. It feels a bit premature to create anything formal to collect this sort of information, but simply making blogposts chronicling your progress seems like a good start.
(Note: My plan is to discuss specific-community-planning stuff on my private blog, and to periodically refactor the content for the front page, i.e. discussing it in a way that is relevant to people outside the community in question. I think this is probably the best general format for now.)
Actually, I was planning on doing just this:
https://namespace.obormot.net/Rdb/20171116001
If the group still feels like a part of the (LW-related) rationalist community, they should be willing to write something about them on LW, once in a while. Maybe this should be encouraged by some kind of regular “Rationalist Subgroups Report Thread”, where each group would briefly describe themselves, and say what they did during the <whatever would be the interval of that Thread>.
Then this Thread would act as the place to collect information; and perhaps it could have a related wiki page.
I am pessimistic about this. Not just because of the potential for Successful Bad Ideas; that’s already happened within the rationalist community without an explicit decision to archipelagize (uncontroversially, Leverage, and, more controversially, postrationalism). But also because a fractured community is vulnerable, and I expect this to cause or accelerate fracture. We’re already using different norms in different places; Tumblr/Twitter/LessWrong 1.0/Facebook, Bay vs. non-Bay, Berkeley vs. SF vs. South Bay. I don’t think this needs more pushes to diverge.
In fact, to the extent I’m working on community issues it’s trying to find means of strengthening ties. My pro-epistemic holiday creation project is trying to find things to reinforce the shared features without tying us to any subculture. I think this is more needed at the present moment.
I agree that fracturing is a risk (that should have made my earlier list), and I think it’s a fair question to ask “how do you expect Archipelago to handle that risk?”. (I do have some answers to that)
My counter question is “if you’re not doing something Archipelago-like, how do you handle genuine conflict over what norms should exist?” (the same question I’d ask any group that’s attempting to resolve things via vague consensus)
I also argue:
Archipelago is happening by default anyway, only badly/haphazardly
People want different things. People are naturally isolated due to accident of geography and “which parts of the internet feel most comfortable?” and “people cluster in small groups for social reasons” and “people tend to like interacting with particular types of people.”
I think if you are dissatisfied with that, you need to take particular countermeasures. Those countermeasures depend on which sub-problem you’re solving, and I think those countermeasures are largely orthogonal to Doing Archipelago On Purpose.
People who want popular things can get their thing represented at larger spaces. People who want unpopular things are screwed, and may ultimately decide to leave.
The reason I’m motivated to do this in the first place is that putting continuous effort into things is unpopular, and this applies to things ostensibly part of The Rationalist Shtick. (It’s not unpopular in a “people feel it’s unpopular” sense, just a “hard things are hard, therefore most people don’t do them” sense.)
(Note that you can find people who put continuous effort into things and hang out with them, but the gatekeeping is entirely via vague-social-networking. One goal of On-Purpose-Archipelago is to make it easier for public facing groups to have standards.)
Metaphorical UniGov?
A section I probably should have included in the original essay is “what role does centralization play here?” In geopolitical Archipelago, Scott suggests a “UniGov” that everyone pays taxes to, whose job is to punish defectors, to provide education about different islands, and to be an inoffensive place for people to live if they have nowhere else to go.
Do we need a version of that? I think yes, but the implementation details might be different enough that it’s probably better not to try and force the metaphor to work.
I think there should be periodic large, public-facing events, intended as a place for people from different subcultures to mingle and either generically bond (Solstice-type things), or present their best ideas (Unconferences, CFAR Reunion [expensive but public facing]).
(I don’t currently believe smaller events can serve this purpose. A specific small event might bridge between two small groups in a specific fashion, but not act as a general-purpose-improve-connection-across-the-subsubcultures. I suppose lots of smaller events can do this collectively, although I would bet against this as a working systematic solution)
[Aside—Epistemic Habit, notice opportunity to operationalize prediction]
I want to say something in favor of haphazard archipelagos!
I think that it can often be a case of James Scott-ian local knowledge. Certain norms may naturally evolve because they are suited to the people who evolve them, in a way it’s hard to imitate through conscious norm design (because you can’t predict people’s needs, or you can’t come up with as clever solutions as the hivemind can, or whatever). In general, people can do a lot of really clever social cognition subconsciously that is really hard to explain consciously (this is why social skills classes are so bad).
To be clear, I am absolutely behind deliberate subculture design (with reasonable safeguards to avoid institutionally abusive communities); I am pretty much always in favor of more experimentation and more empiricism. But I also think that “haphazard archipelago” is not the same as “bad archipelago”.
Definitely agreed with that.
I think the existing system is basically an ad-hoc filtering system that meets the social goals of many of the people already in it. I’m not sure whether it works for finding people who are competent when you need competent people. (It might be doing a reasonable job of filtering for some combo of “competentish and a person you want to hang out with”.)
Four problems that seem like they need solving are:
1) It’s hard for newcomers (or even “moderately-old-timers”) to network their way into the existing social groups. This sucks for them.
2) I seem to run into people who end up in one sub-community but don’t really know what the other ones are or how to find them.
3) The aforementioned “none of the subcultures actually do quite the thing I want”.
4) I worry that the system filters more for “people’s ability to proactively navigate social clusters” than it filters for any other particular kind of competence.
I consider that there is a proof of concept of small events, regularly done and shared, providing enough common connection to count as centralization: American civic religion. Thanksgiving is a small event (usually) with a lot of variation and a few core components. People do all kinds of things on the 4th of July, but there will be some fireworks in there and some red, white, and blue. The Super Bowl is one our tribe generally opts out of, but it’s widely shared and also mostly small events.
None of that provides connection as strong as an Amish or Orthodox Jewish community, but it’s competitive with Twice-A-Year Catholics/Jews who show up just for the big ceremonies.
If regular small events provide cultural touchstones that any two small groups can bridge to each other, that’s useful as glue for conversation and cross-fertilization, which I think is more important. And they’re less epistemically dangerous than a clapping-along groupthink ceremony like Solstice.
Hmm. So, I certainly support more small events like you describe, but I’m not sure I grok what your goal with them is.
“American Civic Religion is competitive with twice-a-year Catholics” sounds true, but Broader-Rationality-Culture already seems at-least-as-good as both of those.
i.e. with Modern American Christmas, there’s a range of activities ranging from “singing songs drunkenly” to “quiet Midnight Mass” to “just visiting your relatives for dinner” to “watching the Rockettes at Rockefeller Center”.
LessWrong-descended-culture already seems to essentially be the sort of “twice a year Catholics/Jews” esque culture, with religish-holiday-esque-Winter-Solstice, general-festival-esque Summer Solstice, and CFAR Alumni Reunion and EA Global as non-religion-flavored things.
My impression is that your concern is more like “not liking that approach to establishing broader connection”.
My point is that clearly, small events are effective at sustaining ties, and so we should have some. I also am very suspicious of some common elements like singing in unison. They’re common enough to be perceived as secular, and so not given the same skepticism as explicit trappings of religion. I think this is a mistake. They have a literally hypnotic effect such that they tend to override standard skeptical filters, and I don’t see them as any less dangerous to epistemics when they are used to promote a fairly-good ideology.
My concern is that, used uncarefully, complex ceremonies and rituals create belief-system lock-in. The more you include, the more content you are embedding that bypasses evidence-based reasoning and gets treated as true until the audience thinks to question it. Knowing that even the best belief system assuredly has flaws, I think it is wrong to do this any more than absolutely necessary.
So, there’s obviously a bunch of disagreements I have there, but they don’t feel like they’re touching on any of my cruxes. (I listed four major events, only one of which was especially ritualized. I feel like my arguments stay mostly the same in a parallel world where Solstice doesn’t exist)
I do agree that there should be more small events of the sort you describe. I’m not as excited about them as you because I think it’s a lot harder to get lots of people to do a distributed small event than a big event.
I also just don’t feel a need for that many repeated events, large or small, and I feel close-to-saturated on them. (to add an additional repeated event, I’d either need to sacrifice a currently-in-rotation repeat event, or sacrifice the ability to try new things)
I wonder whether creating and splitting off the subgroups could actually be a mechanism to preserve the large community. Some people seem to come to LW meetups for the wrong reasons; for example, they prefer clever arguing and ignore the science; it would be nice to have them form a separate “clever arguers club” and leave. If people sort themselves out, everyone is saved the unpleasantness of having to kick out people who clearly don’t belong and disrupt the goals of the group. (“Let the heretics leave peacefully.”)
But of course this has the risk that the disruptive people would prefer to stay in the majority group (e.g. a disruptive clever arguer may dislike the presence of other clever arguers); or simply have enough time to participate in all subgroups.
Re: Leverage
I actually think Leverage is not an instance of a bad idea getting too[1] much mindshare, but is an example of fracturing. Leverage is actually one of the most Archipelag-y subcommunities that exists already. Most people think it’s a bad idea, so they went off on their own and did their own thing. And now AFAICT they don’t interact with anyone else much. [Edit: previously ended paragraph with “System working as intended”]
[1] Obviously they have non-zero mindshare and some financial resources, and one can argue over whether non-zero is too much. (I personally think Leverage is more valuable than most people think it is, but I am not currently interested in defending that claim and don’t expect others to take my word on it)
Strong disagreement that this is how we want things to go. Leverage is a black hole. It’s like Wonka’s chocolate factory without the delicious chocolate. Nobody ever goes in except to join! And nobody ever comes out! Occasionally I interact with Nevin, and they do some recruiting and fundraising, but lots of cool people and friends of mine and ours vanished without a trace. And That’s Terrible.
One can disagree about the value of Leverage. Maybe they’re doing good work. Maybe they’re not. I mean, they never talk to us, so how could we possibly tell? But if the proposal is for various groups of us to go off into our separate corners and never talk to the other groups again, I think that part is really bad.
Ah, I worded something weirdly there and then said a thing I didn’t mean to say.
“System working as intended” originally came after an extended anecdote about them making a bid for more public respect/status, and then failing.
I meant the comment to be saying “Leverage isn’t an example of the Bad Idea Gains traction problem, but is an example of the Splintering Problem.”
First, thank you so much for writing this!
It is important to notice that the “island” doesn’t have to be fully built from the start. “Let’s start a new subgroup” sounds scary; too much responsibility and possibly not enough status. “Let’s have one meeting where we try norm X and see how it works” sounds much easier; and if it works, people will be more willing to have another meeting like that, possibly leading to the creation of a new community.
I am afraid I can’t give this topic the time and energy it deserves, so I will mostly recycle some old thoughts: I believe that a community designed for long-term survival needs to be “eukaryotic”—to allow subgroups with different ideas of how to achieve the stated common goal, and different levels of commitment, while still treating all subgroup members as valid members of the group. (That means a subgroup going against the goals of the whole can be excommunicated, just like an individual going against the group. It’s just that neither mere membership in a subgroup, nor the lack thereof, is considered a violation of group norms. It is perfectly okay to join one or more subgroups, just like it is perfectly okay to ignore them.)
This is important in the long term, because people’s capacity to participate in group activities changes over time. For example, now that I have kids, I cannot spend the same amount of time on LW as I did before. (Generally, people with kids, or people who are e.g. currently changing jobs, will opt out of time-intensive group activities. If you kick them out, or just make them feel unwelcome, you lose their potential contributions in the future, when their situation changes again.) On the other hand, people with enough time and a lot of agency will feel that more should be done, and you should provide them with a valid way to do so—getting some other members involved while allowing the rest to ignore the project. Otherwise, they will probably leave the group and go to some other place where their contributions will be more welcome.
The obvious example of a very-long-lived “eukaryotic” organisation is the Catholic Church. On the outside, it has clear boundaries (people generally know whether they are members or nonmembers), but actually very little is expected from members other than professing their membership and participating in a few rituals (which is technically costly signalling, though lately the cost is relatively low; e.g. some people just visit the church once a year, at Christmas). People who desire more intense participation can become priests and/or join some internal subgroup such as the Jesuits. Both options are valid for a Catholic; the whole organisation works on the assumption that most people will choose the “easy” option and some will choose the “hard” option; without either group the organisation would fall apart. There can be multiple competing internal groups, such as the Jesuits and Dominicans, as long as they signal credibly that they are subordinate to the whole.
I hesitate to use Mensa as an example of a successful organisation, but let’s admit it has survived a few decades. Mensa is also “eukaryotic”; the officially recognized subgroups are called “special interest groups”, and are defined by a shared interest of a few Mensans, e.g. playing chess. (When I was more optimistic about Mensa, I was thinking about creating a rationalist SIG within Mensa.) You can participate in one, or many, or none of the SIGs.
Then I have a personal example from when I was active in the Esperanto community. On the outside, it is a community of people speaking the same artificial language; but the problem is that despite the shared language those people often lack a common topic. (Which usually defaults to meta, but using Esperanto to talk about Esperanto and the Esperanto movement can get boring quickly. Except to old people, who often love to talk about the past endlessly.) So with a few friends I founded a subgroup, with the goal of promoting Esperanto using the internet and electronic media generally. This didn’t mean leaving the larger community, nor inviting everyone to the project; we were a well-defined subgroup. We created a few websites and multimedia products, and later merged with another subgroup.
So, if the rationalist community is to exist long enough, it should have similar structure. A clear boundary with simple but clear rules (e.g. “if you believe in horoscopes, you are not a rationalist, no matter how you wish to identify”). Active subgroups. And some authority (e.g. a council of high-status rationalists) that can authoritatively declare the boundaries, admonish and excommunicate heretic subgroups, talk to media, etc. (to prevent “if you say that Deepak Chopra is not an important member of the rationalist movement, that’s just your opinion, man”).
Thanks. Particularly like:
I added it (citing you) to the OP since it was pretty concise and fit into the Practical Applications section.
Dangling sentence at the beginning:
This is one major advantage of having separate sub-reddits instead of merely separate tags. It allows each area of discussion to develop its own norms and rules of discussion, and is a large part of why Reddit is so vibrant.
Probably the easiest place for many of these groups to develop would be online, because it’ll be hard to get the numbers elsewhere, but we need the technical infrastructure to make that easy. I mean, I could try creating a group on another site, but people are much more likely to join when it is right here on Less Wrong.
I roughly agree, but if much more conversation continues along this line I think it would be best to move it to the meta section. (I have some additional thoughts but am holding off for now.)
In particular, thank you for pointing out that in social experiments, phenomenal difficulty is not much Bayesian evidence for ultimate failure.
Learning a new set of norms/standards and sticking to them in the right contexts is often not easy. Getting a bunch of other people to choose to do so seems likely to be harder. (Although that’s just my immediate sense of how it is; it may be completely wrong…)
Thus, I have a weak expectation that this will only work well when participants and organizers are fairly good at deliberately following standards and establishing them, respectively. That actually seems like something the rationalist community is pretty good at, but it might still be worth giving some thought to how to be better at it.
You’re missing the end of a word there.
Yeah, definitely correct that this is a necessary subskill and there are some groups of people where this is really hard because the meta-norms are all about freethinking individuality (and the rationality community comes from roots that often have that).
But I think enforcing norms for the first time mostly just (“just”) requires you to be generally good at public speaking: the sort of commanding tone you need to get people to do anything, norms-wise or otherwise. I think that’s an important skill for meetup organizers to acquire no matter what, and if you’ve got the “comfortable talking in front of people” thing down, the “get people to do things” part is easy-ish.
Including in a meetup announcement “this meetup will be following X norms”, and then saying it again at the beginning of the meetup usually works fairly well in my experience.
Communities are crutches: we need them for reasons, and that’s why they don’t just fall to pieces. You could talk to people who are already doing this. Some of them, at least, will say that they don’t like watchers (non-participants). And then how will you observe the thing you wanted to?