The Archipelago Model of Community Standards

Epistemic Status: My best guess. I don’t know if this will work but it seems like the obvious experiment to try more of.

Epistemic Effort: Spent several months thinking casually, 25ish minutes consolidating earlier memories and concerns, and maybe 10ish minutes thinking about potential predictions. See comment.

Building off:

Claim 1 - If you are dissatisfied with the norms/standards in a vaguely defined community, a good first step is to refactor that community into sub-groups with clearly defined goals and leadership.

Claim 2 - People have different goals, and you may be wrong about what norms are important even given a certain goal. So, also consider proactively cooperating with other people forming alternate subgroups out of the same parent group, with the goal of learning from each other.

Refactoring Into Subcommunities

Building groups that accomplish anything is hard. Building groups that prioritize independent thinking to solve novel problems is harder. But when faced with a hard problem, a useful technique is to refactor it into something simpler.

In “Open Problems in Group Rationality”, Conor lists several common tensions. I include them here for reference (although any combination of difficult group rationality problems would suffice to motivate this post).

  1. Buy-in and retention.

  2. Defection and discontent.

  3. Safety versus standards.

  4. Productivity versus relevance.

  5. Sovereignty versus cooperation.

  6. Moloch and the problem of distributed moral action.

These problems don’t go away when you have clearly defined goals. A corporation with a clearcut mission and strategy (e.g. maximize profit by selling widgets) still has to navigate the balance between “hold employees to a high standard to increase performance” and “make sure employees feel safe enough to do good work without getting wracked with anxiety” (or just quitting).

Such a corporation might make different tradeoffs in different situations—if there’s a labor surplus, it might be less worried about employees quitting, because it can just find more. If the job involves creative knowledge work, anxiety might have greater costs to productivity. Or maybe it’s not just profit-maximizing: maybe the CEO cares about employee mental health for its own sake.

But well-defined goals, with leaders who can enforce them, at least make it possible to figure out which tradeoffs to make and actually make them.

Whereas if you live in a loosely defined community where people show up and leave whenever they want, and nobody can even precisely agree on what the community is, you’ll have a lot more trouble.

People who care a lot about, say, personal sovereignty will constantly push for norms that maximize freedom. People who care about cooperation will push for norms encouraging everyone to work harder and be more reliable, at personal freedom’s expense.

Maybe one group can win—possibly by persuading everyone they are right, or simply by being more numerous. But:

A) You probably can’t win every cultural battle.

B) Even if you could, you’d spend a lot of time and energy fighting that might be better spent actually accomplishing whatever these norms are for.

So if you can manage to avoid infighting while still accomplishing your goals, all things being equal, that’s preferable.

Considering Archipelago

Once this thought occurred to me, I was immediately reminded of Scott Alexander’s Archipelago concept. A quick recap:

Imagine a bunch of factions fighting for political control over a country. They’ve agreed upon the strict principle of harm (no physically hurting or stealing from each other). But they still disagree on things like “does pornography harm people?”, “do cigarette ads harm people?”, “does homosexuality harm the institution of marriage, which in turn harms people?”, “does soda harm people?”, etc.

And this is bad not just because everyone wastes all this time fighting over norms, but because the nature of their disagreement incentivizes them to fight over what harm even is.

And this in turn incentivizes them to fight both over the definitions of words (distracting and time-wasting) and over what counts as evidence or good reasoning, through a politically motivated lens. (Which makes it harder to ever use evidence and reasoning to resolve issues, even uncontroversial ones.)

Now imagine someone discovers an archipelago of empty islands. Instead of continuing to fight, the people who want to live in Sciencetopia go off to found an island-state based on ideal scientific processes, the people who want to live in Libertopia found a society based on the strict principle of harm, and the people who want to live in Christiantopia found a fundamentalist Christian commune.

They agree on an overarching set of rules, paying some taxes to a central authority that handles things like “dumping pollutants into the oceans/air that would affect other islands” and “making sure children are well educated enough to have the opportunity to understand why they might consider moving to other islands.”

Practical Applications

There are a bunch of reasons the Archipelago concept doesn’t work as well in practice. There are no magical empty islands we can just take over. Leaving a place when you’re unhappy is harder than it sounds. Resolving the “think of the children” issue will be very contentious.

But we don’t need a perfect, idealized archipelago to make use of the general concept. We don’t even need a broad critical mass of change.

You, personally, could just do something with it, right now.

If you have an event you’re running, an online space you control, or an organization you lead, you can set the norms. Rather than opting by default into the generic average norms of your peers, you can say: “This is a space specifically for X. If you want to participate, you will need to hold yourself to Y particular standard.”

Some features and considerations:

You Can Test More Interesting Ideas. If a hundred people have to agree on something, you’ll only get to try things you can get 50+ people on board with (due to crowd inertia, regardless of whether you have a formal democracy).

But maybe you can get 10 people to try a more extreme experiment. (And if you share knowledge, both about experiments that work and ones that don’t, you can build the overall body of community knowledge in your social world.)

I would rather have a world where 100 people try 10 different experiments than one where they all settle on a single compromise, even if I disagree with most of those experiments and wouldn’t want to participate myself.

You Can Simplify the Problem and Isolate Experimental Variables. “Good” science tests a single variable at a time, so you can learn more about what causes what.

In practice, if you’re building an organization, you may not have time to do “proper science”—you may need to get a group working ASAP, and you may need to test a few ideas at once to have a chance at success.

But all things being equal, it’s still convenient to isolate factors as much as possible. One benefit of refactoring a community into smaller pieces is that you can pick more specific goals. Instead of reinventing every single wheel at once, pick a few specific axes you’re trying to learn about.

This will both make the problem easier and make it easier to learn from.

You Can ‘Timeshare Islands’. Maybe you don’t have an entire space that you can control. But maybe you and some other friends have a shared space (say, a weekly meetup).

Instead of having the meetup be a generic thing catering to the lowest common denominator of members, you can collectively agree to use it for experiments (at least sometimes). Make it easier for one person to say, ‘Okay, this week I’d like to run an activity that’ll require different norms than we’re used to. Please come prepared for things to be a bit different.’

This comes with some complications—one of the benefits of a recurring event is that people roughly know what to expect, so it may not be good to do this all the time. But generally, giving the person running a given event the authority to try out different norms can get you some of the benefits of the Archipelago concept.

You Can Start With Just One Meetup

Viliam made a note in the comments that I wanted to include here:

It is important to notice that the “island” doesn’t have to be fully built from the start. “Let’s start a new subgroup” sounds scary; too much responsibility, and possibly not enough status. “Let’s have one meeting where we try norm X and see how it works” sounds much easier; and if it works, people will be more willing to have another meeting like that, possibly leading to the creation of a new community.

Making It Through the ‘Unpleasant Valley’ of Group Experimentation

I think this graph was underappreciated in its original post. When people try new things (a new diet or exercise program, studying a new skill, etc.), the new thing involves effort and challenges that in some ways make it seem worse than whatever their default behavior was.

Some experiments are just duds. But oftentimes it merely feels like it’ll turn out to be a dud, because you’re in the Unpleasant Valley and just haven’t stuck with it long enough for it to bear fruit.

This is hard enough for solo experiments. For group experiments, where not just one but many people must all try a thing at once and get good at it, all it takes is a little defection to spiral into a mass exodus.

Refactoring communities into smaller groups with clear subgoals can make it possible for a group to make it through the Unpleasant Valley together.

Overlapping Social Spheres

Sharing Islands and Cross-Pollination

In the end, I don’t think “islands” is quite the right metaphor here. One of the things that makes a social archipelago different from the canonical example is that the islands overlap. People may be members of multiple groups and sub-groups.

A benefit of this is cross-pollination—it’s easier to share information and grow if you have people who exist in multiple subcultures (sub-subcultures?) and can translate ideas between them.

How much benefit this yields depends on how mindfully people approach the concept, and how much of their ideas they share (making both the object-level idea and the underlying reasons accessible to others).

This post is primarily intended as a reference—I have more specific ideas about what kinds of communities I want to participate in, and thoughts on “underexplored social niches” that others might consider experimenting with. Some of those thoughts will be on the LessWrong front page, others on my private profile or in the Meta section.

But meanwhile, I hope to see more groups of people in my filter bubble self-organizing, carving out spaces to try novel concepts.