Permissions in Governance

Compliance Costs

The burden of a rule can be separated into (at least) two components.

First, there’s the direct opportunity cost of not being allowed to do the things the rule forbids. (We can include here the penalties for violating the rule.)

Second, there’s the “cost of compliance”, the effort spent on finding out what is permitted vs. forbidden and demonstrating that you are only doing the permitted things.

Separating these is useful. You can, at least in principle, aim to reduce the compliance costs of a rule without making it less stringent.

For instance, you could aim to simplify the documentation requirements for environmental impact assessments, without relaxing standards for pollution or safety. “Streamlining” or “simplifying” regulations aims to reduce compliance costs, without necessarily lowering standards or softening penalties.

If your goal in making a rule is to avoid or reduce some unwanted behavior — for instance, to reduce the amount of toxic pollution people and animals are exposed to — then shifting your pollution standards up or down is a zero-sum tradeoff between your environmental goals and the convenience of polluters.

Reducing the costs of compliance, on the other hand, is positive-sum: it saves money for developers without increasing pollution levels. Everybody wins. You’d intuitively expect rulemakers to want to do this wherever possible.

Of course, this assumes an idealized world where the only goal of a prohibition is to reduce the total amount of prohibited behavior.

You might want compliance costs to be high if you’re using the rule, not to reduce incidence of the forbidden behavior, but to produce distinctions between people — i.e. to separate the extremely committed from the casual, so you can reward them relative to others. Costly signals are good if you’re playing a competitive zero-sum game; they induce variance because not everyone is able or willing to pay the cost.

For instance, some theories of sexual selection (such as the handicap principle) argue that we evolved traits which are not beneficial in themselves but are sensitive indicators of whether or not we have other fitness-enhancing traits. E.g. a peacock’s tail is so heavy and showy that only the strongest and healthiest and best-fed birds can afford to maintain it. The tail magnifies variance, making it easier for peahens to distinguish otherwise small variations in the health of potential mates.

Such “magnifying glasses for small flaws” are useful in situations where you need to pick “winners” and can inherently only choose a few. Sexual selection is an example of such a situation, as females have biological limits on how many children they can bear per lifetime; there is a fixed number of males they can reproduce with. So it’s a zero-sum situation, as males are competing for a fixed number of breeding slots. Other competitions for fixed prizes are similar in structure, and likewise tend to evolve expensive signals of commitment or quality. A test that’s so easy anyone can pass it is useless for identifying the top 1%.

On a regulatory-capture or spoils-based account of politics, where politics (including regulation) is seen as a negotiation to divide up a fixed pool of resources, and loyalty/trust is important in repeated negotiations, high compliance costs are easy to explain. They prevent diluting the spoils among too many people, and create variance in people’s ability to comply, which allows you to be selective along whatever dimension you care about.

Competitive (selective, zero-sum) processes work better when there’s wide variance among people. A rule (or boundary, or incentive) that’s meant to minimize an undesired behavior is, by contrast, looking at aggregate outcomes. If you can make it easier for people to do the desired behavior and refrain from the undesired, you’ll get better aggregate behavior, all else being equal. These goals are, in a sense, “democratic” or “anti-elitist”; if you just care about total aggregate outcomes, then you want good behavior to be broadly accessible.

Requiring Permission Raises Compliance Costs

A straightforward way of avoiding undesired behavior is to require people to ask an authority’s permission before acting.

This has advantages: sometimes “undesired behavior” is a complex, situational thing that’s hard to codify into a rule, so the discretionary judgment of a human can do better than a rigid rule.

One disadvantage that I think people underestimate, however, is the chilling effect it has on desired behavior.

For instance:

  • If you have to ask the boss’s permission individually for each purchase, no matter how cheap, not only will you waste a lot of your employees’ time, but you’ll disincentivize them from asking for even cost-effective purchases, which can be more costly in the long run.

  • If you require a doctor’s appointment every time pain medication is given, to guard against drug abuse, you’re going to see a lot of people who really do have chronic pain doing without medication, because they don’t want the anxiety of going to a doctor and being suspected of “drug-seeking”.

  • If you have to get permission before cleaning or contributing supplies for a shared space, then that space will be chronically under-cleaned and under-supplied.

  • If you have to get permission from a superior in order to stop the production line to fix a problem, then safety risks and defective products will get overlooked. (This is why Toyota mandated that any worker can unilaterally stop the production line.)

The inhibition against asking for permission is going to be strongest for shy people who “don’t want to be a bother” — i.e. those who are most conscious of the effects of their actions on others, and perhaps those who you’d most want to encourage to act. Those who don’t care about bothering you are going to be undaunted, and will flood you with unreasonable requests. A system where you have to ask a human’s permission before doing anything is an asshole filter, in Siderea’s terminology; it empowers assholes and disadvantages everyone else.

The direct costs of a rule fall only on those who violate it (or wish they could); the compliance costs fall on everyone. A system of enforcement that preferentially inhibits desired behavior (while not being that reliable in restricting undesired behavior) is even worse from an efficiency perspective than a high compliance cost on everyone.

Impersonal Boundaries

An alternative is to instantiate your boundaries in an inanimate object — something that can’t intimidate shy people or cave to pressure from entitled jerks. For instance:

  • a lock on a door is an inanimate boundary on space

  • a set of password-protected permissions on a filesystem is an inanimate boundary on information access

  • a departmental budget and a credit card with a fixed spending limit is an inanimate boundary on spending

  • an electricity source that shuts off automatically when you don’t pay your bill is an inanimate boundary against theft
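The spending-limit example above can be sketched in code. This is a toy illustration of my own (the class and method names are hypothetical, not from the post): an account object that refuses over-limit charges the same way every time, regardless of who asks or how often:

```python
class SpendingLimitCard:
    """A toy model of an impersonal boundary: a card with a fixed limit.

    It never negotiates, never looks disappointed, and never caves to
    repeated requests -- it just applies the same rule every time.
    """

    def __init__(self, limit):
        self.limit = limit
        self.spent = 0

    def charge(self, amount):
        """Approve the charge only if it stays within the limit."""
        if self.spent + amount > self.limit:
            return False  # refused identically, no matter who asks or how often
        self.spent += amount
        return True


card = SpendingLimitCard(limit=1000)
assert card.charge(600) is True    # within budget: approved
assert card.charge(600) is False   # would exceed the limit: refused
assert card.charge(600) is False   # asking again changes nothing
assert card.charge(300) is True    # a smaller request still fits
```

The last three lines are the point: unlike a human gatekeeper, a refused request repeated more forcefully gets exactly the same answer, and a reasonable request faces no social friction at all.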

The key element here isn’t information-theoretic simplicity, as in the debate over simple rules vs. discretion. Inanimate boundaries can be complex and opaque. They can be a black box to the user.

The key elements are that, unlike humans, inanimate boundaries do not punish requests that are refused (even socially, by wearing a disappointed facial expression), and they do not give in to repeated or more forceful requests.

An inanimate boundary is, rather, like the ideal version of a human maintaining a boundary in an “assertive” fashion; it enforces the boundary reliably and patiently and without emotion.

This way, it produces less inhibition in shy or empathetic people (who hate to make requests that could make someone unhappy) and is less vulnerable to pushy people (who browbeat others into compromising on boundaries).

In fact, you can get some of the benefits of an inanimate boundary without actually taking a human out of the loop, just by reducing the bandwidth for social signals — by using email instead of in-person communication, for instance, or by using formalized scripts and impersonal terminology. Distancing tactics make it easier to refuse requests and easier to make requests; if these effects are roughly the same in magnitude, you get a system that selects more effectively for enabling desired behavior and preventing undesired behavior. (Of course, when you have one permission-granter and many permission-seekers, the effects are not the same in aggregate magnitude; the permission-granter can get spammed by tons of unreasonable requests.)

Of course, if you’re trying to select for transgressiveness — if you want to reward people who are too savvy to follow the official rules and too stubborn to take no for an answer — you’d want to do the opposite: have an automated, impersonal filter to block or intimidate the dutiful, and an extremely personal, intimate, psychologically grueling test for the exceptional. But in this case, what you’ve set up is a competitive test to differentiate between people, not a rule or boundary which you’d like followed as widely as possible.

Consensus and Do-Ocracy

So far, the systems we’ve talked about are centralized, and described from the perspective of an authority figure. Given that you, the authority, want to achieve some goal, how should you most effectively enforce or incentivize desired activity?

But, of course, that’s not the only perspective one might take. You could instead take the perspective that everybody has goals, with no a priori reason to prefer one person’s goals to anyone else’s (without knowing what the goals are), and model the situation as a group deliberating on how to make decisions.

Consensus represents the egalitarian-group version of permission-asking. Before an action is taken, the group must discuss it, and must agree (by majority vote, or unanimous consent, or some other aggregation mechanism) that it’s sufficiently widely accepted.

This has all of the typical flaws of asking permission from an authority figure, with the added problem that groups can take longer to come to consensus than a single authority takes to make a go/no-go decision. Consensus decision processes inhibit action.

(Of course, sometimes that’s exactly what you want. We have jury trials to prevent giving criminal penalties lightly or without deliberation.)

An alternative, equally egalitarian structure is what some hackerspaces call do-ocracy.

In a do-ocracy, everyone has authority to act, unilaterally. If you think something should be done, like rearranging the tables in a shared space, you do it. No need to ask for permission.

There might be disputes when someone objects to your actions, which have to be resolved in some way. But this is basically the only situation where governance enters into a do-ocracy. Consensus decisionmaking is an informal version of a legislative or executive body; do-ocracy is an informal version of a judicial system. Instead of needing governance every time someone acts, in a judicial-only system you only need governance every time someone acts (or states an intention to act) AND someone else objects.

The primary advantage of do-ocracy is that it doesn’t slow down actions in the majority of cases where nobody minds. There’s no friction, no barrier to taking initiative. You don’t have tasks lying undone because nobody knows “whose job” they are. Additionally, it grants the most power to the most active participants, which intuitively has a kind of fairness to it, especially in voluntary clubs that have a lot of passive members who barely engage at all.

The disadvantages of do-ocracy are exactly the same as its advantages. First of all, any action which is potentially harmful and hard to reverse (including, of course, dangerous accidents and violence) can be unilaterally initiated, and do-ocracy cannot prevent it, only remediate it after the fact (or penalize the agent). Do-ocracies don’t deal well with very severe, irreversible risks. When they have to, they evolve permission-based functions; for instance, the rules firms or insurance companies institute to prevent risky activities that could lead to lawsuits.

Secondly, do-ocracies grant the most power to the most active participants, which often means those who have the most time on their hands, or who are closest to the action, at the expense of absent stakeholders. This means, for instance, it favors a firm’s executives (who engage in day-to-day activity) over investors or donors or the general public; in volunteer and political organizations it favors those who have more free time to participate (retirees, students, the unemployed, the independently wealthy) over those who have less (working adults, parents). The general phenomenon here is principal-agent problems — theft, self-dealing, negligence, all cases where the people who are physically there and acting take unfair advantage of the people who are absent and not in the loop, but depend on things remaining okay.

A judicial system doesn’t help those who don’t know they’ve been wronged.

Consensus systems, in fact, are designed to force governance to include or represent all the stakeholders — even those who would, by default, not take the initiative to participate.

Consumer-product companies mostly have do-ocratic power over their users. It’s possible to quit Facebook with the touch of a button. Facebook changes its algorithms, often in ways users don’t like — but, in most cases, people don’t hate the changes enough to quit. Facebook makes use of personal data — after putting up a dialog box requesting permission to use it. Yet some people are dissatisfied and feel like Facebook is too powerful, like it’s hacking into their baser instincts, like this wasn’t what they’d wanted. But Facebook hasn’t harmed them in any way they didn’t, in a sense, consent to. The issue is that Facebook was doing things they didn’t reflectively approve of while they weren’t paying attention. Not secretly — none of this was secret, it just wasn’t on their minds, until suddenly a big media firestorm put it there.

You can get a lot of power to shape human behavior just by showing up, knowing what you want, and enacting it before anyone else has thought about it enough to object. That’s the side of do-ocracy that freaks people out. Wherever in your life you’re running on autopilot, an adversarial innovator can take a bite out of you and get away with it long before you notice something’s wrong.

This is another part of the appeal of permission-based systems, whether egalitarian or authoritarian; if you have to make a high-touch, human connection with me and get my permission before acting, I’m more likely to notice changes that are bad in ways I didn’t have any prior model of. If I’m sufficiently cautious or pessimistic, I might even be OK with the cost of a chilling effect on harmless actions, so long as I make sure I’m sensitive to new kinds of shenanigans that can’t be captured in pre-existing rules. If I don’t know what I want exactly, but I expect change to be bad, I’m going to be much more drawn to permission-based systems than if I know exactly what I want, or if I expect typical actions to be improvements.