Response to Glen Weyl on Technocracy and the Rationalist Community

Economist Glen Weyl has written a long essay, “Why I Am Not A Technocrat”, a major focus of which is his differences with the rationalist community.

I feel like I’ve read a decent number of outsider critiques of the rationalist community at this point, and Glen’s critique is pretty good. It has the typical outsider critique weakness of not being fully familiar with the subject of its criticism, balanced by the strength of seeing the rationalist community from a perspective we’re less familiar with.

As I was reading Glen’s essay, I took some quick notes. Afterwards I turned them into this post.

Glen’s Strongest Points

The fundamental problem with technocracy on which I will focus (as it is most easily understood within the technocratic worldview) is that formal systems of knowledge creation always have their limits and biases. They always leave out important considerations that are only discovered later and that often turn out to have a systematic relationship to the limited cultural and social experience of the groups developing them. They are thus subject to a wide range of failure modes that can be interpreted as reflecting on a mixture of corruption and incompetence of the technocratic elite. Only systems that leave a wide range of latitude for broader social input can avoid these failure modes.

So far, this sounds a lot like discussions I’ve seen previously of the book Seeing Like a State. But here’s where Glen goes further:

Yet allowing such social input requires simplification, distillation, collaboration and a relative reduction in the social status and monetary rewards allocated to technocrats compared to the rest of the population, thereby running directly against the technocratic ideology. While technical knowledge, appropriately communicated and distilled, has potentially great benefits in opening social imagination, it can only achieve this potential if it understands itself as part of a broader democratic conversation.

...

Technical insights and designs are best able to avoid this problem when, whatever their analytic provenance, they can be conveyed in a simple and clear way to the public, allowing them to be critiqued, recombined, and deployed by a variety of members of the public outside the technical class.

Technical experts therefore have a critical role precisely if they can make their technical insights part of a social and democratic conversation that stretches well beyond the role for democratic participation imagined by technocrats. Ensuring this role cannot be separated from the work of design.

...

[When] insulation is severe, even a deeply “well-intentioned” technocratic class is likely to have severe failures along the corruption dimension. Such a class is likely to develop a strong culture of defending its distinctive class expertise and status and will be insulated from external concerns about the justification for this status.

...

Market designers have, over the last 30 years designed auctions, school choice mechanisms, medical matching procedures, and other social institutions using tools like auction and matching theory, adapted to a variety of specific institutional settings by economic consultants. While the principles they use have an appearance of objectivity and fairness, they play out against the contexts of societies wildly different than those described in the models. Matching theory uses principles of justice intended to apply to an entire society as a template for designing the operation of a particular matching mechanism within, for example, a given school district, thereby in practice primarily shutting down crucial debates about desegregation, busing, taxes, and other actions needed to achieve educational fairness with a semblance of formal truth. Auction theory, based on static models without product market competition and with absolute private property rights and assuming no coordination of behavior across bidders, is used to design auctions to govern the incredibly dynamic world of spectrum allocation, creating holdout problems, reducing competition, and creating huge payouts for those able to coordinate to game the auctions, often themselves market design experts friendly with the designers. The complexities that arise in the process serve to make such mass-scale privatizations, often primarily to the benefit of these connected players and at the expense of the taxpayer, appear the “objectively” correct and politically unimpeachable solution.

...

[Mechanism] designers must explicitly recognize and design for the fact that there is critical information necessary to make their designs succeed that a) lies in the minds of citizens outside the technocratic/designer class, b) will not be translated into the language of this class soon enough to avoid disastrous outcomes and c) does not fit into the thin formalism that designers allow for societal input.

...

In order to allow these failures to be corrected, it will be necessary for the designed system to be comprehensible by those outside the formal community, so they can incorporate the unformalized information through critique, reuse, recombination and broader conversation in informal language. Let us call this goal “legibility”.

...

There will in general be a trade-off between fidelity and legibility, just as both will have to be traded off against optimality. Systems that are true to the world will tend to become complicated and thus illegible.

...

Democratic designers thus must constantly attend, on equal footing, in teams or individually, to both the technical and communicative aspects of their work.

(Please let me know if you think I left out something critical.)

A famous quote about open source software development states that “given enough eyeballs, all bugs are shallow”. Nowadays, with critical security bugs in open-source software like Heartbleed, the spirit of this claim isn’t taken for granted anymore. One Hacker News user writes: “[De facto eyeball shortage] becomes even more dire when you look at code no one wants to touch. Like TLS. There were the Heartbleed and goto fail bugs which existed for, IIRC, a few years before they were discovered. Not surprising, because TLS code is generally some of the worst code on the planet to stare at all day.”

In other words, if you want critical feedback on your open source project, it’s not enough just to put it out there and have lots of users. You also want to make the source code as accessible as possible—and this may mean compromising on other aspects of the design.
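
To make that concrete, here is a loose, hypothetical Python analog of the “goto fail” class of bug (the real bug was in C and involved a duplicated goto statement; nothing below is the actual TLS code): an unconditional return pasted one block too early silently skips the final check, and the dead code beneath it is exactly the kind of thing that gets overlooked in a long, dense verification routine.

```python
# Hypothetical sketch, not the actual TLS code: an early "return True" pasted
# one block too early means hash_ok is never consulted.

def verify_handshake(cert_ok: bool, sig_ok: bool, hash_ok: bool) -> bool:
    if not cert_ok:
        return False
    if not sig_ok:
        return False
    return True  # <- misplaced: verification "succeeds" without the final check
    if not hash_ok:  # dead code from here down, easy to miss in a dense function
        return False
    return True

# The flaw only shows up when the *last* check is the one that should fail:
print(verify_handshake(cert_ok=True, sig_ok=True, hash_ok=False))  # True, but should be False
```

Whether a reviewer can even notice this kind of slip depends heavily on how legible the surrounding code is.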

Academic or other in-group status games may encourage the use of big words. But we’d be better off rewarding simple explanations—not only are simple explanations more accessible, they also demonstrate deeper understanding. If we appreciated simplicity properly:

  • We’d incentivize the creation of more simple explanations, promoting accessibility. And people wouldn’t dismiss simple explanations for being “too obvious”.

  • Intellectuals would realize that even if a simple idea required lots of effort to discover, it need not require lots of effort to grasp. Verification is much quicker than search (see the short sketch after this list).
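
As a minimal illustration of that last point (my example, not Glen’s): checking a claimed factorization of a number takes a single multiplication, while finding the factors by brute force takes work that grows quickly with the size of the number.

```python
# Toy illustration: verifying a proposed answer is cheap; searching for it is not.

def verify_factors(n: int, p: int, q: int) -> bool:
    """Verification: one multiplication plus two triviality checks."""
    return p * q == n and p > 1 and q > 1

def find_factors(n: int) -> tuple[int, int]:
    """Search: brute-force trial division, far slower for large n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

n = 1_000_003 * 1_000_033       # pretend these factors took real effort to "discover"
p, q = find_factors(n)          # the slow part: search
print(verify_factors(n, p, q))  # the fast part: verification -> True
```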

At the very least, I think, Glen wants our institutions to be like highly usable software: the internals require expertise to create and understand, but from a user’s perspective, it “just works” and does what you expect.

Another point Glen makes well is that just because you are in the institution design business does not mean you’re immune to incentives. The importance of self-skepticism regarding one’s own incentives has been discussed before around here, but this recent post probably comes closest to Glen’s position: that you really can’t be trusted to monitor yourself.

Finally, Glen talks about the insularity of the rationalist community itself. I think this critique was true in the past. I haven’t been interacting with the community in person as much over the past few years, so I hesitate to talk about the present, but I think he’s plausibly still right. I also think there may be an interesting counterargument that the rationalist community does a better job of integrating perspectives across multiple disciplines than your average academic department.

Possible Points of Disagreement

Although I think Glen would find some common ground with the recent post I linked, it’s possible he would also find points of disagreement. In particular, habryka writes:

Highlighting accountability as a variable also highlights one of the biggest error modes of accountability and integrity – choosing too broad of an audience to hold yourself accountable to.

There is a tradeoff between the size of the group that you are being held accountable by, and the complexity of the ethical principles you can act under. Too large of an audience, and you will be held accountable by the lowest common denominator of your values, which will rarely align well with what you actually think is moral (if you’ve done any kind of real reflection on moral principles).

Too small or too memetically close of an audience, and you risk not enough people paying attention to what you do, to actually help you notice inconsistencies in your stated beliefs and actions. And, the smaller the group that is holding you accountable is, the smaller your inner circle of trust, which reduces the amount of total resources that can be coordinated under your shared principles.

I think a major mistake that even many well-intentioned organizations make is to try to be held accountable by some vague conception of “the public”. As they make public statements, someone in the public will misunderstand them, causing a spiral of less communication, resulting in more misunderstandings, resulting in even less communication, culminating into an organization that is completely opaque about any of its actions and intentions, with the only communication being filtered by a PR department that has little interest in the observers acquiring any beliefs that resemble reality.

I think a generally better setup is to choose a much smaller group of people that you trust to evaluate your actions very closely, and ideally do so in a way that is itself transparent to a broader audience. Common versions of this are auditors, as well as nonprofit boards that try to ensure the integrity of an organization.

Common wisdom is that it’s impossible to please everyone. And specialization of labor is a foundational principle of modern society. If I took my role as a member of “the public” seriously and tried to provide meaningful and fair accountability to everyone, I wouldn’t have time to do anything else.

It’s interesting that Glen talks up the value of “legibility”, because from what I understand, Seeing Like a State emphasizes its disadvantages. Seeing Like a State discusses legibility in the eyes of state administrators, but Glen doesn’t explain why we shouldn’t expect similar failure modes when “the general public” is substituted for “state administration”.

(It’s possible that Glen doesn’t mean “legibility” in the same sense the book does, and a different term like “institutional legibility” would pinpoint what he’s getting at. But there’s still the question of whether we should expect optimizing for “institutional legibility” to be risk-free, after having observed that “societal legibility” has downsides. Glen seems to interpret recent political events as a result of excess technocracy, but they could also be seen as a result of excess populism—a leader’s charisma could be more “legible” to the public than their competence.)

Anyway, I assume Glen is aware of these issues and working to solve them. I’m no expert, but from what I’ve heard of RadicalxChange, it seems like a really cool project. I’ll offer my own uninformed outsider’s perspective on institution design, in the hope that the conceptual raw material will prove useful to him or others.

My Take on Institution Design

I think there’s another model which does a decent job of explaining the data Glen provides:

  • Human systems are complicated.

  • Greed finds & exploits flaws in institutions, causing them to decay over time.

  • There are no silver bullets.

From the perspective of this model, Glen’s emphasis on legibility could be seen as yet another purported silver bullet. However, I don’t see a compelling reason for it to succeed where previous bullets failed. How, concretely, are random folks like me supposed to help address the corruption Glen identifies in the wireless spectrum allocation process? There seems to be a bit of a disconnect between Glen’s description of the problem and his description of the solution. (Later Glen mentions the value of “humanities, continental philosophy, or humanistic social sciences”—I’d be interested to hear specific, less commonly known ideas from these areas that he thinks are important and relevant for institution design.)

As a recent & related example, a decade or two ago many people were talking about how the Internet would revitalize & strengthen democracy; nowadays I’d guess most would agree that the Internet has failed as a silver bullet in this regard. (In fact, sometimes I get the impression this is the only thing we can all agree on!)

Anyway… What do I think we should do?

  • All untested institution designs have flaws.

  • The challenge of institution design is to identify & fix flaws as cheaply as possible, ideally before the design goes into production.

Under this framework, it’s not enough merely to have the approval of a large number of people. If these people have similar perspectives, their inability to identify flaws offers limited evidence about the overall robustness of the design.
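
As a rough, made-up illustration of why similar perspectives weaken this evidence: if each reviewer independently overlooks a given flaw half the time, ten independent reviewers all miss it about 0.1% of the time, but ten reviewers who share the same blind spot miss it together 50% of the time.

```python
# Toy numbers, purely illustrative: how much "N people looked at it and found
# nothing" is worth depends on how correlated their blind spots are.

p_miss = 0.5       # assumed chance that one reviewer overlooks a given flaw
n_reviewers = 10

independent = p_miss ** n_reviewers  # every reviewer misses it independently
correlated = p_miss                  # shared blind spot: effectively one look

print(f"10 independent reviewers all miss the flaw: {independent:.2%}")  # ~0.10%
print(f"10 like-minded reviewers all miss the flaw: {correlated:.2%}")   # 50.00%
```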

Legibility is useful for flaw discovery in this framework, just as cleaner code could’ve been useful for surfacing flaws like Heartbleed. But there are other strategies available too, like offering bug bounties for the best available critiques.

Experiments and field trials are a bit more expensive, but it’s critical to actually try things out and to resolve disagreements among bug bounty participants. Then there’s the “resume-building” stage of trialing one’s institution on an increasingly large scale in the real world. I’d argue one should aim to have all the kinks worked out before “resume-building” starts, but of course, it’s important to monitor the roll-out for problems which might emerge—and ideally, the institution should itself have means by which it can be patched “in production” (which should get tested during experimentation & field trials).
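
Here’s a toy sketch of the staged process above (entirely my own framing; the stage names, catch rates, and costs are made up): flaws caught in the cheap early stages never incur the cost of failing at full scale, which is the whole argument for front-loading critique and experimentation.

```python
import random

# Toy model: each flaw surfaces at the first stage whose check catches it, and
# costs more to deal with the later that happens. All numbers are invented.

STAGES = [
    # (stage name,            chance it catches a flaw, cost if the flaw surfaces there)
    ("critiques / bug bounty", 0.5,      1),
    ("experiments",            0.6,     10),
    ("field trials",           0.7,    100),
    ("full-scale roll-out",    0.9, 10_000),
]

def expected_cost(stages, n_flaws=20, trials=5_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        for _flaw in range(n_flaws):
            for _name, p_catch, cost in stages:
                if rng.random() < p_catch:
                    total += cost
                    break
            else:
                total += stages[-1][2]  # never caught early: it bites at full scale anyway
    return total / trials

print("average cost, all stages in place:  ", round(expected_cost(STAGES)))
print("average cost, skipping cheap stages:", round(expected_cost(STAGES[2:])))
```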

The process I just described could itself be seen as an untested institution which is probably flawed and needs critiques, experiments, and field testing. (For example, bug bounties don’t do anything on their own for legibility—how can we incentivize the production of clear explanations of the institution design in need of critiques?) Taking everything meta, and designing an institutional framework for introducing new institutions, is the real silver bullet if you ask me :-)

Probable Points of Disagreement

Given Glen’s belief in the difficulty of knowledge creation, the importance of local knowledge, and the limitations of outside perspectives, I hope he won’t be upset to learn that I think he got a few things wrong about the rationalist community. (I also think he got some things wrong about the EA community, but I believe he’s working to fix those issues, so I won’t address them.)

Glen writes:

if we want to have AIs that can play a productive role in society, our goal should not be exclusively or even primarily to align them with the goals of their creators or the narrow rationalist community interested in the AIAP.

This doesn’t appear to be a difference of opinion with the rationalist community. In Eliezer’s CEV paper, he writes about the “coherent extrapolated volition of humankind”, not the “coherent extrapolated volition of the rationalist community”.

However, now that MIRI’s research is non-disclosed by default, I wonder if it would be wise for them to publicly state that their research is for the benefit of all, in a charter like OpenAI has, rather than in a paper published in 2004.

Glen writes:

The institutions likely to achieve [constraints on an AI’s power] are precisely the same sorts of institutions necessary to constrain extreme capitalist or state power.

An unaligned superintelligent AI which can build advanced nanotechnology has no need to follow human laws. On the flip side, an aligned superintelligent AI can design better institutions for aggregating our knowledge & preferences than any human could.

Glen writes:

A primary goal of AI design should be not just alignment, but legibility, to ensure that the humans interacting with the AI know its goals and failure modes, allowing critique, reuse, constraint etc. Such a focus, while largely alien to research on AI and on AIAP

This actually appears to me to be one of the primary goals of AI alignment research. See 2.3 in this paper or this parable. It’s not alien to mainstream AI research either: see research on explainability and interpretability (pro tip: interpretability is better).

In any case, if the alignment problem is actually solved, legibility isn’t needed, because we know exactly what the system’s goals are: the goals we gave it.

Conclusion

As I said previously, I have not investigated RadicalxChange in very much depth, but my superficial impression is that it is really cool. I think it could be an extremely high leverage project in a world where AGI doesn’t come for a while, or gets invented slowly over time. My personal focus is on scenarios where AGI is invented relatively rapidly relatively soon, but sometimes I wonder whether I should focus on the kind of work Glen does. In any case, I am rooting for him, and I hope his movement does an astonishing job of inventing and popularizing nearly flawless institution designs.