Leto among the Machines

I’ve always been sur­prised that there’s not more dis­cus­sion of Dune in ra­tio­nal­ist cir­cles, es­pe­cially con­sid­er­ing that:


1. It’s a book all about peo­ple im­prov­ing their minds to the point where they be­come su­per­hu­man.

2. It’s set in a world where AI Goal Align­ment is­sues are not only widely un­der­stood, but are in­te­grated into the foun­da­tion of ev­ery so­ciety.

3. It’s ecolog­i­cal sci­ence fic­tion — ded­i­cated to “the dry-land ecol­o­gists, wher­ever they may be” — but what that se­cretly means is that it’s a se­ries of nov­els about ex­is­ten­tial risk, and con­sid­ers the prob­lem on a timescale of tens of thou­sands of years.


For those of you who are not fa­mil­iar, Dune is set about 20,000 years in the fu­ture. About 10,000 years be­fore the events of the first book, Strong Ar­tifi­cial In­tel­li­gence was de­vel­oped. As one might ex­pect, hu­man­ity nearly went ex­tinct. But we pul­led to­gether and waged a 100-year war against the ma­chines, a strug­gle known as the But­le­rian Jihad (This is why a but­ler is “one who de­stroys in­tel­li­gent ma­chines”). We suc­ceeded, but only barely, and the mem­ory of the strug­gle was em­bed­ded deep within the hu­man psy­che. Every re­li­gion and ev­ery cul­ture set up pro­hi­bi­tions against “think­ing ma­chines”. This was so suc­cess­ful that the next ten mil­len­nia saw ab­solutely no ad­vances in com­put­ing, and de­spite the huge po­ten­tial benefits of defec­tion, co­or­di­na­tion was strong enough to pre­vent any re­sur­gence of com­put­ing tech­nol­ogy.

Sur­pris­ingly, the pro­hi­bi­tion against “think­ing ma­chines” ap­pears to ex­tend not only to what we would con­sider to be Strong AI, but also to com­put­ers of all sorts. There is ev­i­dence that de­vices for record­ing jour­nals (via voice record­ing?) and do­ing ba­sic ar­ith­metic were out­lawed as well. The sug­ges­tion is that there is not a sin­gle me­chan­i­cal calcu­la­tor or elec­tronic mem­ory-stor­age de­vice in the en­tire Im­perium. There are ad­vanced tech­nolo­gies, but noth­ing re­motely like com­put­ers — the Orange Catholic Bible is printed on “fila­ment pa­per”, not stored on a Kin­dle.

While I appreciate the existential threat posed by Strong AI, I’ve always been confused about the proscription against more basic forms of automation. The TI-81 is pretty helpful and not at all threatening. Storing records on paper or filament paper has serious downsides. Why does this society hamstring itself in this way?

The characters have a good deal to say about the Butlerian Jihad, but their answers have always struck me as somewhat confusing:

Once men turned their think­ing over to ma­chines in the hope that this would set them free. But that only per­mit­ted other men with ma­chines to en­slave them. (Rev­erend Mother Gaius He­len Mo­hiam)

And:

What do such ma­chines re­ally do? They in­crease the num­ber of things we can do with­out think­ing. Things we do with­out think­ing — there’s the real dan­ger. (Leto Atrei­des II)

These quotes don’t suggest that the literal threat of extinction was the only reason for the Jihad. In fact, according to these major characters, it wasn’t even the primary reason.

This is not to say that extinction risk isn’t on their minds. Here’s another idea discussed in the books, and condemned for its obvious x-risk issues:

The Ix­i­ans con­tem­plated mak­ing a weapon—a type of hunter-seeker, self-pro­pel­led death with a ma­chine mind. It was to be de­signed as a self-im­prov­ing thing which would seek out life and re­duce that life to its in­or­ganic mat­ter. (Leto Atrei­des II)

Or, more ex­plic­itly:

Without me there would have been by now no peo­ple any­where, none what­so­ever. And the path to that ex­tinc­tion was more hideous than your wildest imag­in­ings. (Leto Atrei­des II)

But clearly ex­tinc­tion risk isn’t the only thing driv­ing the pro­scrip­tion against think­ing ma­chines. If it were, then we’d still have our pocket calcu­la­tors and still be able to in­dex our libraries us­ing elec­tronic databases. But this so­ciety has out­lawed even these rel­a­tively sim­ple ma­chines. Why?


Good­hart’s law states that when a mea­sure be­comes a tar­get, it ceases to be a good mea­sure.

What this means is that the act of defining a standard almost always undermines the goal the standard was meant to capture. If you make a test for aptitude, teachers and parents will teach to the test. The parents with the most resources — the richest, most intelligent, most well-connected — will find ways to get their children ahead at the expense of everyone else. If you require a specific degree to get a particular job, degree-granting institutions will compete to make the degree easier and easier to acquire, to the point where it no longer indicates quality. If supply is at all limited, then the job-seekers who are richest, most intelligent, or most well-connected will be the ones who can get the degree. If you set a particular critical threshold for a statistical measure (*cough*), researchers will sacrifice whatever other virtues their research might have in pursuit of reaching that threshold.
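
To make the dynamic concrete, here is a toy simulation in Python. It is purely my own illustration (nothing in it comes from Herbert or from the posts discussed below), with made-up numbers: candidates can raise a test score either through genuine aptitude or by gaming the test with resources, and once gaming is possible, selecting on the score largely stops selecting for aptitude.

```python
import random

random.seed(0)

def make_candidates(n, gaming_effort):
    """Each candidate has a true aptitude and a test score.

    With gaming_effort == 0 the score is an honest (noisy) measure of
    aptitude. With gaming_effort > 0, candidates can also buy points
    with resources that have nothing to do with aptitude.
    """
    candidates = []
    for _ in range(n):
        aptitude = random.gauss(100, 15)
        resources = random.random()  # unrelated to aptitude
        score = aptitude + random.gauss(0, 5) + gaming_effort * 30 * resources
        candidates.append((aptitude, score))
    return candidates

def mean_aptitude_of_selected(candidates, top_k=10):
    """Average true aptitude of the candidates chosen purely by score."""
    chosen = sorted(candidates, key=lambda c: c[1], reverse=True)[:top_k]
    return sum(aptitude for aptitude, _ in chosen) / top_k

honest = mean_aptitude_of_selected(make_candidates(1000, gaming_effort=0.0))
gamed = mean_aptitude_of_selected(make_candidates(1000, gaming_effort=1.0))
print(f"selected aptitude when the score is honest: {honest:.1f}")
print(f"selected aptitude once the score is gamed:  {gamed:.1f}")
```

The more of the score that can be bought rather than earned, the worse the selection gets, which is the “richest and best-connected get ahead” pattern described above.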

Govern­ments, if they en­dure, always tend in­creas­ingly to­ward aris­to­cratic forms. No gov­ern­ment in his­tory has been known to evade this pat­tern. And as the aris­toc­racy de­vel­ops, gov­ern­ment tends more and more to act ex­clu­sively in the in­ter­ests of the rul­ing class — whether that class be hered­i­tary roy­alty, oli­garchs of fi­nan­cial em­pires, or en­trenched bu­reau­cracy. (Bene Gesserit Train­ing Man­ual)

One of the most im­por­tant things we know from AI Align­ment work is that defin­ing a rule or stan­dard that can’t be mis­in­ter­preted is very tricky. An in­tel­li­gent agent will work very hard to max­i­mize its own util­ity func­tion, and will find clever ways around any rules you throw in its way.

One of the ways we have been short-sighted is in think­ing that this ap­plies only to strong or gen­eral ar­tifi­cial in­tel­li­gences. Hu­mans are strong gen­eral in­tel­li­gences; if you put rules or stan­dards in their way, they will work very hard to max­i­mize their own util­ity func­tions and will find clever ways around the rules. Good­hart’s law is the AI Align­ment prob­lem ap­plied to other peo­ple.

(“The real AI Alignment problem is other people?”)

It’s been proposed that this issue is the serpent gnawing at the root of our culture. The long and somewhat confusing version of the argument is here. I would strongly recommend that you read first (or instead) this summary by Nabil ad Dajjal. As Scott says, “if only there were something in between Nabil’s length and Concierge’s”, but reading the two I think we can get a pretty good picture.

Here are the first points, from Na­bil:

There is a four-step pro­cess which has in­fected and hol­lowed out the en­tirety of mod­ern so­ciety. It af­fects ev­ery­thing from school and work to friend­ships and dat­ing.
In step one, a bu­reau­crat or a com­puter needs to make a de­ci­sion be­tween two or more can­di­dates. It needs a leg­ible sig­nal. Sig­nal­ing (see Robin Han­son) means mak­ing a dis­play of a de­sired char­ac­ter­is­tic which is ex­pen­sive or oth­er­wise difficult to fake with­out that char­ac­ter­is­tic; leg­i­bil­ity (see James Scott) means that the dis­play is mea­surable and doesn’t re­quire lo­cal knowl­edge or con­text to in­ter­pret.

I will re­sist quot­ing it in full. Se­ri­ously, go read it, it’s pretty short.

When I finished read­ing this ex­pla­na­tion, I had a re­li­gious epiphany. This is what the But­le­rian Jihad was about. While AI may liter­ally pre­sent an ex­tinc­tion risk be­cause of its po­ten­tial de­sire to use the atoms in our bod­ies for its own pur­poses, lesser forms of AI — in­clud­ing some­thing as sim­ple as a de­vice that can com­pare two num­bers! — are dan­ger­ous be­cause of their need for leg­ible sig­nals.

In fact, the simpler the agent is, the more dangerous it is, because simple systems need their signals to be extremely legible. Agents that make decisions based on legible signals are extra susceptible to Goodhart’s law, and accelerate us on our way to the signaling catastrophe/race to the bottom/end of all that is right and good/etc.

As Na­bil ad Da­j­jal points out, this is true for bu­reau­crats as well as for ma­chines. It doesn’t re­quire what we nor­mally think of as a “com­puter”. Any­thing that uses a leg­ible sig­nal to make a de­ci­sion with no or lit­tle flex­i­bil­ity will con­tribute to this prob­lem.

The tar­get of the Jihad was a ma­chine-at­ti­tude as much as the ma­chines. (Leto Atrei­des II)

As a strong ex­am­ple, con­sider Scott Aaron­son’s re­view of Inad­e­quate Equil­ibria, where he says:

In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to. It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been. I simply had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.

To­gether, this sug­gests a sur­pris­ing con­clu­sion: Ra­tion­al­ists should be against au­toma­tion. I sus­pect that, for many of us, this is an un­com­fortable sug­ges­tion. Many ra­tio­nal­ists are pro­gram­mers or en­g­ineers. Those of us who are not are prob­a­bly still hack­ers of one sub­ject or an­other, and have as a re­sult in­ter­nal­ized the hacker ethic.

If you’re a hacker, you strongly be­lieve that no prob­lem should ever have to be solved twice, and that bore­dom and drudgery are evil. Th­ese are strong points, per­haps the strongest points, in fa­vor of au­toma­tion. The world is full of fas­ci­nat­ing prob­lems wait­ing to be solved, and we shouldn’t waste the tal­ents of the most gifted among us solv­ing the same prob­lems over and over. If you au­to­mate it once, and do it right, you can free up tal­ents to work on the next prob­lem. Re­peat this un­til you’ve hit all of hu­man­ity’s prob­lems, boom, utopia achieved.

The prob­lem is that the tal­ents of the most gifted are be­ing dou­ble-wasted in our cur­rent sys­tem. First, in­tel­li­gent peo­ple spend huge amounts of time and effort at­tempt­ing to au­to­mate a sys­tem. Given that we aren’t even close to be­ing able to solve the AI Align­ment prob­lem, the at­tempt to prop­erly au­to­mate the sys­tem always fails, and the de­sign­ers in­stead au­to­mate the sys­tem so that it uses one or more leg­ible sig­nals to make its judg­ment. Now that this sys­tem is in place, it is im­me­di­ately cap­tured by Good­hart’s law, and peo­ple be­gin in­vent­ing ways to get around it.

Second, the intelligent and gifted people — the very people most qualified to make the judgment being automated — are spending their time trying to automate a system that they are (presumably) qualified to run themselves! Couldn’t we just cut out the middleman and, when making decisions about the most important issues facing our society, give intelligent and gifted people these jobs directly?

So we’re 1) wast­ing in­tel­lec­tual cap­i­tal, by 2) us­ing it to make the prob­lem it’s try­ing to solve sub­ject to Good­hart’s law and there­fore in­finitely worse.

Give me the judg­ment of bal­anced minds in prefer­ence to laws ev­ery time. Codes and man­u­als cre­ate pat­terned be­hav­ior. All pat­terned be­hav­ior tends to go un­ques­tioned, gath­er­ing de­struc­tive mo­men­tum. (Darwi Odrade)

Attempts to solve this problem with machine learning techniques are possibly worse. This is essentially just automating the task of finding a legible signal, with predictable results. It’s hard to say whether a neural network will tend to find a worse legible signal than the programmer would find on their own, but it’s not a bet I would take. Further, it lets programmers automate more decisions, lets them do it faster, and keeps them from understanding the legible signal(s) the system ends up selecting. That doesn’t inspire confidence.
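
As a sketch of that worry (assuming scikit-learn and NumPy, with invented feature names, not any real hiring system): true skill isn’t legible, so the model only ever sees a cheap credential that happens to correlate with it, and the credential quietly becomes the rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                     # what we actually care about (not legible)
credential = (skill + rng.normal(scale=1.5, size=n) > 0).astype(float)  # cheap, legible proxy
worked_out = (skill + rng.normal(scale=0.5, size=n) > 0).astype(int)    # past hiring outcomes

# Skill itself can't be fed to the model (it isn't legible), so the model
# is trained on the proxy alone, and the proxy becomes the de facto rule.
model = LogisticRegression()
model.fit(credential.reshape(-1, 1), worked_out)
print("learned weight on the credential:", round(float(model.coef_[0][0]), 2))

# Once applicants know the rule, the credential can be acquired without the
# skill, and the automated screen stops tracking what it was meant to measure.
```

The model is doing exactly what it was asked to do; the trouble starts once applicants learn that the credential, not the skill, is what the screen rewards.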

Please also con­sider the not-liter­ally-au­toma­tion ver­sion, as de­scribed by Scott Alexan­der in his day­care worker ex­am­ple:

Day­care com­pa­nies re­ally want to avoid hiring formerly-im­pris­oned crim­i­nals to take care of the kids. If they can ask whether a cer­tain em­ployee is crim­i­nal, this solves their prob­lem. If not, they’re left to guess. And if they’ve got two oth­er­wise equally qual­ified em­ploy­ees, and one is black and the other’s white, and they know that 28% of black men have been in prison com­pared to 4% of white men, they’ll shrug and choose the white guy.

Things like race, gen­der, and class are all ex­tremely leg­ible sig­nals. They’re hard to fake, and they’re easy to read. So if so­ciety seems more racist/​sex­ist/​clas­sist/​poli­ti­cally bi­goted than it was, con­sider the idea that it may be the re­sult of run­away au­toma­tion. Or ma­chine-at­ti­tude, as God Em­peror Leto II would say.

I men­tioned be­fore that, un­like prob­lems with Strong AI, this weak-in­tel­li­gence-Good­hart-prob­lem (Schwach­in­tel­li­gen­zGood­hart­prob­lem?) isn’t an ex­is­ten­tial risk. The bu­reau­crats and the stan­dard­ized tests aren’t go­ing to kill us in or­der to max­i­mize the amount of hy­dro­gen in the uni­verse. Right?

But if we consider crunches to be a form of x-risk, then this may be an x-risk after all. This problem has already infected “everything from school and work to friendships and dating”, making us “older, poorer, and more exhausted”. Not satisfied with this, we’re currently doing our best to make it more ‘efficient’ in the areas where it already holds sway, and working hard to extend it to new areas. If taken to its logical conclusion, we may successfully automate nearly everything, and destroy our ability to make any progress at all.

I’ll take this op­por­tu­nity to re­it­er­ate what I mean by au­toma­tion. I sus­pect that when I say “au­to­mate nearly ev­ery­thing”, many of you imag­ine some sort of as­cended econ­omy, with robotic work­ers and cor­po­rate AI cen­tral plan­ners. But part of the is­sue here is that Good­hart’s law is very flex­ible, and kicks in with the in­tro­duc­tion of most rules, even when the rules are very sim­ple.

Literal machines make this easier — a program that says “only forward job applicants if they indicated they have a Master’s Degree and 2+ years of experience” is simple, but potentially very dangerous. On the other hand, the same rule about which applicants will be considered is functionally identical when faithfully applied by a single-minded bureaucrat. The point is that the decision has been automated to a legible signal. Machines just make this easier, faster, and more difficult to ignore. All but the most extreme bureaucrats will occasionally break protocol. Automation by machine never will.
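
To show how little machinery is involved, here is a hypothetical version of that filter in Python (the field names and threshold are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    has_masters_degree: bool
    years_experience: float

def forward_to_recruiter(applicants):
    """Forward only applicants who satisfy the two legible signals."""
    return [
        a for a in applicants
        if a.has_masters_degree and a.years_experience >= 2
    ]

pool = [
    Applicant("brilliant self-taught candidate", False, 10.0),
    Applicant("credentialed but unproven candidate", True, 2.0),
]
print([a.name for a in forward_to_recruiter(pool)])
# Only the second applicant survives; the rule never sees anything it
# wasn't told to look for, and it will never break protocol.
```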


So we want two closely re­lated things:

1. We want to avoid the pos­si­ble x-risk from au­toma­tion.

2. We want to re­verse the whole “hol­lowed out the en­tirety of mod­ern so­ciety” thing and make life feel mean­ingful again.

The good news is that there are some rel­a­tively easy solu­tions.

First, stop au­tomat­ing things or sug­gest­ing that things should be au­to­mated. Re­v­erse au­toma­tion wher­ever pos­si­ble. (As sug­gested by Noita­mo­tua.)

There may be some areas where automation is safe and beneficial. But before automating something, please spend some time thinking about whether the legible signal is too simple, and whether the automation will be susceptible to Goodhart’s law. Automation is only acceptable where the legible signal is effectively identical to the value you actually want, or where the cost of an error is low (“you must be this tall to ride” is a legible signal).

Se­cond, there are per­sonal and poli­ti­cal sys­tems which are de­signed to deal with this prob­lem. Good­hart’s law is pow­er­less against a rea­son­able per­son. While you or I might take some­one’s ed­u­ca­tion into con­sid­er­a­tion when de­cid­ing whether or not to offer them a job, we would weigh it in a com­plex, hard-to-define way against the other ev­i­dence available.

Let’s continue the hiring-decision metaphor. More important than ignoring credentials is the ability to participate in the intellectual arms race, which is exactly the race that fixed rules cannot run (Goodhart’s law!). If I am in charge of hiring programmers, I might want to give them a simple coding test as part of their interview. I might ask, “If I have two strings, how do I check if they are anagrams of each other?” If I use the same coding test every time (or I automate it, setting up a website version of the test to screen candidates before in-person interviews), then anyone who knows my pattern can figure out or look up the answer ahead of time, and the question no longer screens for programming ability — it screens for whatever “can figure out or look up the answer ahead of time” is.
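
For reference, here is roughly the canonical answer a candidate could memorize or look up (count the characters of each string and compare), which is exactly why a reused question stops measuring programming ability:

```python
from collections import Counter

def are_anagrams(a: str, b: str) -> bool:
    """True if the two strings contain exactly the same characters."""
    return Counter(a) == Counter(b)

assert are_anagrams("listen", "silent")
assert not are_anagrams("dune", "arrakis")
```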

But when you are a real hu­man be­ing, not a bu­reau­crat or au­toma­ton, you can vary the test, ask ques­tions that haven’t been asked be­fore, and en­gage in the arms race. If you are very smart, you may in­vent a test which no one has ever thought of be­fore.

Ed­u­ca­tion is no sub­sti­tute for in­tel­li­gence. That elu­sive qual­ity is defined only in part by puz­zle-solv­ing abil­ity. It is in the cre­ation of new puz­zles re­flect­ing what your senses re­port that you round out the defi­ni­tions. (Men­tat Text One)

So by this view, the correct thing to do is to replace automation with the most intelligent people available, and have them personally engage with their duty — rather than have them act as administrators, as often happens under our current system.


Some peo­ple ask, what genre is Dune? It’s set in the far fu­ture; there are space­ships, lasers, and nu­clear weapons. But most of the se­ries fo­cuses on liege-vas­sal re­la­tion­ships, schem­ing, and re­li­gious or­ders with magic pow­ers. This sounds a lot more like fan­tasy, right?

Clearly, Dune is Poli­ti­cal Science Fic­tion. Science fic­tion pro­poses spec­tac­u­lar ad­vances in sci­ence and, as a re­sult, tech­nol­ogy. But poli­ti­cal thought is also a tech­nol­ogy:

...while we don’t tend to think of it this way, philos­o­phy is a tech­nol­ogy—philoso­phers de­velop new modes of think­ing, new ways of or­ga­niz­ing the state, new eth­i­cal prin­ci­ples, and so on. War­time en­courages rulers to in­vest in Re­search and Devel­op­ment. So in the War­ring States pe­riod, a lot of philoso­phers found work in lo­cal courts, as a sort of men­tal R&D de­part­ment.

So what Dune has done is thought about wild new poli­ti­cal tech­nolo­gies, in the same way that most sci­ence fic­tion thinks about wild new phys­i­cal tech­nolo­gies (or chem­i­cal, or biolog­i­cal, etc.).

The Confucian Heuristic (which you should read, entirely on its own merits) describes a political system built on personal relationships. According to this perspective, Confucius hated unjust inequality. The western solution is to destroy all forms of inequality; Confucius rejected that as impossible. Instead, he proposed that we recognize and promote gifted individuals, and make them extremely aware of their duties to the rest of us. (Seriously, just read it!)

Good gov­ern­ment never de­pends upon laws, but upon the per­sonal qual­ities of those who gov­ern. The ma­chin­ery of gov­ern­ment is always sub­or­di­nate to the will of those who ad­minister that ma­chin­ery. (The Spac­ing Guild Man­ual)

In a way, Dune is Con­fu­cian as well, or per­haps Neo-Con­fu­cian, as Stephen­son might say. It pre­sents a so­ciety that has been sta­ble for 10,000 years, based largely on feu­dal prin­ci­ples, and which has ar­ranged it­self in such a way that it has kept a ma­jor, lurk­ing x-risk at bay.

It’s my con­tention that feu­dal­ism is a nat­u­ral con­di­tion of hu­man be­ings…not that it is the only con­di­tion or not that it is the right con­di­tion…that it is just a way we have of fal­ling into or­gani­sa­tions. I like to use the ex­am­ple of the Ber­lin Mu­seum Beavers.
Be­fore World War II there were a num­ber of fam­i­lies of beaver in the Ber­lin Mu­seum. They were Euro­pean beaver. They had been there, raised in cap­tivity for some­thing on the or­der of sev­enty beaver gen­er­a­tions, in cages. World War II came along and a bomb freed some of them into the coun­tryside. What did they do? They went out and they started build­ing dams. (Frank Her­bert)

One way of think­ing about Good­hart’s law is that it says that any au­to­mated sys­tem can and will be gamed as quickly and ruth­lessly as pos­si­ble. Us­ing hu­man au­thor­i­ties rather than rules is the only safe­guard, since the hu­man can par­ti­ci­pate in the in­tel­lec­tual arms race with the peo­ple try­ing to get around the reg­u­la­tion; they can in­ter­pret the rules in their spirit rather than in their let­ter. No one will get far rules-lawyer­ing the king.

The peo­ple who will be most effec­tive at Good­hart-gam­ing a sys­tem will be those with start­ing ad­van­tages. This in­cludes the rich, but also those with more in­tel­li­gence, bet­ter so­cial con­nec­tions, etc., etc. So one prob­lem with au­toma­tion is that it always fa­vors the aris­toc­racy. Who­ever has ad­van­tages will, on av­er­age, see them mag­nified by be­ing the best at gam­ing au­to­mated sys­tems.

The Con­fu­cian solu­tion to in­equal­ity is to tie aris­to­crats into mean­ingful per­sonal re­la­tion­ships with their in­fe­ri­ors. The prob­lem with au­toma­tion is that it un­fairly benefits aris­to­crats and de­stroys the very idea of a mean­ingful per­sonal re­la­tion­ship.

What you of the CHOAM di­rec­torate seem un­able to un­der­stand is that you sel­dom find real loy­alties in com­merce. When did you last hear of a clerk giv­ing his life for the com­pany? (A let­ter to CHOAM, At­tributed to The Preacher)

I’ve ar­gued that we need to use hu­man judg­ment in place of leg­ible sig­nals, and that we should re­cruit the most gifted peo­ple to do so. But giv­ing all the de­ci­sion-mak­ing power to an in­tel­lec­tual elite comes with its own prob­lems. If we’re go­ing to re­cruit elites to re­place our au­to­mated de­ci­sion-mak­ing, we should make use of a poli­ti­cal tech­nol­ogy speci­fi­cally de­signed to deal with this situ­a­tion.

I’m not say­ing that we need to in­tro­duce elec­tive fealty, per se. My short-term sug­ges­tion, how­ever, would be that you don’t let those in po­si­tions of power over you pre­tend that they are your equal. Choos­ing to at­tach your­self to some­one pow­er­ful in ex­change for pro­tec­tion is en­tirely left to your dis­cre­tion.

Of course, what I re­ally think we should do is bring back sump­tu­ary laws.

Sump­tu­ary laws keep in­di­vi­d­u­als of a cer­tain class from pur­chas­ing or us­ing cer­tain goods, in­clud­ing cloth­ing. Peo­ple tend to think of sump­tu­ary laws as keep­ing low-class peo­ple from pre­tend­ing to be high-class peo­ple, even if they’re rich enough to fake it. The story goes that this was a big prob­lem dur­ing the late mid­dle ages, be­cause mer­chants were of­ten richer than barons and counts, but you couldn’t let them get away with pre­tend­ing to be no­ble.

The Confucian view is that sumptuary laws can also keep high-class people from pretending to be low-class people and dodging the responsibilities that come with their station. Think of the billionaire chicken farmer wearing overalls and a straw hat. Is he just ol’ Joe from up the road? Or was he born with a fortune he doesn’t deserve?

Con­fu­ci­ans would say that a ma­jor prob­lem with our cur­rent sys­tem is that elites are able to pre­tend that they aren’t elite. They see them­selves as, while per­son­ally gifted, equal in op­por­tu­nity to the rest of us, and as a re­sult on an equal play­ing field. They think that they don’t owe us any­thing, and try to con­vince us to feel the same way.

I like to think of this as the “Donald-Trump-should-be-forced-to-wear-gold-and-jewels-wherever-he-goes” rule. Or, if you’re of a slightly different political bent, the “If-Zuckerberg-wears-another-plain-grey-T-Shirt-I-will-throw-a-fit-who-does-he-think-he’s-fooling” rule.

This view­point also strikes a sur­pris­ing truce be­tween mis­take and con­flict the­o­rists. Mis­take the­o­rists are mak­ing the mis­take of think­ing there is no con­flict oc­cur­ring, of let­ting “elites” pre­tend that they’re part of the “peo­ple”. Con­flict the­o­rists are mak­ing the mis­take of think­ing that tear­ing down in­equal­ity is de­sir­able or even pos­si­ble.


If you found any of this in­ter­est­ing, I would sug­gest that you read Dune and its im­me­di­ate se­quels (up to Chap­ter­house, but not the junk Her­bert’s son wrote). If noth­ing else, con­sider that de­spite be­ing pub­lished in 1965, it pre­dicted AI threat and x-risk more gen­er­ally as a ma­jor con­cern for the fu­ture of hu­man­ity. I promise there are other top­ics of in­ter­est there.

If you found any of this con­vinc­ing, I strongly recom­mend that you fight against au­toma­tion and leg­ible sig­nals when­ever pos­si­ble. Only fully re­al­ized hu­man be­ings have the abil­ity to prag­mat­i­cally in­ter­pret a rule or guideline in the way it was in­tended. If we ever crack Strong AI, that may change — but safe to say, at that point we will have a new set of prob­lems!

And in re­gards to the ma­chines:

War to the death should be in­stantly pro­claimed against them. Every ma­chine of ev­ery sort should be de­stroyed by the well-wisher of his species. Let there be no ex­cep­tions made, no quar­ter shown; let us at once go back to the primeval con­di­tion of the race. (Sa­muel But­ler)