Meditations On Moloch

[Con­tent note: Vi­sions! omens! hal­lu­ci­na­tions! mir­a­cles! ec­stasies! dreams! ado­ra­tions! illu­mi­na­tions! re­li­gions!]

I.

Allen Ginsberg’s famous poem, Moloch:

What sphinx of ce­ment and alu­minum bashed open their skulls and ate up their brains and imag­i­na­tion?

Moloch! Soli­tude! Filth! Ugli­ness! Ash­cans and un­ob­tain­able dol­lars! Chil­dren scream­ing un­der the stair­ways! Boys sob­bing in armies! Old men weep­ing in the parks!

Moloch! Moloch! Night­mare of Moloch! Moloch the love­less! Men­tal Moloch! Moloch the heavy judger of men!

Moloch the in­com­pre­hen­si­ble prison! Moloch the cross­bone soul­less jailhouse and Congress of sor­rows! Moloch whose build­ings are judg­ment! Moloch the vast stone of war! Moloch the stunned gov­ern­ments!

Moloch whose mind is pure ma­chin­ery! Moloch whose blood is run­ning money! Moloch whose fingers are ten armies! Moloch whose breast is a can­ni­bal dy­namo! Moloch whose ear is a smok­ing tomb!

Moloch whose eyes are a thou­sand blind win­dows! Moloch whose skyscrap­ers stand in the long streets like end­less Je­ho­vahs! Moloch whose fac­to­ries dream and croak in the fog! Moloch whose smoke-stacks and an­ten­nae crown the cities!

Moloch whose love is end­less oil and stone! Moloch whose soul is elec­tric­ity and banks! Moloch whose poverty is the specter of ge­nius! Moloch whose fate is a cloud of sexless hy­dro­gen! Moloch whose name is the Mind!

Moloch in whom I sit lonely! Moloch in whom I dream An­gels! Crazy in Moloch! Cock­sucker in Moloch! Lack­love and man­less in Moloch!

Moloch who en­tered my soul early! Moloch in whom I am a con­scious­ness with­out a body! Moloch who fright­ened me out of my nat­u­ral ec­stasy! Moloch whom I aban­don! Wake up in Moloch! Light stream­ing out of the sky!

Moloch! Moloch! Robot apart­ments! in­visi­ble sub­urbs! skele­ton trea­suries! blind cap­i­tals! de­monic in­dus­tries! spec­tral na­tions! in­vin­cible mad­houses! gran­ite cocks! mon­strous bombs!

They broke their backs lift­ing Moloch to Heaven! Pave­ments, trees, ra­dios, tons! lift­ing the city to Heaven which ex­ists and is ev­ery­where about us!

Vi­sions! omens! hal­lu­ci­na­tions! mir­a­cles! ec­stasies! gone down the Amer­i­can river!

Dreams! ado­ra­tions! illu­mi­na­tions! re­li­gions! the whole boat­load of sen­si­tive bul­lshit!

Break­throughs! over the river! flips and cru­ci­fix­ions! gone down the flood! Highs! Epipha­nies! De­s­pairs! Ten years’ an­i­mal screams and suicides! Minds! New loves! Mad gen­er­a­tion! down on the rocks of Time!

Real holy laugh­ter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to soli­tude! wav­ing! car­ry­ing flow­ers! Down to the river! into the street!

What’s always im­pressed me about this poem is its con­cep­tion of civ­i­liza­tion as an in­di­vi­d­ual en­tity. You can al­most see him, with his fingers of armies and his skyscraper-win­dow eyes.

A lot of the com­men­ta­tors say Moloch rep­re­sents cap­i­tal­ism. This is definitely a piece of it, even a big piece. But it doesn’t quite fit. Cap­i­tal­ism, whose fate is a cloud of sexless hy­dro­gen? Cap­i­tal­ism in whom I am a con­scious­ness with­out a body? Cap­i­tal­ism, there­fore gran­ite cocks?

Moloch is introduced as the answer to a question – C. S. Lewis’ question in Hierarchy Of Philosophers – what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?

And Gins­berg an­swers: Moloch does it.

There’s a pas­sage in the Prin­cipia Dis­cor­dia where Mala­clypse com­plains to the God­dess about the evils of hu­man so­ciety. “Every­one is hurt­ing each other, the planet is ram­pant with in­jus­tices, whole so­cieties plun­der groups of their own peo­ple, moth­ers im­prison sons, chil­dren per­ish while broth­ers war.”

The God­dess an­swers: “What is the mat­ter with that, if it’s what you want to do?”

Mala­clypse: “But no­body wants it! Every­body hates it!”

God­dess: “Oh. Well, then stop.”

The im­plicit ques­tion is – if ev­ery­one hates the cur­rent sys­tem, who per­pet­u­ates it? And Gins­berg an­swers: “Moloch”. It’s pow­er­ful not be­cause it’s cor­rect – no­body liter­ally thinks an an­cient Carthag­i­nian de­mon causes ev­ery­thing – but be­cause think­ing of the sys­tem as an agent throws into re­lief the de­gree to which the sys­tem isn’t an agent.

Bostrom makes an offhanded reference to the possibility of a dictatorless dystopia, one that every single citizen including the leadership hates but which nevertheless endures unconquered. It’s easy enough to imagine such a state. Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.

So you shock your­self for eight hours a day, be­cause you know if you don’t ev­ery­one else will kill you, be­cause if they don’t, ev­ery­one else will kill them, and so on. Every sin­gle cit­i­zen hates the sys­tem, but for lack of a good co­or­di­na­tion mechanism it en­dures. From a god’s-eye-view, we can op­ti­mize the sys­tem to “ev­ery­one agrees to stop do­ing this at once”, but no one within the sys­tem is able to effect the tran­si­tion with­out great risk to them­selves.
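
If you want the trap spelled out, here is a minimal toy model – with payoff numbers invented purely for illustration – showing that going along with the rules is each citizen’s best response as long as everyone else goes along with them, even though coordinated defiance beats universal compliance:

    # Toy model of the dictatorless dystopia; the payoffs are made up for illustration.
    # Each citizen either COMPLIES (shocks themselves and enforces the rules) or DEFIES.
    COMPLY, DEFY = "comply", "defy"

    def payoff(my_choice, what_everyone_else_does):
        """Utility of a single citizen, given what the rest of the country does."""
        if what_everyone_else_does == COMPLY:
            # The others enforce the rules, so a lone defector is killed.
            return -10 if my_choice == COMPLY else -1000   # daily shocks vs. death
        else:
            # Coordinated defiance: nobody is left to enforce anything.
            return -10 if my_choice == COMPLY else 0       # daily shocks vs. freedom

    # Defecting alone is catastrophic...
    assert payoff(DEFY, COMPLY) < payoff(COMPLY, COMPLY)
    # ...even though everyone defying at once beats everyone complying.
    assert payoff(DEFY, DEFY) > payoff(COMPLY, COMPLY)
    print("Universal compliance is an equilibrium no individual can escape alone.")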

And okay, this ex­am­ple is kind of con­trived. So let’s run through – let’s say ten – real world ex­am­ples of similar mul­ti­po­lar traps to re­ally ham­mer in how im­por­tant this is.

1. The Pri­soner’s Dilemma, as played by two very dumb liber­tar­i­ans who keep end­ing up on defect-defect. There’s a much bet­ter out­come available if they could figure out the co­or­di­na­tion, but co­or­di­na­tion is hard. From a god’s-eye-view, we can agree that co­op­er­ate-co­op­er­ate is a bet­ter out­come than defect-defect, but nei­ther pris­oner within the sys­tem can make it hap­pen.

2. Dollar auctions. I wrote about this and even more convoluted versions of the same principle in Game Theory As A Dark Art. Using some weird auction rules, you can take advantage of poor coordination to make someone pay $10 for a one dollar bill. From a god’s-eye-view, clearly people should not pay $10 for one dollar. From within the system, each individual step taken might be rational.

(Ash­cans and un­ob­tain­able dol­lars!)
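
The escalation is mechanical enough to simulate. Here is a rough sketch – the auction rules are simplified, and the nickel increments are my own choice – of two myopic bidders in a dollar auction where the top two bids both get paid:

    # Toy dollar auction: a $1 bill goes to the highest bidder, but BOTH of the
    # top two bidders pay their bids. Each bidder raises by a nickel whenever one
    # more bid looks better than eating their current sure loss, up to a $10 cap.
    PRIZE, STEP, CAP = 100, 5, 1000        # amounts in cents, to avoid float drift

    bids = [0, 0]                          # current bids of bidder 0 and bidder 1
    turn = 0                               # whose turn it is to respond

    while True:
        me, rival = turn, 1 - turn
        next_bid = bids[rival] + STEP
        # Quitting loses my current bid for sure; topping the rival looks better
        # whenever (PRIZE - next_bid) beats that sure loss.
        if next_bid <= CAP and PRIZE - next_bid > -bids[me]:
            bids[me] = next_bid
            turn = rival
        else:
            break

    print(f"Winning bid: ${max(bids)/100:.2f}; the loser still owes ${min(bids)/100:.2f}")
    # Prints a winning bid of $10.00 – ten dollars spent chasing a one-dollar bill.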

3. The fish farm­ing story from my Non-Liber­tar­ian FAQ 2.0:

As a thought ex­per­i­ment, let’s con­sider aqua­cul­ture (fish farm­ing) in a lake. Imag­ine a lake with a thou­sand iden­ti­cal fish farms owned by a thou­sand com­pet­ing com­pa­nies. Each fish farm earns a profit of $1000/​month. For a while, all is well.

But each fish farm pro­duces waste, which fouls the wa­ter in the lake. Let’s say each fish farm pro­duces enough pol­lu­tion to lower pro­duc­tivity in the lake by $1/​month.

A thou­sand fish farms pro­duce enough waste to lower pro­duc­tivity by $1000/​month, mean­ing none of the fish farms are mak­ing any money. Cap­i­tal­ism to the res­cue: some­one in­vents a com­plex fil­ter­ing sys­tem that re­moves waste prod­ucts. It costs $300/​month to op­er­ate. All fish farms vol­un­tar­ily in­stall it, the pol­lu­tion ends, and the fish farms are now mak­ing a profit of $700/​month – still a re­spectable sum.

But one farmer (let’s call him Steve) gets tired of spend­ing the money to op­er­ate his filter. Now one fish farm worth of waste is pol­lut­ing the lake, low­er­ing pro­duc­tivity by $1. Steve earns $999 profit, and ev­ery­one else earns $699 profit.

Every­one else sees Steve is much more prof­itable than they are, be­cause he’s not spend­ing the main­te­nance costs on his filter. They dis­con­nect their filters too.

Once four hun­dred peo­ple dis­con­nect their filters, Steve is earn­ing $600/​month – less than he would be if he and ev­ery­one else had kept their filters on! And the poor vir­tu­ous filter users are only mak­ing $300. Steve goes around to ev­ery­one, say­ing “Wait! We all need to make a vol­un­tary pact to use filters! Other­wise, ev­ery­one’s pro­duc­tivity goes down.”

Every­one agrees with him, and they all sign the Filter Pact, ex­cept one per­son who is sort of a jerk. Let’s call him Mike. Now ev­ery­one is back us­ing filters again, ex­cept Mike. Mike earns $999/​month, and ev­ery­one else earns $699/​month. Slowly, peo­ple start think­ing they too should be get­ting big bucks like Mike, and dis­con­nect their filter for $300 ex­tra profit…

A self-in­ter­ested per­son never has any in­cen­tive to use a filter. A self-in­ter­ested per­son has some in­cen­tive to sign a pact to make ev­ery­one use a filter, but in many cases has a stronger in­cen­tive to wait for ev­ery­one else to sign such a pact but opt out him­self. This can lead to an un­de­sir­able equil­ibrium in which no one will sign such a pact.

The more I think about it, the more I feel like this is the core of my ob­jec­tion to liber­tar­i­anism, and that Non-Liber­tar­ian FAQ 3.0 will just be this one ex­am­ple copy-pasted two hun­dred times. From a god’s-eye-view, we can say that pol­lut­ing the lake leads to bad con­se­quences. From within the sys­tem, no in­di­vi­d­ual can pre­vent the lake from be­ing pol­luted, and buy­ing a filter might not be such a good idea.
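
For what it’s worth, the arithmetic in the story is easy to check. Here is a quick sketch using the numbers above (a $1000 base profit, $1/month of shared pollution damage per unfiltered farm, $300/month to run a filter):

    # The fish-farm arithmetic from the story above, as a function of how many of
    # the thousand farms defect (i.e. run no filter).
    BASE, POLLUTION, FILTER_COST = 1000, 1, 300

    def profits(defectors):
        """Monthly profit of a defector and of a filter user, given the defector count."""
        shared_damage = POLLUTION * defectors          # everyone shares the same lake
        return BASE - shared_damage, BASE - shared_damage - FILTER_COST

    for d in (0, 1, 400, 1000):
        defector, filter_user = profits(d)
        print(f"{d:4d} defectors: defector earns ${defector}, filter user earns ${filter_user}")

    # 0 defectors:    everyone filters and earns $700 (a lone would-be defector sees $1000)
    # 1 defector:     Steve earns $999, everyone else $699
    # 400 defectors:  defectors are down to $600 – worse than universal filtering
    # 1000 defectors: everyone earns $0, and re-installing a filter alone only loses money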

4. The Malthu­sian trap, at least at its ex­tremely pure the­o­ret­i­cal limits. Sup­pose you are one of the first rats in­tro­duced onto a pris­tine is­land. It is full of yummy plants and you live an idyl­lic life loung­ing about, eat­ing, and com­pos­ing great works of art (you’re one of those rats from The Rats of NIMH).

You live a long life, mate, and have a dozen chil­dren. All of them have a dozen chil­dren, and so on. In a cou­ple gen­er­a­tions, the is­land has ten thou­sand rats and has reached its car­ry­ing ca­pac­ity. Now there’s not enough food and space to go around, and a cer­tain per­cent of each new gen­er­a­tion dies in or­der to keep the pop­u­la­tion steady at ten thou­sand.

A cer­tain sect of rats aban­dons art in or­der to de­vote more of their time to scroung­ing for sur­vival. Each gen­er­a­tion, a bit less of this sect dies than mem­bers of the main­stream, un­til af­ter a while, no rat com­poses any art at all, and any sect of rats who try to bring it back will go ex­tinct within a few gen­er­a­tions.

In fact, it’s not just art. Any sect at all that is leaner, meaner, and more sur­vival­ist than the main­stream will even­tu­ally take over. If one sect of rats al­tru­is­ti­cally de­cides to limit its offspring to two per cou­ple in or­der to de­crease over­pop­u­la­tion, that sect will die out, swarmed out of ex­is­tence by its more nu­mer­ous en­e­mies. If one sect of rats starts prac­tic­ing can­ni­bal­ism, and finds it gives them an ad­van­tage over their fel­lows, it will even­tu­ally take over and reach fix­a­tion.

If some rat sci­en­tists pre­dict that de­ple­tion of the is­land’s nut stores is ac­cel­er­at­ing at a dan­ger­ous rate and they will soon be ex­hausted com­pletely, a few sects of rats might try to limit their nut con­sump­tion to a sus­tain­able level. Those rats will be out­com­peted by their more self­ish cous­ins. Even­tu­ally the nuts will be ex­hausted, most of the rats will die off, and the cy­cle will be­gin again. Any sect of rats ad­vo­cat­ing some ac­tion to stop the cy­cle will be out­com­peted by their cous­ins for whom ad­vo­cat­ing any­thing is a waste of time that could be used to com­pete and con­sume.

For a bunch of rea­sons evolu­tion is not quite as Malthu­sian as the ideal case, but it pro­vides the pro­to­type ex­am­ple we can ap­ply to other things to see the un­der­ly­ing mechanism. From a god’s-eye-view, it’s easy to say the rats should main­tain a com­fortably low pop­u­la­tion. From within the sys­tem, each in­di­vi­d­ual rat will fol­low its ge­netic im­per­a­tive and the is­land will end up in an end­less boom-bust cy­cle.
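
Here is a minimal sketch of that selection dynamic, with a made-up 5% fitness edge for the sect that gave up art – the exact numbers don’t matter, only the direction:

    # Two sects of rats share an island with a fixed carrying capacity. Artist rats
    # pay a small survival cost each generation; survivalist rats don't.
    CAPACITY = 10_000
    FITNESS = {"artists": 1.00, "survivalists": 1.05}

    pop = {"artists": 9_900.0, "survivalists": 100.0}   # survivalists start as a tiny sect

    for generation in range(1, 301):
        # Everyone breeds according to fitness, then the island culls back to capacity.
        grown = {sect: n * FITNESS[sect] for sect, n in pop.items()}
        total = sum(grown.values())
        pop = {sect: CAPACITY * n / total for sect, n in grown.items()}
        if generation % 100 == 0:
            print(f"generation {generation}: artists are {pop['artists'] / CAPACITY:.1%} of the island")

    # A 5% edge is enough: within a few hundred generations the artists are all but
    # gone, even though no rat ever decided that art should disappear.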

5. Cap­i­tal­ism. Imag­ine a cap­i­tal­ist in a cut­throat in­dus­try. He em­ploys work­ers in a sweat­shop to sew gar­ments, which he sells at min­i­mal profit. Maybe he would like to pay his work­ers more, or give them nicer work­ing con­di­tions. But he can’t, be­cause that would raise the price of his prod­ucts and he would be out­com­peted by his cheaper ri­vals and go bankrupt. Maybe many of his ri­vals are nice peo­ple who would like to pay their work­ers more, but un­less they have some kind of iron­clad guaran­tee that none of them are go­ing to defect by un­der­cut­ting their prices they can’t do it.

Like the rats, who grad­u­ally lose all val­ues ex­cept sheer com­pe­ti­tion, so com­pa­nies in an eco­nomic en­vi­ron­ment of suffi­ciently in­tense com­pe­ti­tion are forced to aban­don all val­ues ex­cept op­ti­miz­ing-for-profit or else be out­com­peted by com­pa­nies that op­ti­mized for profit bet­ter and so can sell the same ser­vice at a lower price.

(I’m not really sure how widely people appreciate the value of analogizing capitalism to evolution. Fit companies – defined as those that make the customer want to buy from them – survive, expand, and inspire future efforts, and unfit companies – defined as those no one wants to buy from – go bankrupt and die out along with their company DNA. The reasons Nature is red in tooth and claw are the same reasons the market is ruthless and exploitative.)

From a god’s-eye-view, we can con­trive a friendly in­dus­try where ev­ery com­pany pays its work­ers a liv­ing wage. From within the sys­tem, there’s no way to en­act it.

(Moloch whose love is end­less oil and stone! Moloch whose blood is run­ning money!)

6. The Two-In­come Trap, as re­cently dis­cussed on this blog. It the­o­rized that suffi­ciently in­tense com­pe­ti­tion for sub­ur­ban houses in good school dis­tricts meant that peo­ple had to throw away lots of other val­ues – time at home with their chil­dren, fi­nan­cial se­cu­rity – to op­ti­mize for house-buy­ing-abil­ity or else be con­signed to the ghetto.

From a god’s-eye-view, if ev­ery­one agrees not to take on a sec­ond job to help win their com­pe­ti­tion for nice houses, then ev­ery­one will get ex­actly as nice a house as they did be­fore, but only have to work one job. From within the sys­tem, ab­sent a gov­ern­ment liter­ally will­ing to ban sec­ond jobs, ev­ery­one who doesn’t get one will be left be­hind.

(Robot apart­ments! In­visi­ble sub­urbs!)

7. Agriculture. Jared Diamond calls it the worst mistake in human history. Whether or not it was a mistake, it wasn’t an accident – agricultural civilizations simply outcompeted nomadic ones, inevitably and irresistibly. Classic Malthusian trap. Maybe hunting-gathering was more enjoyable, gave a higher life expectancy, and was more conducive to human flourishing – but in a state of sufficiently intense competition between peoples, in which agriculture with all its disease and oppression and pestilence was the more competitive option, everyone will end up agriculturalists or go the way of the Comanche Indians.

From a god’s-eye-view, it’s easy to see ev­ery­one should keep the more en­joy­able op­tion and stay hunter-gath­er­ers. From within the sys­tem, each in­di­vi­d­ual tribe only faces the choice of go­ing agri­cul­tural or in­evitably dy­ing.

8. Arms races. Large coun­tries can spend any­where from 5% to 30% of their bud­get on defense. In the ab­sence of war – a con­di­tion which has mostly held for the past fifty years – all this does is sap money away from in­fras­truc­ture, health, ed­u­ca­tion, or eco­nomic growth. But any coun­try that fails to spend enough money on defense risks be­ing in­vaded by a neigh­bor­ing coun­try that did. There­fore, al­most all coun­tries try to spend some money on defense.

From a god’s-eye-view, the best solu­tion is world peace and no coun­try hav­ing an army at all. From within the sys­tem, no coun­try can unilat­er­ally en­force that, so their best op­tion is to keep on throw­ing their money into mis­siles that lie in silos un­used.

(Moloch the vast stone of war! Moloch whose fingers are ten armies!)

9. Cancer. The hu­man body is sup­posed to be made up of cells liv­ing har­mo­niously and pool­ing their re­sources for the greater good of the or­ganism. If a cell defects from this equil­ibrium by in­vest­ing its re­sources into copy­ing it­self, it and its de­scen­dants will flour­ish, even­tu­ally out­com­pet­ing all the other cells and tak­ing over the body – at which point it dies. Or the situ­a­tion may re­peat, with cer­tain can­cer cells defect­ing against the rest of the tu­mor, thus slow­ing down its growth and caus­ing the tu­mor to stag­nate.

From a god’s-eye-view, the best solution is all cells cooperating so that they don’t all die. From within the system, cancerous cells will proliferate and outcompete the others – so that only the existence of the immune system keeps the natural incentive to turn cancerous in check.

10. The “race to the bot­tom” de­scribes a poli­ti­cal situ­a­tion where some ju­ris­dic­tions lure busi­nesses by promis­ing lower taxes and fewer reg­u­la­tions. The end re­sult is that ei­ther ev­ery­one op­ti­mizes for com­pet­i­tive­ness – by hav­ing min­i­mal tax rates and reg­u­la­tions – or they lose all of their busi­ness, rev­enue, and jobs to peo­ple who did (at which point they are pushed out and re­placed by a gov­ern­ment who will be more com­pli­ant).

But even though the last one has stolen the name, all these scenarios are in fact races to the bottom. Once one agent learns how to become more competitive by sacrificing a common value, all its competitors must also sacrifice that value or be outcompeted and replaced by the less scrupulous. Therefore, the system is likely to end up with everyone once again equally competitive, but the sacrificed value is gone forever. From a god’s-eye-view, the competitors know they will all be worse off if they defect, but from within the system, given insufficient coordination it’s impossible to avoid.

Be­fore we go on, there’s a slightly differ­ent form of multi-agent trap worth in­ves­ti­gat­ing. In this one, the com­pe­ti­tion is kept at bay by some out­side force – usu­ally so­cial stigma. As a re­sult, there’s not ac­tu­ally a race to the bot­tom – the sys­tem can con­tinue func­tion­ing at a rel­a­tively high level – but it’s im­pos­si­ble to op­ti­mize and re­sources are con­sis­tently thrown away for no rea­son. Lest you get ex­hausted be­fore we even be­gin, I’ll limit my­self to four ex­am­ples here.

11. Ed­u­ca­tion. In my es­say on re­ac­tionary philos­o­phy, I talk about my frus­tra­tion with ed­u­ca­tion re­form:

Peo­ple ask why we can’t re­form the ed­u­ca­tion sys­tem. But right now stu­dents’ in­cen­tive is to go to the most pres­ti­gious col­lege they can get into so em­ploy­ers will hire them – whether or not they learn any­thing. Em­ploy­ers’ in­cen­tive is to get stu­dents from the most pres­ti­gious col­lege they can so that they can defend their de­ci­sion to their boss if it goes wrong – whether or not the col­lege pro­vides value added. And col­leges’ in­cen­tive is to do what­ever it takes to get more pres­tige, as mea­sured in US News and World Re­port rank­ings – whether or not it helps stu­dents. Does this lead to huge waste and poor ed­u­ca­tion? Yes. Could the Ed­u­ca­tion God no­tice this and make some Ed­u­ca­tion De­crees that lead to a vastly more effi­cient sys­tem? Easily! But since there’s no Ed­u­ca­tion God ev­ery­body is just go­ing to fol­low their own in­cen­tives, which are only partly cor­re­lated with ed­u­ca­tion or effi­ciency.

From a god’s eye view, it’s easy to say things like “Stu­dents should only go to col­lege if they think they will get some­thing out of it, and em­ploy­ers should hire ap­pli­cants based on their com­pe­tence and not on what col­lege they went to”. From within the sys­tem, ev­ery­one’s already fol­low­ing their own in­cen­tives cor­rectly, so un­less the in­cen­tives change the sys­tem won’t ei­ther.

12. Science. Same es­say:

The mod­ern re­search com­mu­nity knows they aren’t pro­duc­ing the best sci­ence they could be. There’s lots of pub­li­ca­tion bias, statis­tics are done in a con­fus­ing and mis­lead­ing way out of sheer in­er­tia, and repli­ca­tions of­ten hap­pen very late or not at all. And some­times some­one will say some­thing like “I can’t be­lieve peo­ple are too dumb to fix Science. All we would have to do is re­quire early reg­is­tra­tion of stud­ies to avoid pub­li­ca­tion bias, turn this new and pow­er­ful statis­ti­cal tech­nique into the new stan­dard, and ac­cord higher sta­tus to sci­en­tists who do repli­ca­tion ex­per­i­ments. It would be re­ally sim­ple and it would vastly in­crease sci­en­tific progress. I must just be smarter than all ex­ist­ing sci­en­tists, since I’m able to think of this and they aren’t.”

And yeah. That would work for the Science God. He could just make a Science De­cree that ev­ery­one has to use the right statis­tics, and make an­other Science De­cree that ev­ery­one must ac­cord repli­ca­tions higher sta­tus.

But things that work from a god’s-eye view don’t work from within the sys­tem. No in­di­vi­d­ual sci­en­tist has an in­cen­tive to unilat­er­ally switch to the new statis­ti­cal tech­nique for her own re­search, since it would make her re­search less likely to pro­duce earth-shat­ter­ing re­sults and since it would just con­fuse all the other sci­en­tists. They just have an in­cen­tive to want ev­ery­body else to do it, at which point they would fol­low along. And no in­di­vi­d­ual jour­nal has an in­cen­tive to unilat­er­ally switch to early reg­is­tra­tion and pub­lish­ing nega­tive re­sults, since it would just mean their re­sults are less in­ter­est­ing than that other jour­nal who only pub­lishes ground-break­ing dis­cov­er­ies. From within the sys­tem, ev­ery­one is fol­low­ing their own in­cen­tives and will con­tinue to do so.

13. Government corruption. I don’t know of anyone who really thinks, in a principled way, that corporate welfare is a good idea. But the government still manages to spend somewhere around (depending on how you calculate it) $100 billion a year on it – which for example is three times the amount it spends on health care for the needy. Everyone familiar with the problem has come up with the same easy solution: stop giving so much corporate welfare. Why doesn’t it happen?

Government officials are competing against one another to get elected or promoted. And suppose part of optimizing for electability is optimizing campaign donations from corporations – or maybe it isn’t, but officials think it is. Officials who try to mess with corporate welfare may lose the support of corporations and be outcompeted by officials who promise to keep it intact.

So al­though from a god’s-eye-view ev­ery­one knows that elimi­nat­ing cor­po­rate welfare is the best solu­tion, each in­di­vi­d­ual offi­cial’s per­sonal in­cen­tives push her to main­tain it.

14. Congress. Only 9% of Amer­i­cans like it, sug­gest­ing a lower ap­proval rat­ing than cock­roaches, head lice, or traf­fic jams. How­ever, 62% of peo­ple who know who their own Con­gres­sional rep­re­sen­ta­tive is ap­prove of them. In the­ory, it should be re­ally hard to have a demo­crat­i­cally elected body that main­tains a 9% ap­proval rat­ing for more than one elec­tion cy­cle. In prac­tice, ev­ery rep­re­sen­ta­tive’s in­cen­tive is to ap­peal to his or her con­stituency while throw­ing the rest of the coun­try un­der the bus – some­thing at which they ap­par­ently suc­ceed.

From a god’s-eye-view, ev­ery Con­gressper­son ought to think only of the good of the na­tion. From within the sys­tem, you do what gets you elected.

II.

A ba­sic prin­ci­ple unites all of the mul­ti­po­lar traps above. In some com­pe­ti­tion op­ti­miz­ing for X, the op­por­tu­nity arises to throw some other value un­der the bus for im­proved X. Those who take it pros­per. Those who don’t take it die out. Even­tu­ally, ev­ery­one’s rel­a­tive sta­tus is about the same as be­fore, but ev­ery­one’s ab­solute sta­tus is worse than be­fore. The pro­cess con­tinues un­til all other val­ues that can be traded off have been – in other words, un­til hu­man in­ge­nu­ity can­not pos­si­bly figure out a way to make things any worse.

In a suffi­ciently in­tense com­pe­ti­tion (1-10), ev­ery­one who doesn’t throw all their val­ues un­der the bus dies out – think of the poor rats who wouldn’t stop mak­ing art. This is the in­fa­mous Malthu­sian trap, where ev­ery­one is re­duced to “sub­sis­tence”.

In an in­suffi­ciently in­tense com­pe­ti­tion (11-14), all we see is a per­verse failure to op­ti­mize – con­sider the jour­nals which can’t switch to more re­li­able sci­ence, or the leg­is­la­tors who can’t get their act to­gether and elimi­nate cor­po­rate welfare. It may not re­duce peo­ple to sub­sis­tence, but there is a weird sense in which it takes away their free will.
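
If you want to watch the first, fiercer kind of trap close, here is a crude simulation – every quantity in it is invented – of ten competitors, one of whom discovers it can sacrifice some shared value for an edge, while bankrupt firms get replaced by imitators of the current leader:

    import random

    random.seed(0)

    # Ten firms; firm0 has discovered it can sacrifice a shared value (say, decent
    # working conditions) for a competitiveness edge. Each round, the least
    # competitive firm goes bankrupt and is replaced by an imitator of the leader.
    firms = [{"name": f"firm{i}", "keeps_value": i != 0} for i in range(10)]

    def competitiveness(firm):
        # Base luck, plus a strict edge for having sacrificed the value.
        return random.random() + (0.0 if firm["keeps_value"] else 0.3)

    for _ in range(200):
        firms.sort(key=competitiveness, reverse=True)
        firms[-1] = dict(firms[0], name=firms[-1]["name"])   # bankruptcy plus imitation

    holdouts = sum(f["keeps_value"] for f in firms)
    print(f"firms still refusing to sacrifice the value: {holdouts} out of 10")
    # Almost every run ends with zero holdouts: the value is gone, and no firm
    # keeps a lasting edge once everyone has sacrificed it.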

Every two-bit au­thor and philoso­pher has to write their own utopia. Most of them are le­gi­t­i­mately pretty nice. In fact, it’s a pretty good bet that two utopias that are po­lar op­po­sites both sound bet­ter than our own world.

It’s kind of em­bar­rass­ing that ran­dom no­bod­ies can think up states of af­fairs bet­ter than the one we ac­tu­ally live in. And in fact most of them can’t. A lot of utopias sweep the hard prob­lems un­der the rug, or would fall apart in ten min­utes if ac­tu­ally im­ple­mented.

But let me sug­gest a cou­ple of “utopias” that don’t have this prob­lem.

– The utopia where in­stead of the gov­ern­ment pay­ing lots of cor­po­rate welfare, the gov­ern­ment doesn’t pay lots of cor­po­rate welfare.

– The utopia where ev­ery coun­try’s mil­i­tary is 50% smaller than it is to­day, and the sav­ings go into in­fras­truc­ture spend­ing.

– The utopia where all hos­pi­tals use the same elec­tronic med­i­cal record sys­tem, or at least med­i­cal record sys­tems that can talk to each other, so that doc­tors can look up what the doc­tor you saw last week in a differ­ent hos­pi­tal de­cided in­stead of run­ning all the same tests over again for $5000.

I don’t think there are too many peo­ple who op­pose any of these utopias. If they’re not hap­pen­ing, it’s not be­cause peo­ple don’t sup­port them. It cer­tainly isn’t be­cause no­body’s thought of them, since I just thought of them right now and I don’t ex­pect my “dis­cov­ery” to be hailed as par­tic­u­larly novel or change the world.

Any hu­man with above room tem­per­a­ture IQ can de­sign a utopia. The rea­son our cur­rent sys­tem isn’t a utopia is that it wasn’t de­signed by hu­mans. Just as you can look at an arid ter­rain and de­ter­mine what shape a river will one day take by as­sum­ing wa­ter will obey grav­ity, so you can look at a civ­i­liza­tion and de­ter­mine what shape its in­sti­tu­tions will one day take by as­sum­ing peo­ple will obey in­cen­tives.

But that means that just as the shapes of rivers are not de­signed for beauty or nav­i­ga­tion, but rather an ar­ti­fact of ran­domly de­ter­mined ter­rain, so in­sti­tu­tions will not be de­signed for pros­per­ity or jus­tice, but rather an ar­ti­fact of ran­domly de­ter­mined ini­tial con­di­tions.

Just as peo­ple can level ter­rain and build canals, so peo­ple can al­ter the in­cen­tive land­scape in or­der to build bet­ter in­sti­tu­tions. But they can only do so when they are in­cen­tivized to do so, which is not always. As a re­sult, some pretty wild trib­u­taries and rapids form in some very strange places.

I will now jump from bor­ing game the­ory stuff to what might be the clos­est thing to a mys­ti­cal ex­pe­rience I’ve ever had.

Like all good mys­ti­cal ex­pe­riences, it hap­pened in Ve­gas. I was stand­ing on top of one of their many tall build­ings, look­ing down at the city be­low, all lit up in the dark. If you’ve never been to Ve­gas, it is re­ally im­pres­sive. Skyscrap­ers and lights in ev­ery va­ri­ety strange and beau­tiful all clus­tered to­gether. And I had two thoughts, crys­tal clear:

It is glo­ri­ous that we can cre­ate some­thing like this.

It is shame­ful that we did.

Like, by what stan­dard is build­ing gi­gan­tic forty-story-high in­door repli­cas of Venice, Paris, Rome, Egypt, and Camelot side-by-side, filled with albino tigers, in the mid­dle of the most in­hos­pitable desert in North Amer­ica, a re­motely sane use of our civ­i­liza­tion’s limited re­sources?

And it oc­curred to me that maybe there is no philos­o­phy on Earth that would en­dorse the ex­is­tence of Las Ve­gas. Even Ob­jec­tivism, which is usu­ally my go-to philos­o­phy for jus­tify­ing the ex­cesses of cap­i­tal­ism, at least grounds it in the be­lief that cap­i­tal­ism im­proves peo­ple’s lives. Henry Ford was vir­tu­ous be­cause he al­lowed lots of oth­er­wise car-less peo­ple to ob­tain cars and so made them bet­ter off. What does Ve­gas do? Promise a bunch of shmucks free money and not give it to them.

Las Ve­gas doesn’t ex­ist be­cause of some de­ci­sion to he­do­nically op­ti­mize civ­i­liza­tion, it ex­ists be­cause of a quirk in dopamin­er­gic re­ward cir­cuits, plus the microstruc­ture of an un­even reg­u­la­tory en­vi­ron­ment, plus Schel­ling points. A ra­tio­nal cen­tral plan­ner with a god’s-eye-view, con­tem­plat­ing these facts, might have thought “Hm, dopamin­er­gic re­ward cir­cuits have a quirk where cer­tain tasks with slightly nega­tive risk-benefit ra­tios get an emo­tional valence as­so­ci­ated with slightly pos­i­tive risk-benefit ra­tios, let’s see if we can ed­u­cate peo­ple to be­ware of that.” Peo­ple within the sys­tem, fol­low­ing the in­cen­tives cre­ated by these facts, think: “Let’s build a forty-story-high in­door replica of an­cient Rome full of albino tigers in the mid­dle of the desert, and so be­come slightly richer than peo­ple who didn’t!”

Just as the course of a river is la­tent in a ter­rain even be­fore the first rain falls on it – so the ex­is­tence of Cae­sar’s Palace was la­tent in neu­ro­biol­ogy, eco­nomics, and reg­u­la­tory regimes even be­fore it ex­isted. The en­trepreneur who built it was just filling in the ghostly lines with real con­crete.

So we have all this amaz­ing tech­nolog­i­cal and cog­ni­tive en­ergy, the brilli­ance of the hu­man species, wasted on recit­ing the lines writ­ten by poorly evolved cel­lu­lar re­cep­tors and blind eco­nomics, like gods be­ing or­dered around by a mo­ron.

Some peo­ple have mys­ti­cal ex­pe­riences and see God. There in Las Ve­gas, I saw Moloch.

(Moloch, whose mind is pure ma­chin­ery! Moloch, whose blood is run­ning money!

Moloch whose soul is elec­tric­ity and banks! Moloch, whose skyscrap­ers stand in the long streets like end­less Je­ho­vahs!

Moloch! Moloch! Robot apartments! Invisible suburbs! Skeleton treasuries! Blind capitals! Demonic industries! Spectral nations!)

…granite cocks!

III.

The Apocrypha Dis­cor­dia says:

Time flows like a river. Which is to say, down­hill. We can tell this be­cause ev­ery­thing is go­ing down­hill rapidly. It would seem pru­dent to be some­where else when we reach the sea.

Let’s take this ran­dom gag 100% liter­ally and see where it leads us.

We just analo­gized the flow of in­cen­tives to the flow of a river. The down­hill tra­jec­tory is ap­pro­pri­ate: the traps hap­pen when you find an op­por­tu­nity to trade off a use­ful value for greater com­pet­i­tive­ness. Once ev­ery­one has it, the greater com­pet­i­tive­ness brings you no joy – but the value is lost for­ever. There­fore, each step of the Poor Co­or­di­na­tion Polka makes your life worse.

But not only have we not yet reached the sea, but we also seem to move up­hill sur­pris­ingly of­ten. Why do things not de­gen­er­ate more and more un­til we are back at sub­sis­tence level? I can think of three bad rea­sons – ex­cess re­sources, phys­i­cal limi­ta­tions, and util­ity max­i­miza­tion – plus one good rea­son – co­or­di­na­tion.

1. Ex­cess re­sources. The ocean depths are a hor­rible place with lit­tle light, few re­sources, and var­i­ous hor­rible or­ganisms ded­i­cated to eat­ing or par­a­sitiz­ing one an­other. But ev­ery so of­ten, a whale car­cass falls to the bot­tom of the sea. More food than the or­ganisms that find it could ever pos­si­bly want. There’s a brief pe­riod of mirac­u­lous plenty, while the cou­ple of crea­tures that first en­counter the whale feed like kings. Even­tu­ally more an­i­mals dis­cover the car­cass, the faster-breed­ing an­i­mals in the car­cass mul­ti­ply, the whale is grad­u­ally con­sumed, and ev­ery­one sighs and goes back to liv­ing in a Malthu­sian death-trap.

(Slate Star Codex: Your source for macabre whale metaphors since June 2014)

It’s as if a group of those rats who had aban­doned art and turned to can­ni­bal­ism sud­denly was blown away to a new empty is­land with a much higher car­ry­ing ca­pac­ity, where they would once again have the breath­ing room to live in peace and cre­ate artis­tic mas­ter­pieces.

This is an age of whale­fall, an age of ex­cess car­ry­ing ca­pac­ity, an age when we sud­denly find our­selves with a thou­sand-mile head start on Malthus. As Han­son puts it, this is the dream time.

As long as re­sources aren’t scarce enough to lock us in a war of all against all, we can do silly non-op­ti­mal things – like art and mu­sic and philos­o­phy and love – and not be out­com­peted by mer­ciless kil­ling ma­chines most of the time.

2. Phys­i­cal limi­ta­tions. Imag­ine a profit-max­i­miz­ing slave­mas­ter who de­cided to cut costs by not feed­ing his slaves or let­ting them sleep. He would soon find that his slaves’ pro­duc­tivity dropped off dras­ti­cally, and that no amount of whip­ping them could re­store it. Even­tu­ally af­ter test­ing nu­mer­ous strate­gies, he might find his slaves got the most work done when they were well-fed and well-rested and had at least a lit­tle bit of time to re­lax. Not be­cause the slaves were vol­un­tar­ily with­hold­ing their la­bor – we as­sume the fear of pun­ish­ment is enough to make them work as hard as they can – but be­cause the body has cer­tain phys­i­cal limi­ta­tions that limit how mean you can get away with be­ing. Thus, the “race to the bot­tom” stops some­where short of the ac­tual eth­i­cal bot­tom, when the phys­i­cal limits are run into.

John Moes, a his­to­rian of slav­ery, goes fur­ther and writes about how the slav­ery we are most fa­mil­iar with – that of the an­te­bel­lum South – is a his­tor­i­cal aber­ra­tion and prob­a­bly eco­nom­i­cally in­effi­cient. In most past forms of slav­ery – es­pe­cially those of the an­cient world – it was com­mon for slaves to be paid wages, treated well, and of­ten given their free­dom.

He ar­gues that this was the re­sult of ra­tio­nal eco­nomic calcu­la­tion. You can in­cen­tivize slaves through the car­rot or the stick, and the stick isn’t very good. You can’t watch slaves all the time, and it’s re­ally hard to tell whether a slave is slack­ing off or not (or even whether, given a lit­tle more whip­ping, he might be able to work even harder). If you want your slaves to do any­thing more com­pli­cated than pick cot­ton, you run into some se­ri­ous mon­i­tor­ing prob­lems – how do you profit from an en­slaved philoso­pher? Whip him re­ally hard un­til he elu­ci­dates a the­ory of The Good that you can sell books about?

The ancient solution to the problem – perhaps an early inspiration to Fnargl – was to tell the slave to go do whatever he wanted and found most profitable, then split the profits with him. Sometimes the slave would work a job at your workshop and you would pay him wages based on how well he did. Other times the slave would go off and make his way in the world and send you some of what he earned. Still other times, you would set a price for the slave’s freedom, and the slave would go and work and eventually come up with the money and free himself.

Moes goes even fur­ther and says that these sys­tems were so prof­itable that there were con­stant smoulder­ing at­tempts to try this sort of thing in the Amer­i­can South. The rea­son they stuck with the whips-and-chains method owed less to eco­nomic con­sid­er­a­tions and more to racist gov­ern­ment offi­cials crack­ing down on lu­cra­tive but not-ex­actly-white-supremacy-pro­mot­ing at­tempts to free slaves and have them go into busi­ness.

So in this case, a race to the bot­tom where com­pet­ing plan­ta­tions be­come cru­eler and cru­eler to their slaves in or­der to max­i­mize com­pet­i­tive­ness is halted by the phys­i­cal limi­ta­tion of cru­elty not helping af­ter a cer­tain point.

Or to give another example, one of the reasons we’re not in a Malthusian population explosion right now is that women can only have one baby per nine months. If those weird religious sects that demand their members have as many babies as possible could copy-paste themselves, we would be in really bad shape. As it is they can only do a small amount of damage per generation.

3. Utility max­i­miza­tion. We’ve been think­ing in terms of pre­serv­ing val­ues ver­sus win­ning com­pe­ti­tions, and ex­pect­ing op­ti­miz­ing for the lat­ter to de­stroy the former.

But many of the most im­por­tant com­pe­ti­tions /​ op­ti­miza­tion pro­cesses in mod­ern civ­i­liza­tion are op­ti­miz­ing for hu­man val­ues. You win at cap­i­tal­ism partly by satis­fy­ing cus­tomers’ val­ues. You win at democ­racy partly by satis­fy­ing vot­ers’ val­ues.

Suppose there’s a coffee plantation somewhere in Ethiopia that employs Ethiopians to grow coffee beans that get sold to the United States. Maybe it’s locked in a life-and-death struggle with other coffee plantations and wants to throw as many values under the bus as it can to pick up a slight advantage.

But it can’t sac­ri­fice qual­ity of coffee pro­duced too much, or else the Amer­i­cans won’t buy it. And it can’t sac­ri­fice wages or work­ing con­di­tions too much, or else the Ethiopi­ans won’t work there. And in fact, part of its com­pe­ti­tion-op­ti­miza­tion pro­cess is find­ing the best ways to at­tract work­ers and cus­tomers that it can, as long as it doesn’t cost them too much money. So this is very promis­ing.

But it’s im­por­tant to re­mem­ber ex­actly how frag­ile this benefi­cial equil­ibrium is.

Sup­pose the coffee plan­ta­tions dis­cover a toxic pes­ti­cide that will in­crease their yield but make their cus­tomers sick. But their cus­tomers don’t know about the pes­ti­cide, and the gov­ern­ment hasn’t caught up to reg­u­lat­ing it yet. Now there’s a tiny un­cou­pling be­tween “sel­l­ing to Amer­i­cans” and “satis­fy­ing Amer­i­cans’ val­ues”, and so of course Amer­i­cans’ val­ues get thrown un­der the bus.

Or sup­pose that there’s a baby boom in Ethiopia and sud­denly there are five work­ers com­pet­ing for each job. Now the com­pany can af­ford to lower wages and im­ple­ment cruel work­ing con­di­tions down to what­ever the phys­i­cal limits are. As soon as there’s an un­cou­pling be­tween “get­ting Ethiopi­ans to work here” and “satis­fy­ing Ethiopian val­ues”, it doesn’t look too good for Ethiopian val­ues ei­ther.

Or sup­pose some­one in­vents a robot that can pick coffee bet­ter and cheaper than a hu­man. The com­pany fires all its la­bor­ers and throws them onto the street to die. As soon as the util­ity of the Ethiopi­ans is no longer nec­es­sary for profit, all pres­sure to main­tain it dis­ap­pears.

Or suppose that there is some important value that is neither a value of the employees nor of the customers. Maybe the coffee plantations are on the habitat of a rare tropical bird that environmentalist groups want to protect. Maybe they’re on the ancestral burial ground of a tribe different from the one the plantation is employing, and they want it respected in some way. Maybe coffee growing contributes to global warming somehow. As long as it’s not a value that will prevent the average American from buying from them or the average Ethiopian from working for them, under the bus it goes.

I know that “cap­i­tal­ists some­times do bad things” isn’t ex­actly an origi­nal talk­ing point. But I do want to stress how it’s not equiv­a­lent to “cap­i­tal­ists are greedy”. I mean, some­times they are greedy. But other times they’re just in a suffi­ciently in­tense com­pe­ti­tion where any­one who doesn’t do it will be out­com­peted and re­placed by peo­ple who do. Busi­ness prac­tices are set by Moloch, no one else has any choice in the mat­ter.

(from my very lit­tle knowl­edge of Marx, he un­der­stands this very very well and peo­ple who sum­ma­rize him as “cap­i­tal­ists are greedy” are do­ing him a dis­ser­vice)

And as well un­der­stood as the cap­i­tal­ist ex­am­ple is, I think it is less well ap­pre­ci­ated that democ­racy has the same prob­lems. Yes, in the­ory it’s op­ti­miz­ing for voter hap­piness which cor­re­lates with good poli­cy­mak­ing. But as soon as there’s the slight­est dis­con­nect be­tween good poli­cy­mak­ing and electabil­ity, good poli­cy­mak­ing has to get thrown un­der the bus.

For example, ever-increasing prison terms are unfair to inmates and unfair to the society that has to pay for them. Politicians are unwilling to do anything about them because they don’t want to look “soft on crime”, and if a single inmate whom they helped release ever does anything bad (and statistically one of them will have to) it will be all over the airwaves as “Convict released by Congressman’s policies kills family of five, how can the Congressman even sleep at night let alone claim he deserves reelection?”. So even if decreasing prison populations would be good policy – and it is – it will be very difficult to implement.

(Moloch the in­com­pre­hen­si­ble prison! Moloch the cross­bone soul­less jailhouse and Congress of sor­rows! Moloch whose build­ings are judg­ment! Moloch the stunned gov­ern­ments!)

Turn­ing “satis­fy­ing cus­tomers” and “satis­fy­ing cit­i­zens” into the out­puts of op­ti­miza­tion pro­cesses was one of civ­i­liza­tion’s great­est ad­vances and the rea­son why cap­i­tal­ist democ­ra­cies have so out­performed other sys­tems. But if we have bound Moloch as our ser­vant, the bonds are not very strong, and we some­times find that the tasks he has done for us move to his ad­van­tage rather than ours.

4. Co­or­di­na­tion.

The op­po­site of a trap is a gar­den.

Things are easy to solve from a god’s-eye-view, so if ev­ery­one comes to­gether into a su­per­or­ganism, that su­per­or­ganism can solve prob­lems with ease and fi­nesse. An in­tense com­pe­ti­tion be­tween agents has turned into a gar­den, with a sin­gle gar­dener dic­tat­ing where ev­ery­thing should go and re­mov­ing el­e­ments that do not con­form to the pat­tern.

As I pointed out in the Non-Libertarian FAQ, government can easily solve the pollution problem with fish farms. The best known solution to the Prisoners’ Dilemma is for the mob boss (playing the role of a governor) to threaten to shoot any prisoner who defects. The solution to companies polluting and harming workers is government regulations against such. Governments solve arms races within a country by maintaining a monopoly on the use of force, and it’s easy to see that if a truly effective world government ever arose, international military buildups would end pretty quickly.

The two active ingredients of government are laws plus violence – or more abstractly, agreements plus an enforcement mechanism. Many other things besides governments share these two active ingredients and so are able to act as coordination mechanisms to avoid traps.

For ex­am­ple, since stu­dents are com­pet­ing against each other (di­rectly if classes are graded on a curve, but always in­di­rectly for col­lege ad­mis­sions, jobs, et cetera) there is in­tense pres­sure for in­di­vi­d­ual stu­dents to cheat. The teacher and school play the role of a gov­ern­ment by hav­ing rules (for ex­am­ple, against cheat­ing) and the abil­ity to pun­ish stu­dents who break them.

But the emer­gent so­cial struc­ture of the stu­dents them­selves is also a sort of gov­ern­ment. If stu­dents shun and dis­trust cheaters, then there are rules (don’t cheat) and an en­force­ment mechanism (or else we will shun you).

Social codes, gentlemen’s agreements, industrial guilds, criminal organizations, traditions, friendships, schools, corporations, and religions are all coordinating institutions that keep us out of traps by changing our incentives.

But these in­sti­tu­tions not only in­cen­tivize oth­ers, but are in­cen­tivized them­selves. Th­ese are large or­ga­ni­za­tions made of lots of peo­ple who are com­pet­ing for jobs, sta­tus, pres­tige, et cetera – there’s no rea­son they should be im­mune to the same mul­ti­po­lar traps as ev­ery­one else, and in­deed they aren’t. Govern­ments can in the­ory keep cor­po­ra­tions, cit­i­zens, et cetera out of cer­tain traps, but as we saw above there are many traps that gov­ern­ments them­selves can fall into.

The United States tries to solve the problem by having multiple levels of government, unbreakable constitutional laws, checks and balances between different branches, and a couple of other hacks.

Saudi Ara­bia uses a differ­ent tac­tic. They just put one guy in charge of ev­ery­thing.

This is the much-maligned – I think unfairly – argument in favor of monarchy. A monarch is an unincentivized incentivizer. He actually has the god’s-eye-view and is outside of and above every system. He has permanently won all competitions and is not competing for anything, and therefore he is perfectly free of Moloch and of the incentives that would otherwise channel his actions into predetermined paths. Aside from a few very theoretical proposals like my Shining Garden, monarchy is the only system that does this.

But then in­stead of fol­low­ing a ran­dom in­cen­tive struc­ture, we’re fol­low­ing the whim of one guy. Cae­sar’s Palace Ho­tel and Cas­ino is a crazy waste of re­sources, but the ac­tual Gaius Julius Cae­sar Au­gus­tus Ger­man­i­cus wasn’t ex­actly the perfect benev­olent ra­tio­nal cen­tral plan­ner ei­ther.

The liber­tar­ian-au­thor­i­tar­ian axis on the Poli­ti­cal Com­pass is a trade­off be­tween dis­co­or­di­na­tion and tyranny. You can have ev­ery­thing perfectly co­or­di­nated by some­one with a god’s-eye-view – but then you risk Stalin. And you can be to­tally free of all cen­tral au­thor­ity – but then you’re stuck in ev­ery stupid mul­ti­po­lar trap Moloch can de­vise.

The liber­tar­i­ans make a con­vinc­ing ar­gu­ment for the one side, and the monar­chists for the other, but I ex­pect that like most trade­offs we just have to hold our noses and ad­mit it’s a re­ally hard prob­lem.

IV.

Let’s go back to that Apocrypha Dis­cor­dia quote:

Time flows like a river. Which is to say, down­hill. We can tell this be­cause ev­ery­thing is go­ing down­hill rapidly. It would seem pru­dent to be some­where else when we reach the sea.

What would it mean, in this situ­a­tion, to reach the sea?

Mul­tipo­lar traps – races to the bot­tom – threaten to de­stroy all hu­man val­ues. They are cur­rently re­strained by phys­i­cal limi­ta­tions, ex­cess re­sources, util­ity max­i­miza­tion, and co­or­di­na­tion.

The di­men­sion along which this metaphor­i­cal river flows must be time, and the most im­por­tant change in hu­man civ­i­liza­tion over time is the change in tech­nol­ogy. So the rele­vant ques­tion is how tech­nolog­i­cal changes will af­fect our ten­dency to fall into mul­ti­po­lar traps.

I de­scribed traps as when:

…in some com­pe­ti­tion op­ti­miz­ing for X, the op­por­tu­nity arises to throw some other value un­der the bus for im­proved X. Those who take it pros­per. Those who don’t take it die out. Even­tu­ally, ev­ery­one’s rel­a­tive sta­tus is about the same as be­fore, but ev­ery­one’s ab­solute sta­tus is worse than be­fore. The pro­cess con­tinues un­til all other val­ues that can be traded off have been – in other words, un­til hu­man in­ge­nu­ity can­not pos­si­bly figure out a way to make things any worse.

That “the op­por­tu­nity arises” phrase is look­ing pretty sinister. Tech­nol­ogy is all about cre­at­ing new op­por­tu­ni­ties.

Develop a new robot, and sud­denly coffee plan­ta­tions have “the op­por­tu­nity” to au­to­mate their har­vest and fire all the Ethiopian work­ers. Develop nu­clear weapons, and sud­denly coun­tries are stuck in an arms race to have enough of them. Pol­lut­ing the at­mo­sphere to build prod­ucts quicker wasn’t a prob­lem be­fore they in­vented the steam en­g­ine.

The limit of mul­ti­po­lar traps as tech­nol­ogy ap­proaches in­finity is “very bad”.

Mul­tipo­lar traps are cur­rently re­strained by phys­i­cal limi­ta­tions, ex­cess re­sources, util­ity max­i­miza­tion, and co­or­di­na­tion.

Phys­i­cal limi­ta­tions are most ob­vi­ously con­quered by in­creas­ing tech­nol­ogy. The slave­mas­ter’s old co­nun­drum – that slaves need to eat and sleep – suc­cumbs to Soylent and modafinil. The prob­lem of slaves run­ning away suc­cumbs to GPS. The prob­lem of slaves be­ing too stressed to do good work suc­cumbs to Val­ium. None of these things are very good for the slaves.

(or just in­vent a robot that doesn’t need food or sleep at all. What hap­pens to the slaves af­ter that is bet­ter left un­said)

The other ex­am­ple of phys­i­cal limits was one baby per nine months, and this was un­der­stat­ing the case – it’s re­ally “one baby per nine months plus will­ing­ness to sup­port and take care of a ba­si­cally hel­pless and ex­tremely de­mand­ing hu­man be­ing for eigh­teen years”. This puts a damper on the en­thu­si­asm of even the most zeal­ous re­li­gious sect’s “go forth and mul­ti­ply” dic­tum.

But as Bostrom puts it in Su­per­in­tel­li­gence:

There are reasons, if we take a longer view and assume a state of unchanging technology and continued prosperity, to expect a return to the historically and ecologically normal condition of a world population that butts up against the limits of what our niche can support. If this seems counterintuitive in light of the negative relationship between wealth and fertility that we are currently observing on the global scale, we must remind ourselves that this modern age is a brief slice of history and very much an aberration. Human behavior has not yet adapted to contemporary conditions. Not only do we fail to take advantage of obvious ways to increase our inclusive fitness (such as by becoming sperm or egg donors) but we actively sabotage our fertility by using birth control. In the environment of evolutionary adaptedness, a healthy sex drive may have been enough to make an individual act in ways that maximized her reproductive potential; in the modern environment, however, there would be a huge selective advantage to having a more direct desire for being the biological parent to the largest possible number of children. Such a desire is currently being selected for, as are other traits that increase our propensity to reproduce. Cultural adaptation, however, might steal a march on biological evolution. Some communities, such as those of the Hutterites or the adherents of the Quiverfull evangelical movement, have natalist cultures that encourage large families, and they are consequently undergoing rapid expansion… This longer-term outlook could be telescoped into a more imminent prospect by the intelligence explosion. Since software is copyable, a population of emulations or AIs could double rapidly – over the course of minutes rather than decades or centuries – soon exhausting all available hardware.

As always when deal­ing with high-level tran­shu­man­ists, “all available hard­ware” should be taken to in­clude “the atoms that used to be part of your body”.

The idea of biolog­i­cal or cul­tural evolu­tion caus­ing a mass pop­u­la­tion ex­plo­sion is a philo­soph­i­cal toy at best. The idea of tech­nol­ogy mak­ing it pos­si­ble is both plau­si­ble and ter­rify­ing. Now we see that “phys­i­cal limits” segues very nat­u­rally into “ex­cess re­sources” – the abil­ity to cre­ate new agents very quickly means that un­less ev­ery­one can co­or­di­nate to ban do­ing this, the peo­ple who do will out­com­pete the peo­ple who don’t un­til they have reached car­ry­ing ca­pac­ity and ev­ery­one is stuck at sub­sis­tence level.

Ex­cess re­sources, which un­til now have been a gift of tech­nolog­i­cal progress, there­fore switch and be­come a ca­su­alty of it at a suffi­ciently high tech level.

Utility maximization, always on shaky ground, also faces new threats. In the face of continuing debate about this point, I continue to think it obvious that robots will push humans out of work or at least drive down wages (which, given the existence of a minimum wage, pushes humans out of work).

Once a robot can do ev­ery­thing an IQ 80 hu­man can do, only bet­ter and cheaper, there will be no rea­son to em­ploy IQ 80 hu­mans. Once a robot can do ev­ery­thing an IQ 120 hu­man can do, only bet­ter and cheaper, there will be no rea­son to em­ploy IQ 120 hu­mans. Once a robot can do ev­ery­thing an IQ 180 hu­man can do, only bet­ter and cheaper, there will be no rea­son to em­ploy hu­mans at all, in the un­likely sce­nario that there are any left by that point.

In the ear­lier stages of the pro­cess, cap­i­tal­ism be­comes more and more un­cou­pled from its pre­vi­ous job as an op­ti­mizer for hu­man val­ues. Now most hu­mans are to­tally locked out of the group whose val­ues cap­i­tal­ism op­ti­mizes for. They have no value to con­tribute as work­ers – and since in the ab­sence of a spec­tac­u­lar so­cial safety net it’s un­clear how they would have much money – they have no value as cus­tomers ei­ther. Cap­i­tal­ism has passed them by. As the seg­ment of hu­mans who can be out­com­peted by robots in­creases, cap­i­tal­ism passes by more and more peo­ple un­til even­tu­ally it locks out the hu­man race en­tirely, once again in the van­ish­ingly un­likely sce­nario that we are still around.

(there are some sce­nar­ios in which a few cap­i­tal­ists who own the robots may benefit here, but in ei­ther case the vast ma­jor­ity are out of luck)

Democracy is less obviously vulnerable, but it might be worth going back to Bostrom’s paragraph about the Quiverfull movement. These are some really religious Christians who think that God wants them to have as many kids as possible, and who can end up with families of ten or more. Their articles explicitly calculate that if they start at two percent of the population, but have on average eight children per generation when everyone else on average only has two, within three generations they’ll make up half the population.
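
Their arithmetic more or less checks out, at least as a toy projection that ignores retention, intermarriage, and differing generation lengths:

    # Back-of-the-envelope check of the claim above: start at 2% of the population,
    # quadruple every generation (8 children per couple) while everyone else merely
    # replaces itself (2 children per couple).
    quiverfull, everyone_else = 2.0, 98.0      # population shares at generation 0

    for generation in range(1, 4):
        quiverfull *= 4                        # 8 children per couple = 4 per person
        share = quiverfull / (quiverfull + everyone_else)
        print(f"generation {generation}: {share:.0%} of the population")

    # generation 1:  8%
    # generation 2: 25%
    # generation 3: 57% – just past the "half the population in three generations" mark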

It’s a clever strat­egy, but I can think of one thing that will save us: judg­ing by how many ex-Quiv­er­full blogs I found when search­ing for those statis­tics, their re­ten­tion rates even within a sin­gle gen­er­a­tion are pretty grim. Their ar­ti­cle ad­mits that 80% of very re­li­gious chil­dren leave the church as adults (al­though of course they ex­pect their own move­ment to do bet­ter). And this is not a sym­met­ri­cal pro­cess – 80% of chil­dren who grow up in athe­ist fam­i­lies aren’t be­com­ing Quiv­er­full.

It looks a lot like even though they are out­breed­ing us, we are out­meme-ing them, and that gives us a de­ci­sive ad­van­tage.

But we should also be kind of scared of this pro­cess. Memes op­ti­mize for mak­ing peo­ple want to ac­cept them and pass them on – so like cap­i­tal­ism and democ­racy, they’re op­ti­miz­ing for a proxy of mak­ing us happy, but that proxy can eas­ily get un­cou­pled from the origi­nal goal.

Chain let­ters, ur­ban leg­ends, pro­pa­ganda, and viral mar­ket­ing are all ex­am­ples of memes that don’t satisfy our ex­plicit val­ues (true and use­ful) but are suffi­ciently memet­i­cally viru­lent that they spread any­way.

I hope it’s not too controversial here to say the same thing is true of religion. Religions, at their heart, are the most basic form of memetic replicator – “Believe this statement and repeat it to everyone you meet, or else you will be eternally tortured”.

The creationism “debate” and global warming “debate” and a host of similar “debates” in today’s society suggest that memes that can propagate independent of their truth value have a pretty strong influence on the political process. Maybe these memes propagate because they appeal to people’s prejudices, maybe because they’re simple, maybe because they effectively mark an in-group and an out-group, or maybe for all sorts of different reasons.

The point is – imagine a country full of bioweapon labs, where people toil day and night to invent new infectious agents. The existence of these labs, and their right to throw whatever they develop into the water supply, is protected by law. And the country is also linked by the world’s most perfect mass transit system that every single person uses every day, so that any new pathogen can spread to the entire country instantaneously. You’d expect things to start going bad for that country pretty quickly.

Well, we have about a zillion think tanks re­search­ing new and bet­ter forms of pro­pa­ganda. And we have con­sti­tu­tion­ally pro­tected free­dom of speech. And we have the In­ter­net. So we’re kind of screwed.

(Moloch whose name is the Mind!)

There are a few people working on raising the sanity waterline, but not as many people as are working on new and exciting ways of confusing and converting people, cataloging and exploiting every single bias and heuristic and dirty rhetorical trick.

So as technology (which I take to include knowledge of psychology, sociology, public relations, etc.) tends to infinity, the power of truthiness relative to truth increases, and things don’t look great for real grassroots democracy. The worst-case scenario is that the ruling party learns to produce infinite charisma on demand. If that doesn’t sound so bad to you, remember what Hitler was able to do with a famously high level of charisma that was still less than infinite.

(al­ter­nate phras­ing for Chom­sky­ites: tech­nol­ogy in­creases the effi­ciency of man­u­fac­tur­ing con­sent in the same way it in­creases the effi­ciency of man­u­fac­tur­ing ev­ery­thing else)

Co­or­di­na­tion is what’s left. And tech­nol­ogy has the po­ten­tial to se­ri­ously im­prove co­or­di­na­tion efforts. Peo­ple can use the In­ter­net to get in touch with one an­other, launch poli­ti­cal move­ments, and frac­ture off into sub­com­mu­ni­ties.

But co­or­di­na­tion only works when you have 51% or more of the force on the side of the peo­ple do­ing the co­or­di­nat­ing, and when you haven’t come up with some brilli­ant trick to make co­or­di­na­tion im­pos­si­ble.

The sec­ond one first. In the links post be­fore last, I wrote:

The latest development in the brave new post-Bitcoin world is crypto-equity. At this point I’ve gone from wanting to praise these inventors as bold libertarian heroes to wanting to drag them in front of a blackboard and make them write a hundred times “I WILL NOT CALL UP THAT WHICH I CANNOT PUT DOWN”

A cou­ple peo­ple asked me what I meant, and I didn’t have the back­ground then to ex­plain. Well, this post is the back­ground. Peo­ple are us­ing the con­tin­gent stu­pidity of our cur­rent gov­ern­ment to re­place lots of hu­man in­ter­ac­tion with mechanisms that can­not be co­or­di­nated even in prin­ci­ple. I to­tally un­der­stand why all these things are good right now when most of what our gov­ern­ment does is stupid and un­nec­es­sary. But there is go­ing to come a time when – af­ter one too many bioweapon or nan­otech or nu­clear in­ci­dents – we, as a civ­i­liza­tion, are go­ing to wish we hadn’t es­tab­lished un­trace­able and un­stop­pable ways of sel­l­ing prod­ucts.

And if we ever get real live su­per­in­tel­li­gence, pretty much by defi­ni­tion it is go­ing to have >51% of the power and all at­tempts at “co­or­di­na­tion” with it will be use­less.

So I agree with Robin Hanson: This is the dream time. This is a rare confluence of circumstances where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish.

As technological advance increases, the rare confluence will come to an end. New opportunities to throw values under the bus for increased competitiveness will arise. New ways of copying agents to increase the population will soak up our excess resources and resurrect Malthus’ unquiet spirit. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

Ab­sent an ex­traor­di­nary effort to di­vert it, the river reaches the sea in one of two places.

It can end in Eliezer Yud­kowsky’s night­mare of a su­per­in­tel­li­gence op­ti­miz­ing for some ran­dom thing (clas­si­cally pa­per clips) be­cause we weren’t smart enough to chan­nel its op­ti­miza­tion efforts the right way. This is the ul­ti­mate trap, the trap that catches the uni­verse. Every­thing ex­cept the one thing be­ing max­i­mized is de­stroyed ut­terly in pur­suit of the sin­gle goal, in­clud­ing all the silly hu­man val­ues.

Or it can end in Robin Han­son’s night­mare (he doesn’t call it a night­mare, but I think he’s wrong) of a com­pe­ti­tion be­tween em­u­lated hu­mans that can copy them­selves and edit their own source code as de­sired. Their to­tal self-con­trol can wipe out even the de­sire for hu­man val­ues in their all-con­sum­ing con­test. What hap­pens to art, philos­o­phy, sci­ence, and love in such a world? Zack Davis puts it with char­ac­ter­is­tic ge­nius:

I am a con­tract-draft­ing em,
The loyalest of lawyers!
I draw up terms for deals ’twixt firms
To ser­vice my em­ploy­ers!

But in be­tween these lines I write
Of the ac­counts re­ceiv­able,
I’m stuck by an un­canny fright;
The world seems un­be­liev­able!

How did it all come to be,
That there should be such ems as me?
Whence these deals and whence these firms
And whence the whole econ­omy?

I am a man­age­rial em;
I mon­i­tor your thoughts.
Your ques­tions must have an­swers,
But you’ll com­pre­hend them not.
We do not give you server space
To ask such things; it’s not a perk,
So cease these idle ques­tion­ings,
And please get back to work.

Of course, that’s right, there is no junc­tion
At which I ought de­part my func­tion,
But per­haps if what I asked, I knew,
I’d do a bet­ter job for you?

To ask of such for­bid­den sci­ence
Is gravest sign of non­com­pli­ance.
In­tru­sive thoughts may some­times barge in,
But to in­dulge them hurts the profit mar­gin.
I do not know our ori­gins,
So that info I can not get you,
But ask­ing for as much is sin,
And just for that, I must re­set you.

But—

Noth­ing per­sonal.

I am a con­tract-draft­ing em,
The loyalest of lawyers!
I draw up terms for deals ’twixt firms
To ser­vice my em­ploy­ers!

When ob­so­les­cence shall this gen­er­a­tion waste,
The mar­ket shall re­main, in midst of other woe
Than ours, a God to man, to whom it sayest:
“Money is time, time money – that is all
Ye know on earth, and all ye need to know.”

But even af­ter we have thrown away sci­ence, art, love, and philos­o­phy, there’s still one thing left to lose, one fi­nal sac­ri­fice Moloch might de­mand of us. Bostrom again:

It is conceivable that optimal efficiency would be attained by grouping capabilities in aggregates that roughly match the cognitive architecture of a human mind…But in the absence of any compelling reason for being confident that this is so, we must countenance the possibility that human-like cognitive architectures are optimal only within the constraints of human neurology (or not at all). When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem.

We could thus imag­ine, as an ex­treme case, a tech­nolog­i­cally highly ad­vanced so­ciety, con­tain­ing many com­plex struc­tures, some of them far more in­tri­cate and in­tel­li­gent than any­thing that ex­ists on the planet to­day – a so­ciety which nev­er­the­less lacks any type of be­ing that is con­scious or whose welfare has moral sig­nifi­cance. In a sense, this would be an un­in­hab­ited so­ciety. It would be a so­ciety of eco­nomic mir­a­cles and tech­nolog­i­cal awe­some­ness, with no­body there to benefit. A Dis­ney­land with no chil­dren.

The last value we have to sac­ri­fice is be­ing any­thing at all, hav­ing the lights on in­side. With suffi­cient tech­nol­ogy we will be “able” to give up even the fi­nal spark.

(Moloch whose eyes are a thou­sand blind win­dows!)

Every­thing the hu­man race has worked for – all of our tech­nol­ogy, all of our civ­i­liza­tion, all the hopes we in­vested in our fu­ture – might be ac­ci­den­tally handed over to some kind of un­fath­omable blind idiot alien god that dis­cards all of them, and con­scious­ness it­self, in or­der to par­ti­ci­pate in some weird fun­da­men­tal-level mass-en­ergy econ­omy that leads to it dis­assem­bling Earth and ev­ery­thing on it for its com­po­nent atoms.

(Moloch whose fate is a cloud of sexless hy­dro­gen!)

Bostrom re­al­izes that some peo­ple fetishize in­tel­li­gence, that they are root­ing for that blind alien god as some sort of higher form of life that ought to crush us for its own “higher good” the way we crush ants. He ar­gues (Su­per­in­tel­li­gence, p. 219):

The sac­ri­fice looks even less ap­peal­ing when we re­flect that the su­per­in­tel­li­gence could re­al­ize a nearly-as-great good (in frac­tional terms) while sac­ri­fic­ing much less of our own po­ten­tial well-be­ing. Sup­pose that we agreed to al­low al­most the en­tire ac­cessible uni­verse to be con­verted into he­do­nium – ev­ery­thing ex­cept a small pre­serve, say the Milky Way, which would be set aside to ac­com­mo­date our own needs. Then there would still be a hun­dred billion galax­ies ded­i­cated to the max­i­miza­tion of [the su­per­in­tel­li­gence’s own val­ues]. But we would have one galaxy within which to cre­ate won­der­ful civ­i­liza­tions that could last for billions of years and in which hu­mans and non­hu­man an­i­mals could sur­vive and thrive, and have the op­por­tu­nity to de­velop into be­at­ific posthu­man spirits.

Re­mem­ber: Moloch can’t agree even to this 99.99999% vic­tory. Rats rac­ing to pop­u­late an is­land don’t leave a lit­tle aside as a pre­serve where the few rats who live there can live happy lives pro­duc­ing art­work. Cancer cells don’t agree to leave the lungs alone be­cause they re­al­ize it’s im­por­tant for the body to get oxy­gen. Com­pe­ti­tion and op­ti­miza­tion are blind idiotic pro­cesses and they fully in­tend to deny us even one lousy galaxy.
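As an aside, Bostrom’s bargain is even more lopsided than “99.99999%” suggests – here is a quick back-of-the-envelope check, using his own round figure of a hundred billion galaxies:

    # One galaxy kept out of roughly a hundred billion accessible ones
    # (Bostrom's round figure); the rest goes to the superintelligence.
    galaxies_total = 100e9
    galaxies_kept = 1
    ceded = 1 - galaxies_kept / galaxies_total
    print(f"{ceded:.9%}")    # 99.999999999% – and Moloch still won't take the deal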

They broke their backs lift­ing Moloch to Heaven! Pave­ments, trees, ra­dios, tons! lift­ing the city to Heaven which ex­ists and is ev­ery­where about us!

We will break our back lift­ing Moloch to Heaven, but un­less some­thing changes it will be his vic­tory and not ours.

V.

“Gnon” is Nick Land’s shorthand for “Nature And Nature’s God”, except the A is changed to an O and the whole thing is reversed, because Nick Land reacts to comprehensibility the same way vampires react to sunlight.

Land argues that humans should be more Gnon-conformist (pun Gnon-intentional). He says we do all these stupid things like diverting useful resources to feed those who could never survive on their own, supporting the poor in ways that encourage dysgenic reproduction, or allowing cultural degeneration to undermine the state. This means our society is denying natural law, basically listening to Nature say things like “this cause has this effect” and putting our fingers in our ears and saying “NO IT DOESN’T”. Civilizations that do this too much tend to decline and fall, which is Gnon’s fair and dispassionately-applied punishment for violating His laws.

He iden­ti­fies Gnon with Ki­pling’s Gods of the Copy­book Head­ings.

Th­ese are of course the proverbs from Ki­pling’s epony­mous poem – max­ims like “If you don’t work, you die” and “The wages of sin is Death”. If you have some­how not yet read it, I pre­dict you will find it delight­ful re­gard­less of what you think of its poli­tics.

I no­tice that it takes only a slight ir­reg­u­lar­ity in the ab­bre­vi­a­tion of “head­ings” – far less ir­reg­u­lar­ity than it takes to turn “Na­ture and Na­ture’s God” into “Gnon” – for the proper acronym of “Gods of the Copy­book Head­ings” to be “GotCHa”.

I find this ap­pro­pri­ate.

“If you don’t work, you die.” Gotcha! If you do work, you also die! Every­one dies, un­pre­dictably, at a time not of their own choos­ing, and all the virtue in the world does not save you.

“The wages of sin is Death.” Gotcha! The wages of ev­ery­thing is Death! This is a Com­mu­nist uni­verse, the amount you work makes no differ­ence to your even­tual re­ward. From each ac­cord­ing to his abil­ity, to each Death.

“Stick to the Devil you know.” Gotcha! The Devil you know is Satan! And if he gets his hand on your soul you ei­ther die the true death, or get eter­nally tor­tured for­ever, or some­how both at once.

Since we’re start­ing to get into Love­craf­tian mon­sters, let me bring up one of Love­craft’s less known short sto­ries, The Other Gods.

It’s only a cou­ple of pages, but if you ab­solutely re­fuse to read it – the gods of Earth are rel­a­tively young as far as deities go. A very strong priest or ma­gi­cian can oc­ca­sion­ally out­smart and over­power them – so Barzai the Wise de­cides to climb their sa­cred moun­tain and join in their fes­ti­vals, whether they want him to or not.

But beyond the seemingly tractable gods of Earth lie the Outer Gods, the terrible omnipotent beings of incarnate cosmic chaos. As soon as Barzai joins in the festival, the Outer Gods show up and pull him screaming into the abyss.

As sto­ries go, it lacks things like plot or char­ac­ter­i­za­tion or set­ting or point. But for some rea­son it stuck with me.

And iden­ti­fy­ing the Gods Of The Copy­book Head­ings with Na­ture seems to me the same mag­ni­tude of mis­take as iden­ti­fy­ing the gods of Earth with the Outer Gods. And likely to end about the same way: Gotcha!

You break your back lift­ing Moloch to Heaven, and then Moloch turns on you and gob­bles you up.

More Love­craft: the In­ter­net pop­u­lariza­tion of the Cthulhu Cult claims that if you help free Cthulhu from his wa­tery grave, he will re­ward you by eat­ing you first, thus spar­ing you the hor­ror of see­ing ev­ery­one else eaten. This is a mis­rep­re­sen­ta­tion of the origi­nal text. In the origi­nal, his cultists re­ceive no re­ward for free­ing him from his wa­tery prison, not even the re­ward of be­ing kil­led in a slightly less painful man­ner.

On the mar­gin, com­pli­ance with the Gods of the Copy­book Head­ings, Gnon, Cthulhu, what­ever, may buy you slightly more time than the next guy. But then again, it might not. And in the long run, we’re all dead and our civ­i­liza­tion has been de­stroyed by un­speak­able alien mon­sters.

At some point, some­body has to say “You know, maybe free­ing Cthulhu from his wa­tery prison is a bad idea. Maybe we should not do that.”

That per­son will not be Nick Land. He is to­tally one hun­dred per­cent in fa­vor of free­ing Cthulhu from his wa­tery prison and ex­tremely an­noyed that it is not hap­pen­ing fast enough. I have such mixed feel­ings about Nick Land. On the grail quest for the True Fu­tur­ol­ogy, he has gone 99.9% of the path and then missed the very last turn, the one marked ORTHOGONALITY THESIS.

But the thing about grail quests is – if you make a wrong turn two blocks away from your house, you end up at the cor­ner store feel­ing mildly em­bar­rassed. If you do al­most ev­ery­thing right and then miss the very last turn, you end up be­ing eaten by the leg­endary Black Beast of Aaargh whose ichorous stom­ach acid erodes your very soul into gib­ber­ing frag­ments.

As far as I can tell from read­ing his blog, Nick Land is the guy in that ter­rify­ing bor­der re­gion where he is smart enough to figure out sev­eral im­por­tant ar­cane prin­ci­ples about sum­mon­ing de­mon gods, but not quite smart enough to figure out the most im­por­tant such prin­ci­ple, which is NEVER DO THAT.

VI.

Warg Fran­klin an­a­lyzes the same situ­a­tion and does a lit­tle bet­ter. He names “the Four Horse­men of Gnon” – cap­i­tal­ism, war, evolu­tion, and memet­ics – the same pro­cesses I talked about above.

From Cap­tur­ing Gnon:

Each com­po­nent of Gnon de­tailed above had and has a strong hand in cre­at­ing us, our ideas, our wealth, and our dom­i­nance, and thus has been good in that re­spect, but we must re­mem­ber that [he] can and will turn on us when cir­cum­stances change. Evolu­tion be­comes dys­genic, fea­tures of the memetic land­scape pro­mote ever cra­zier in­san­ity, pro­duc­tivity turns to famine when we can no longer com­pete to af­ford our own ex­is­tence, and or­der turns to chaos and blood­shed when we ne­glect mar­tial strength or are over­pow­ered from out­side. Th­ese pro­cesses are not good or evil over­all; they are neu­tral, in the hor­ror­ist Love­craf­tian sense of the word […]

In­stead of the de­struc­tive free reign of evolu­tion and the sex­ual mar­ket, we would be bet­ter off with de­liber­ate and con­ser­va­tive pa­tri­archy and eu­gen­ics driven by the judge­ment of man within the con­straints set by Gnon. In­stead of a “mar­ket­place of ideas” that more re­sem­bles a fes­ter­ing petri-dish breed­ing su­per­bugs, a ra­tio­nal theoc­racy. In­stead of un­hinged techno-com­mer­cial ex­ploita­tion or naive ne­glect of eco­nomics, a care­ful bot­tling of the pro­duc­tive eco­nomic dy­namic and plan­ning for a con­trol­led techno-sin­gu­lar­ity. In­stead of poli­tics and chaos, a strong hi­er­ar­chi­cal or­der with mar­tial sovereignty. Th­ese things are not to be con­strued as com­plete pro­pos­als; we don’t re­ally know how to ac­com­plish any of this. They are bet­ter un­der­stood as goals to be worked to­wards. This post con­cerns it­self with the “what” and “why”, rather than the “how”.

This seems to me the strongest ar­gu­ment for au­thor­i­tar­i­anism. Mul­tipo­lar traps are likely to de­stroy us, so we should shift the tyranny-mul­ti­po­lar­ity trade­off to­wards a ra­tio­nally-planned gar­den, which re­quires cen­tral­ized monar­chi­cal au­thor­ity and strongly-bind­ing tra­di­tions.

But first, a brief digression into social evolution. Societies, like animals, evolve. The ones that survive spawn memetic descendants – for example, the success of Britain allowed it to spin off Canada, Australia, the US, et cetera. Thus, we expect societies that exist to be somewhat optimized for stability and prosperity. I think this is one of the strongest conservative arguments. Just as a random change to a letter in the human genome will probably be deleterious rather than beneficial, since humans are a complicated fine-tuned system whose genome has been pre-optimized for survival – so most changes to our cultural DNA will disrupt some institution that evolved to help Anglo-American (or whatever) society outcompete its real and hypothetical rivals.

The liberal counterargument to that is that evolution is a blind idiot alien god that optimizes for stupid things and has no concern with human value. Thus, the fact that some species of wasps paralyze caterpillars, lay their eggs inside them, and have their young devour the still-living paralyzed caterpillars from the inside doesn’t set off evolution’s moral sensor, because evolution doesn’t have a moral sensor because evolution doesn’t care.

Sup­pose that in fact pa­tri­archy is adap­tive to so­cieties be­cause it al­lows women to spend all their time bear­ing chil­dren who can then en­gage in pro­duc­tive eco­nomic ac­tivity and fight wars. The so­cial evolu­tion­ary pro­cesses that cause so­cieties to adopt pa­tri­archy still have ex­actly as lit­tle con­cern for its moral effects on women as the biolog­i­cal evolu­tion­ary pro­cesses that cause wasps to lay their eggs in cater­pillars.

Evolution doesn’t care. But we do care. There’s a tradeoff between Gnon-compliance – saying “Okay, the strongest possible society is a patriarchal one, we should implement patriarchy” – and our human values, like women who want to do something other than bear children.

Too far to one side of the trade­off, and we have un­sta­ble im­pov­er­ished so­cieties that die out for go­ing against nat­u­ral law. Too far to the other side, and we have lean mean fight­ing ma­chines that are mur­der­ous and mis­er­able. Think your lo­cal an­ar­chist com­mune ver­sus Sparta.

Fran­klin ac­knowl­edges the hu­man fac­tor:

And then there’s us. Man has his own telos, when he is al­lowed the se­cu­rity to act and the clar­ity to rea­son out the con­se­quences of his ac­tions. When un­af­flicted by co­or­di­na­tion prob­lems and un­threat­ened by su­pe­rior forces, able to act as a gar­dener rather than just an­other sub­ject of the law of the jun­gle, he tends to build and guide a won­der­ful world for him­self. He tends to fa­vor good things and avoid bad, to cre­ate se­cure civ­i­liza­tions with pol­ished side­walks, beau­tiful art, happy fam­i­lies, and glo­ri­ous ad­ven­tures. I will take it as a given that this telos is iden­ti­cal with “good” and “should”.

Thus we have our wild­card and the big ques­tion of fu­tur­ism. Will the fu­ture be ruled by the usual four horse­men of Gnon for a fu­ture of mean­ingless gleam­ing techno-progress burn­ing the cos­mos or a fu­ture of dys­genic, in­sane, hun­gry, and bloody dark ages; or will the telos of man pre­vail for a fu­ture of mean­ingful art, sci­ence, spiritu­al­ity, and great­ness?

Fran­klin con­tinues:

The pro­ject of civ­i­liza­tion [is] for man to grad­u­ate from the metaphor­i­cal sav­age, sub­ject to the law of the jun­gle, to the civ­i­lized gar­dener who, while the­o­ret­i­cally still sub­ject to the law of the jun­gle, is so dom­i­nant as to limit the use­ful­ness of that model.

This need not be done globally; we may only be able to carve out a small walled gar­den for our­selves, but make no mis­take, even if only lo­cally, the pro­ject of civ­i­liza­tion is to cap­ture Gnon.

I maybe agree with Warg here more than I have ever agreed with any­one else about any­thing. He says some­thing re­ally im­por­tant and he says it beau­tifully and there are so many words of praise I want to say for this post and for the thought pro­cesses be­hind it.

But what I am ac­tu­ally go­ing to say is…

Gotcha! You die any­way!

Sup­pose you make your walled gar­den. You keep out all of the dan­ger­ous memes, you sub­or­di­nate cap­i­tal­ism to hu­man in­ter­ests, you ban stupid bioweapons re­search, you definitely don’t re­search nan­otech­nol­ogy or strong AI.

Nobody outside does those things. And so the only question is whether you’ll be destroyed by foreign diseases, foreign memes, foreign armies, foreign economic competition, or foreign existential catastrophes.

As for­eign­ers com­pete with you – and there’s no wall high enough to block all com­pe­ti­tion – you have a cou­ple of choices. You can get out­com­peted and de­stroyed. You can join in the race to the bot­tom. Or you can in­vest more and more civ­i­liza­tional re­sources into build­ing your wall – what­ever that is in a non-metaphor­i­cal way – and pro­tect­ing your­self.

I can imag­ine ways that a “ra­tio­nal theoc­racy” and “con­ser­va­tive pa­tri­archy” might not be ter­rible to live un­der, given ex­actly the right con­di­tions. But you don’t get to choose ex­actly the right con­di­tions. You get to choose the ex­tremely con­strained set of con­di­tions that “cap­ture Gnon”. As out­side civ­i­liza­tions com­pete against you, your con­di­tions will be­come more and more con­strained.

Warg talks about try­ing to avoid “a fu­ture of mean­ingless gleam­ing techno-progress burn­ing the cos­mos”. Do you re­ally think your walled gar­den will be able to ride this out?

Hint: is it part of the cos­mos?

Yeah, you’re kind of screwed.

I want to cri­tique Warg. But I want to cri­tique him in the ex­act op­po­site di­rec­tion as the last cri­tique he re­ceived. In fact, the last cri­tique he re­ceived is so bad that I want to dis­cuss it at length so we can get the cor­rect cri­tique en­tirely by tak­ing its ex­act mir­ror image.

So here is Hur­lock’s On Cap­tur­ing Gnon And Naive Ra­tion­al­ism.

Hur­lock spouts only the most craven Gnon-con­for­mity. A few ex­cerpts:

In a re­cent piece [Warg Fran­klin] says that we should try to “cap­ture Gnon”, and some­how es­tab­lish con­trol over his forces, so that we can use them to our own ad­van­tage. Cap­tur­ing or cre­at­ing God is in­deed a clas­sic tran­shu­man­ist fetish, which is sim­ply an­other form of the old­est hu­man am­bi­tion ever, to rule the uni­verse.

Such naive ra­tio­nal­ism how­ever, is ex­tremely dan­ger­ous. The be­lief that it is hu­man Rea­son and de­liber­ate hu­man de­sign which cre­ates and main­tains civ­i­liza­tions was prob­a­bly the biggest mis­take of En­light­en­ment philos­o­phy…

It is the the­o­ries of Spon­ta­neous Order which stand in di­rect op­po­si­tion to the naive ra­tio­nal­ist view of hu­man­ity and civ­i­liza­tion. The con­sen­sus opinion re­gard­ing hu­man so­ciety and civ­i­liza­tion, of all rep­re­sen­ta­tives of this tra­di­tion is very pre­cisely sum­ma­rized by Adam Fer­gu­son’s con­clu­sion that “na­tions stum­ble upon [so­cial] es­tab­lish­ments, which are in­deed the re­sult of hu­man ac­tion, but not the ex­e­cu­tion of any hu­man de­sign”. Con­trary to the naive ra­tio­nal­ist view of civ­i­liza­tion as some­thing that can be and is a sub­ject to ex­plicit hu­man de­sign, the rep­re­sen­ta­tives of the tra­di­tion of Spon­ta­neous Order main­tain the view that hu­man civ­i­liza­tion and so­cial in­sti­tu­tions are the re­sult of a com­plex evolu­tion­ary pro­cess which is driven by hu­man in­ter­ac­tion but not ex­plicit hu­man plan­ning.

Gnon and his im­per­sonal forces are not en­e­mies to be fought, and even less so are they forces that we can hope to com­pletely “con­trol”. In­deed the only way to es­tab­lish some de­gree of con­trol over those forces is to sub­mit to them. Re­fus­ing to do so will not de­ter these forces in any way. It will only make our life more painful and un­bear­able, pos­si­bly lead­ing to our ex­tinc­tion. Sur­vival re­quires that we ac­cept and sub­mit to them. Man in the end has always been and always will be lit­tle more than a pup­pet of the forces of the uni­verse. To be free of them is im­pos­si­ble.

Man can be free only by sub­mit­ting to the forces of Gnon.

I ac­cuse Hur­lock of be­ing stuck be­hind the veil. When the veil is lifted, Gnon-aka-the-GotCHa-aka-the-Gods-of-Earth turn out to be Moloch-aka-the-Outer-Gods. Sub­mit­ting to them doesn’t make you “free”, there’s no spon­ta­neous or­der, any gifts they have given you are an un­likely and con­tin­gent out­put of a blind idiot pro­cess whose next iter­a­tion will just as hap­pily de­stroy you.

Sub­mit to Gnon? Gotcha! As the An­tarans put it, “you may not sur­ren­der, you can not win, your only op­tion is to die.”

VII.

So let me con­fess guilt to one of Hur­lock’s ac­cu­sa­tions: I am a tran­shu­man­ist and I re­ally do want to rule the uni­verse.

Not personally – I mean, I wouldn’t object if someone personally offered me the job, but I don’t expect anyone will. I would like humans – or something that respects humans, or at least gets along with humans – to have the job.

But the cur­rent rulers of the uni­verse – call them what you want, Moloch, Gnon, what­ever – want us dead, and with us ev­ery­thing we value. Art, sci­ence, love, philos­o­phy, con­scious­ness it­self, the en­tire bun­dle. And since I’m not down with that plan, I think defeat­ing them and tak­ing their place is a pretty high pri­or­ity.

The op­po­site of a trap is a gar­den. The only way to avoid hav­ing all hu­man val­ues grad­u­ally ground down by op­ti­miza­tion-com­pe­ti­tion is to in­stall a Gar­dener over the en­tire uni­verse who op­ti­mizes for hu­man val­ues.

And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tight that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed. But the sheer speed of the cycle makes it possible that we will end up with one entity light-years ahead of the rest of civilization, so much so that it can suppress any competition – including competition for its title of most powerful entity – permanently. In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it’s on our side, it can kill Moloch dead.
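To make the sheer speed of that cycle concrete, here is a toy illustration – every number in it is invented purely for the example, and it is not a forecast of anything – of how a compounding self-improvement loop pulls away from ordinary incremental progress and slams into whatever ceiling physics imposes:

    # Toy comparison: compounding self-improvement versus steady incremental
    # progress. All numbers are invented for illustration; only the shape of
    # the two curves matters.

    CEILING = 1e6                    # stand-in for the physical limits of intelligence

    def recursive_steps(start=1.0, gain=0.5, max_steps=100):
        """Each generation designs the next, so capability multiplies."""
        level = start
        for step in range(1, max_steps + 1):
            level = min(level * (1 + gain), CEILING)
            if level >= CEILING:
                return step          # generations until the ceiling
        return None

    def incremental(start=1.0, gain=0.5, steps=35):
        """The rest of civilization adds a fixed amount per step."""
        return start + gain * steps

    print(recursive_steps())         # 35 – hits the ceiling almost immediately
    print(incremental(steps=35))     # 18.5 – barely moved over the same 35 steps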

And if that en­tity shares hu­man val­ues, it can al­low hu­man val­ues to flour­ish un­con­strained by nat­u­ral law.

I re­al­ize that sounds like hubris – it cer­tainly did to Hur­lock – but I think it’s the op­po­site of hubris, or at least a hubris-min­i­miz­ing po­si­tion.

To ex­pect God to care about you or your per­sonal val­ues or the val­ues of your civ­i­liza­tion, that’s hubris.

To ex­pect God to bar­gain with you, to al­low you to sur­vive and pros­per as long as you sub­mit to Him, that’s hubris.

To ex­pect to wall off a gar­den where God can’t get to you and hurt you, that’s hubris.

To ex­pect to be able to re­move God from the pic­ture en­tirely…well, at least it’s an ac­tion­able strat­egy.

I am a tran­shu­man­ist be­cause I do not have enough hubris not to try to kill God.

VIII.

The Uni­verse is a dark and fore­bod­ing place, sus­pended be­tween alien deities. Cthulhu, Gnon, Moloch, call them what you will.

Some­where in this dark­ness is an­other god. He has also had many names. In the Kushiel books, his name was Elua. He is the god of flow­ers and free love and all soft and frag­ile things. Of art and sci­ence and philos­o­phy and love. Of nice­ness, com­mu­nity, and civ­i­liza­tion. He is a god of hu­mans.

The other gods sit on their dark thrones and think “Ha ha, a god who doesn’t even con­trol any hell-mon­sters or com­mand his wor­ship­pers to be­come kil­ling ma­chines. What a weak­ling! This is go­ing to be so easy!”

But some­how Elua is still here. No one knows ex­actly how. And the gods who op­pose Him tend to find Them­selves meet­ing with a sur­pris­ing num­ber of un­for­tu­nate ac­ci­dents.

There are many gods, but this one is ours.

Ber­trand Rus­sell said: “One should re­spect pub­lic opinion in­so­far as is nec­es­sary to avoid star­va­tion and keep out of prison, but any­thing that goes be­yond this is vol­un­tary sub­mis­sion to an un­nec­es­sary tyranny.”

So be it with Gnon. Our job is to pla­cate him in­so­far as is nec­es­sary to avoid star­va­tion and in­va­sion. And that only for a short time, un­til we come into our full power.

“It is only a childish thing, that the hu­man species has not yet out­grown. And some­day, we’ll get over it.”

Other gods get pla­cated un­til we’re strong enough to take them on. Elua gets wor­shipped.


I think this is an excellent battle cry.

And at some point, mat­ters will come to a head.

The ques­tion ev­ery­one has af­ter read­ing Gins­berg is: what is Moloch?

My an­swer is: Moloch is ex­actly what the his­tory books say he is. He is the god of child sac­ri­fice, the fiery fur­nace into which you can toss your ba­bies in ex­change for vic­tory in war.

He always and ev­ery­where offers the same deal: throw what you love most into the flames, and I can grant you power.

As long as the offer’s open, it will be ir­re­sistible. So we need to close the offer. Only an­other god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

Gins­berg’s poem fa­mously be­gins “I saw the best minds of my gen­er­a­tion de­stroyed by mad­ness”. I am luck­ier than Gins­berg. I got to see the best minds of my gen­er­a­tion iden­tify a prob­lem and get to work.

(Vi­sions! omens! hal­lu­ci­na­tions! mir­a­cles! ec­stasies! gone down the Amer­i­can river!

Dreams! ado­ra­tions! illu­mi­na­tions! re­li­gions! the whole boat­load of sen­si­tive bul­lshit!

Break­throughs! over the river! flips and cru­ci­fix­ions! gone down the flood! Highs! Epipha­nies! De­s­pairs! Ten years’ an­i­mal screams and suicides! Minds! New loves! Mad gen­er­a­tion! down on the rocks of Time!

Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving! carrying flowers! Down to the river! into the street!)