A Parable of Elites and Takeoffs

Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.

One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last—it had! The future was bright.

Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, the need to justify the billions of grant-dollars that had cumulatively gone into the project over the years by showing some use of it—whatever the reasons, they no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over a brief period of what seemed like mere days, entire cities collapsed and scores—hundreds—of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)

An unmitigated disaster. Worse, the technology didn’t even accomplish the assigned goal—that was achieved thanks to a third party’s actions! Ironic. But that’s how life goes: ‘Man Proposes, God Disposes’.

So, what to do with the tech? The positive potential was still there, but no one could doubt anymore that there was a horrific dark side: they had just seen what it could do if misused, even if the authorities (as usual) were spinning the events as furiously as possible to avoid frightening the public. You could put it under heavy government control, and they did.

But what was to stop Nacirema’s rivals from copying the technology and using it domestically or as a weapon against Nacirema? In particular, Nacirema’s enormous furiously-industrializing rival far to the East in Asia, which aspired to regional hegemony, had a long history of being an “oriental despotism” and still had a repressive political system—ruled by an opaque corrupt oligarchy—which abrogated basic human rights such as free speech, and was not a little racist/xenophobic & angry at historical interference in its domestic affairs by Seilla & Nacirema…

The ‘arms race’ was obvious to anyone who thought about the issue. You had to obtain your own tech or be left in the dust. But an arms race was terrifyingly dangerous—one power with the tech was bad enough, but if there were two holders? A dozen? There was no reason to expect all the wishes to be benign once everyone had their own genie-in-a-bottle. It would not be hyperbolic to say that the fate of global civilization was at stake (even if there were survivors off-planet or in Hanson-style ‘disaster refuges’, they could hardly rebuild civilization on their own; not to mention that a lot of resources like hydrocarbons have already been depleted beyond the ability of a small primitive group to exploit) or maybe even the human race itself. If ever an x-risk was a clear and present danger, this was it.

Fortunately, the ‘hard take-off’ scenario did not come to pass, as each time it took years to double the power of the tech; nor was it something you could make in your bedroom, even if you knew the key insights (deducible by a grad student from published papers, as concerned agencies in Nacirema proved). Rather, the experts forecast a slower take-off, on a more human time-scale, where the technology escalated in power over the next two or three decades; importantly, they thought that the Eastern rival’s scientists would not be able to clone the technology for another decade or perhaps longer.

So one of the involved researchers—a bona fide world-renowned genius who had made signal contributions to the design of the computers and software involved and had the utmost credibility—made the obvious suggestion. Don’t let the arms race start. Don’t expose humanity to an unstable equilibrium of the sort which has collapsed many times in human history. Instead, Nacirema should boldly deliver an ultimatum to the rival: submit to examination and verification that they were not developing the tech, or be destroyed. Stop the contagion from spreading and root out the x-risk. Research in the area would be proscribed, as almost all of it was inherently dual-use.

Others disagreed, of course, with many alternative proposals: perhaps researchers could be trusted to self-regulate; or, related research could be regulated by a special UN agency; or the tech could be distributed to all major countries to reach an equilibrium immediately; or, treaties could be signed; or Nacirema could voluntarily abandon the technology, continue to do things the old-fashioned way, and lead by moral authority.

You might think that the politicians would do something, even if they ignored the genius: the prognostications of a few obscure researchers and of short stories published in science fiction had turned out to be true; the dangers had been realized in practice, and there was no uncertainty about what a war with the tech would entail; the logic of the arms race had been well-documented by many instances to lead to instability and propel countries into war (consider the battleship arms race leading up to WWI); the proposer had impeccable credentials and deep domain-specific expertise and was far from alone in being deeply concerned about the issue; there were multiple years to cope with the crisis after fair warning had been given, so there was enough time; and so on. If the Nacireman political system were to ever be willing to take major action to prevent an x-risk, this would seem to be the ideal scenario. So did they?

Let’s step back a bit. One might have faith in the political elites of this country. Surely, given the years of warning as the tech became more sophisticated, people would see that this time really was different, that this time it was the gravest threat humanity had faced, that the doomsday warnings of elite scientists would be taken seriously; surely everyone would see the truth of proposition X, leading them to endorse Y and agree with the ‘extremists’ about policy decision Z (to condense our hopes into one formula); how could we doubt that policy-makers and research funders would begin to respond to the tech safety challenge? After all, we can point to some other instances where policymakers reached good outcomes on minor problems like CFC damage to the atmosphere.

So with all that in mind, in our little future world, did the Nacireman political system respond effectively?

I’m a bit cynical, so let’s say the answer was… No. Of course not. They did not follow his plan.

And it’s not that they found a better plan, either. (Let’s face it, any plan calling for more war has to be considered a last resort, even if you have a special new tech to help, and is likely to fail.) Nothing meaningful was done. “Man plans, God laughs.” The trajectory of events was indistinguishable from the usual story of bureaucratic inertia and self-serving behavior by various groups. After all, what was in it for the politicians? Did such a strategy swell any corporation’s profits? Or offer scope for further taxation & regulation? Or could it be used to appeal to anyone’s emotion-driven ethics by playing on disgust or purity or in-group loyalty? The strategy had no constituency except those who were concerned by an abstract threat in the future (perhaps, as their opponents insinuated, they were neurotic ‘hawks’ hellbent on war). Besides, the Nacireman people were exhausted from long years of war in multiple foreign countries and a large domestic depression whose scars remained. Time passed.

Eventually the experts turned out to be wrong, and in the worst possible way: the rival took half the projected time to develop its own tech, and the window of opportunity snapped shut. The arms race had begun, and humanity would tremble in fear, wondering whether it would live out the century or whether the unthinkable would happen.

Good luck, you people of the future! I wish you all the best, although I can’t be optimistic; if you survive, it will be by the skin of your teeth, and I suspect that due to hindsight bias and near-miss bias, you won’t even be able to appreciate how dire the situation was afterwards and will forget your peril or minimize the danger or reason that the tech couldn’t have been that dangerous since you survived—which would be a sad & pathetic coda indeed.

The End.

(Oh, I’m sorry. Did I write “70 years from now”? I meant: “70 years ago”. The technology is, of course, nuclear fission, which had many potential applications in the civilian economy—if nothing else, every sector benefits from electricity ‘too cheap to meter’; Nacirema is America & the eastern rival is Russia; the genius is John von Neumann; the SF stories were by Heinlein & Cartmill among others—the latter giving rise to the Astounding incident; and we all know how the Cold War led civilization to the brink of thermonuclear war. Why, did you think it was about something else?)

This was written for a planned essay on why computational complexity/diminishing returns doesn’t imply AI will be safe, but who knows when I’ll finish that, so I thought I’d post it separately.