Total Nano Domination

Followup to: Engelbart: Insufficiently Recursive

The computer revolution had cascades and insights aplenty. Computer tools are routinely used to create tools, from using a C compiler to write a Python interpreter, to using theorem-proving software to help design computer chips. I would not yet rate computers as being very deeply recursive—I don’t think they’ve improved our own thinking processes even so much as the Scientific Revolution did. But some of the ways that computers are used to improve computers verge on being repeatable (cyclic).

Yet no individual, no localized group, nor even any country, managed to get a sustained advantage in computing power, compound the interest on cascades, and take over the world. There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2. In computing there was no equivalent of “We’ve just crossed the sharp threshold of criticality, and now our pile doubles its neutron output every two minutes, so we can produce lots of plutonium and you can’t.”

Will the development of nanotechnology go the same way as computers—a smooth, steady developmental curve spread across many countries, no one project taking into itself a substantial fraction of the world’s whole progress? Will it be more like the Manhattan Project, one country gaining a (temporary?) huge advantage at huge cost? Or could a small group with an initial advantage cascade and outrun the world?

Just to make it clear why we might worry about this for nanotech, rather than, say, car manufacturing—if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts. If your molecular factory can build solar cells, it can acquire energy as well.

So full-fledged Drexlerian molecular nanotechnology can plausibly automate away much of the manufacturing in its material supply chain. If you already have nanotech, you may not need to consult the outside economy for inputs of energy or raw material.

This makes it more plausible that a nanotech group could localize off, and do its own compound interest, away from the global economy. If you’re Douglas Engelbart building better software, you still need to consult Intel for the hardware that runs your software, and the electric company for the electricity that powers your hardware. It would be a considerable expense to build your own fab lab for your chips (that makes chips as good as Intel’s) and your own power station for electricity (that supplies electricity as cheaply as the utility company).

It’s not just that this tends to entangle you with the fortunes of your trade partners, but also that—as an UberTool Corp keeping your trade secrets in-house—you can’t improve the hardware you get, or drive down the cost of electricity, as long as these things are done outside. Your cascades can only go through what you do locally, so the more you do locally, the more likely you are to get a compound interest advantage. (Mind you, I don’t think Engelbart could have gone FOOM even if he’d made his chips locally and supplied himself with electrical power—I just don’t think the compound advantage on using computers to make computers is powerful enough to sustain k > 1.)

In general, the more capabilities are localized into one place, the less people will depend on their trade partners, the more they can cascade locally (apply their improvements to yield further improvements), and the more a “critical cascade” / FOOM sounds plausible.

Yet self-replicating nanotech is a very advanced capability. You don’t get it right off the bat. Sure, lots of biological stuff has this capability, but this is a misleading coincidence—it’s not that self-replication is easy, but that evolution, for its own alien reasons, tends to build it into everything. (Even individual cells, which is ridiculous.)

In the run-up to nanotechnology, it seems not implausible to suppose a continuation of the modern world. Today, many different labs work on small pieces of nanotechnology—fortunes entangled with their trade partners, and much of their research velocity coming from advances in other laboratories. Current nanotech labs are dependent on the outside world for computers, equipment, science, electricity, and food; any single lab works on a small fraction of the puzzle, and contributes small fractions of the progress.

In short, so far nanotech is going just the same way as computing.

But it is a tad premature—I would even say that it crosses the line into the “silly” species of futurism—to exhale a sigh of relief and say, “Ah, that settles it—no need to consider any further.”

We all know how exponential multiplication works: 1 microscopic nanofactory, 2 microscopic nanofactories, 4 microscopic nanofactories… Let’s say there are 100 different groups working on self-replicating nanotechnology and one of those groups succeeds one week earlier than the others. Rob Freitas has calculated that some species of replibots could spread through the Earth in two days (even given what seem to me like highly conservative assumptions in a context where conservatism is not appropriate).
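The “two days” figure is less surprising once you run the doubling arithmetic. Here is a minimal sketch, where the doubling time and target mass are assumed purely for illustration (this is not Freitas’s actual model):

```python
import math

def doublings_needed(initial_mass_kg: float, target_mass_kg: float) -> int:
    """Doublings required for a replicating population to grow
    from initial_mass_kg to target_mass_kg."""
    return math.ceil(math.log2(target_mass_kg / initial_mass_kg))

# Assumed, illustrative numbers: a microgram of seed replibots (1e-9 kg)
# versus a rough 1e15 kg of accessible material, at a 30-minute doubling time.
n = doublings_needed(1e-9, 1e15)   # 80 doublings
days = n * 0.5 / 24                # about 1.7 days of wall-clock time
print(n, days)
```

The exponential does almost all the work: changing the target mass by a factor of a thousand only adds ten doublings, i.e., a few more hours.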

So, even if the race seems very tight, whichever group gets replibots first can take over the world given a mere week’s lead time—

Yet wait! Just having replibots doesn’t let you take over the world. You need fusion weapons, or surveillance bacteria, or some other way to actually govern. That’s a lot of matterware—a lot of design and engineering work. A replibot advantage doesn’t equate to a weapons advantage, unless, somehow, the planetary economy has already published the open-source details of fully debugged weapons that you can build with your newfound private replibots. Otherwise, a lead time of one week might not be anywhere near enough.

Even more importantly—“self-replication” is not a binary, 0-or-1 attribute. Things can be partially self-replicating. You can have something that manufactures 25% of itself, 50% of itself, 90% of itself, or 99% of itself—but still needs one last expensive computer chip to complete the set. So if you have twenty-five countries racing, sharing some of their results and withholding others, there isn’t one morning where you wake up and find that one country has self-replication.

Bots become successively easier to manufacture; the factories get successively cheaper. By the time one country has bots that manufacture themselves from environmental materials, many other countries have bots that manufacture themselves from feedstock. By the time one country has bots that manufacture themselves entirely from feedstock, other countries have produced some bots using assembly lines. The nations also have all their old conventional arsenal, such as intercontinental missiles tipped with thermonuclear weapons, and these have deterrent effects against crude nanotechnology. No one ever gets a discontinuous military advantage, and the world is safe. (?)

At this point, I do feel obliged to recall the notion of “burdensome details”, that we’re spinning a story out of many conjunctive details, any one of which could go wrong. This is not an argument in favor of anything in particular, just a reminder not to be seduced by stories that are too specific. When I contemplate the sheer raw power of nanotechnology, I don’t feel confident that the fabric of society can even survive the sufficiently plausible prospect of its near-term arrival. If your intelligence estimate says that Russia (the new belligerent Russia under Putin) is going to get self-replicating nanotechnology in a year, what does that do to Mutual Assured Destruction? What if Russia makes a similar intelligence assessment of the US? What happens to the capital markets? I can’t even foresee how our world will react to the prospect of various nanotechnological capabilities as they promise to be developed in the future’s near future. Let alone envision how society would actually change as full-fledged molecular nanotechnology was developed, even if it were developed gradually...

...but I suppose the Victorians might say the same thing about nuclear weapons or computers, and yet we still have a global economy—one that’s actually a lot more interdependent than theirs, thanks to nuclear weapons making small wars less attractive, and computers helping to coordinate trade.

I’m willing to believe in the possibility of a smooth, gradual ascent to nanotechnology, so that no one state—let alone any corporation or small group—ever gets a discontinuous advantage.

The main reason I’m willing to believe this is because of the difficulties of design and engineering, even after all manufacturing is solved. When I read Drexler’s Nanosystems, I thought: “Drexler uses properly conservative assumptions everywhere I can see, except in one place—debugging. He assumes that any failed component fails visibly, immediately, and without side effects; this is not conservative.”

In principle, we have complete control of our computers—every bit and byte is under human command—and yet it still takes an immense amount of engineering work on top of that to make the bits do what we want. This, and not any difficulties of manufacturing things once they are designed, is what takes an international supply chain of millions of programmers.

But we’re still not out of the woods.

Suppose that, by a providentially incremental and distributed process, we arrive at a world of full-scale molecular nanotechnology—a world where designs, if not finished material goods, are traded among parties, in a global economy large enough that no one actor, or even any one state, is doing more than a fraction of the total engineering.

It would be a very different world, I expect; and it’s possible that my essay may have already degenerated into nonsense. But even if we still have a global economy after getting this far—then we’re still not out of the woods.

Remember those ems? The emulated humans-on-a-chip? The uploads?

Suppose that, with molecular nanotechnology already in place, there’s an international race for reliable uploading—with some results shared, and some results private—with many state and some nonstate actors.

And suppose the race is so tight that the first state to develop working researchers-on-a-chip has only a one-day lead time over the other actors.

That is—one day before anyone else, they develop uploads sufficiently undamaged, or capable of sufficient recovery, that the ems can carry out research and development. In the domain of, say, uploading.

There are other teams working on the problem, but their uploads are still a little off, suffering seizures and having memory faults and generally having their cognition degraded to the point of not being able to contribute. (NOTE: I think this whole future is a wrong turn and we should stay away from it; I am not endorsing this.)

But this one team, though—their uploads still have a few problems, but they’re at least sane enough and smart enough to start… fixing their problems themselves?

If there’s already full-scale nanotechnology around when this happens, then even with some inefficiency built in, the first uploads may be running at ten thousand times human speed. Nanocomputers are powerful stuff.

And in an hour, or around a year of internal time, the ems may be able to upgrade themselves to a hundred thousand times human speed, and fix some of the remaining problems.

And in another hour, or ten years of internal time, the ems may be able to get the factor up to a million times human speed, and start working on intelligence enhancement...
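The wall-clock versus internal-time figures above are straightforward unit conversion. A quick sketch (the speedup factors are the scenario’s stipulations, not estimates of mine):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def subjective_years(wall_clock_hours: float, speedup: float) -> float:
    """Internal (subjective) research time for an em running at
    `speedup` times human speed."""
    return wall_clock_hours * speedup / HOURS_PER_YEAR

print(subjective_years(1, 10_000))   # ~1.1 subjective years per hour
print(subjective_years(1, 100_000))  # ~11.4 subjective years per hour
```

So “an hour, or around a year” and “another hour, or ten years” are just the 10,000x and 100,000x factors divided through by the 8,760 hours in a year.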

One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in. But you’d have to trust that not a single one of your partners was holding back a trick that lets them run uploads at ten times your own maximum speed (once the bugs were out of the process). That kind of advantage could snowball quite a lot, in the first sidereal day.

Now, if uploads are gradually developed at a time when computers are too slow to run them quickly—meaning, before molecular nanotech and nanofactories come along—then this whole scenario is averted; the first high-fidelity uploads, running at a hundredth of human speed, will grant no special advantage. (Assuming that no one is pulling any spectacular snowballing tricks with intelligence enhancement—but they would have to snowball fast and hard, to confer advantage on a small group running at low speeds. The same could be said of brain-computer interfaces, developed before or after nanotechnology, if running in a small group at merely human speeds. I would credit their world takeover, but I suspect Robin Hanson wouldn’t at this point.)

Now, I don’t really believe in any of this—this whole scenario, this whole world I’m depicting. In real life, I’d expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world. But that’s a separate issue. And this whole world seems too much like our own, after too much technological change, to be realistic to me. World government with an insuperable advantage? Ubiquitous surveillance? I don’t like the ideas, but both of them would change the game dramatically...

But the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable.

If you sent a modern computer back in time to 1950—containing many modern software tools in compiled form, but no future history or declaratively stored future science—I would guess that the recipient could not use it to take over the world. Even if the USSR got it. Our computing industry is a very powerful thing, but it relies on a supply chain of chip factories.

If someone got a future nanofactory with a library of future nanotech applications—including designs for things like fusion power generators and surveillance bacteria—they might really be able to take over the world. The nanofactory swallows its own supply chain; it incorporates replication within itself. If the owner fails, it won’t be for lack of factories. It will be for lack of ability to develop new matterware fast enough, and apply existing matterware fast enough, to take over the world.

I’m not saying that nanotech will appear from nowhere with a library of designs—just making a point about concentrated power and the instability it implies.

Think of all the tech news that you hear about once—say, an article on Slashdot about yada yada 50% improved battery technology—and then you never hear about again, because it was too expensive or too difficult to manufacture.

Now imagine a world where the news of a 50% improved battery technology comes down the wire, and the head of some country’s defense agency is sitting down across from engineers and intelligence officers and saying, “We have five minutes before all of our rival’s weapons are adapted to incorporate this new technology; how does that affect our balance of power?” Imagine that happening as often as “amazing breakthrough” articles appear on Slashdot.

I don’t mean to doomsay—the Victorians would probably be pretty surprised we haven’t blown up the world with our ten-minute ICBMs, but we don’t live in their world—well, maybe doomsay just a little—but the point is: It’s less stable. Improvements cascade faster once you’ve swallowed your manufacturing supply chain.

And if you sent back in time a single nanofactory, and a single upload living inside it—then the world might end in five minutes or so, as we bios measure time.

The point being, not that an upload will suddenly appear, but that now you’ve swallowed your supply chain and your R&D chain.

And so this world is correspondingly more unstable, even if all the actors start out in roughly the same place. Suppose a state manages to get one of those Slashdot-like technology improvements—only this one lets uploads think 50% faster—and they get it fifty minutes before anyone else, at a point where uploads are running ten thousand times as fast as a human (50 mins = ~1 year)—and in that extra year, the uploads manage to find another couple of 50% improvements...
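To put numbers on this (illustrative arithmetic only): fifty minutes at 10,000x is 500,000 subjective minutes, roughly 0.95 subjective years, and each further 50% advance multiplies research speed yet again:

```python
def stacked_speedup(base: float, improvements: int, factor: float = 1.5) -> float:
    """Research speed after `improvements` compounding advances,
    each multiplying speed by `factor` (a 50% gain each, assumed here)."""
    return base * factor ** improvements

# Fifty wall-clock minutes at 10,000x human speed:
head_start_years = 50 * 10_000 / (60 * 24 * 365)  # ~0.95 subjective years
print(head_start_years)
print(stacked_speedup(10_000, 1))  # 15,000x after the first advance
print(stacked_speedup(10_000, 3))  # 33,750x after a couple more
```

The instability comes from the compounding: each advance shortens the wall-clock time needed to find the next one.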

Now, you can suppose that all the actors are trading all of their advantages and holding nothing back, so everyone stays nicely synchronized.

Or you can suppose that enough trading is going on, that most of the research any group benefits from comes from outside that group, and so a 50% advantage for a local group doesn’t cascade much.

But again, that’s not the point. The point is that in modern times, with the modern computing industry, where commercializing an advance requires building a new computer factory, a bright idea that has gotten as far as showing a 50% improvement in the laboratory is merely one more article on Slashdot.

If everything could instantly be rebuilt via nanotech, that laboratory demonstration could precipitate an instant international military crisis.

And if there are uploads around, so that a cute little 50% advancement in a certain kind of hardware recurses back to imply 50% greater speed at all future research—then this Slashdot article could become the key to world domination.

As systems get more self-swallowing, they cascade harder; and even if all actors start out equivalent, races between them get much more unstable. I’m not claiming it’s impossible for that world to be stable. The Victorians might have thought that about ICBMs. But that subjunctive world contains additional instability compared to our own, and would need additional centripetal forces to end up as stable as our own.

I expect Robin to disagree with some part of this essay, but I’m not sure which part or how.