For FAI: Is “Molecular Nanotechnology” putting our best foot forward?

Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It’s not really clear to me why. In many of the examples of “How could AIs help us?” or “How could AIs rise to power?”, phrases like “cracks protein folding” or “making a block of diamond is just as easy as making a block of coal” are thrown about in ways that make me very, very uncomfortable. Maybe it’s all true; maybe I’m just late to the transhumanist party, and the obviousness of this information was in my invitation, which got lost in the mail. But seeing all the physics swept under the rug like that sets off every crackpot alarm I have.

I must post the disclaimer that I have done a little bit of materials science, so maybe I’m just annoyed that you’re making me obsolete, but I don’t see why this particular possible future gets so much attention. Let us assume that a smarter-than-human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it’s still not clear to me that MNT is a likely element of the future. It isn’t clear to me that MNT is physically practical. I don’t doubt that it can be done. I don’t doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up. Indeed, that’s my day job. But I have a hard time believing that the only reason you can’t make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we’re just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it’s an enormously steep, enormously complicated energy landscape, and the Schrödinger equation scales very, very poorly as you add additional particles and degrees of freedom. Building molecular nanotechnology seems to me roughly equivalent to making arbitrary Lego structures by shaking a large bin of Lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it’s not at all clear to me that it’s even possible.
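
To put a rough number on how badly the Schrödinger equation scales, here is a back-of-the-envelope Python sketch. The 10 basis states per particle and the 16-byte complex amplitudes are my own illustrative assumptions, not figures from any published estimate:

```python
# Illustrative sketch (my own assumptions): the many-body wavefunction lives
# in a Hilbert space whose dimension grows exponentially with particle count,
# so even *storing* it, let alone solving for it, quickly becomes impossible.

def wavefunction_storage_gb(n_particles: int, local_dim: int = 10) -> float:
    """GB needed to store one 16-byte complex amplitude per basis state,
    assuming each particle is truncated to local_dim basis states."""
    return local_dim ** n_particles * 16 / 1e9

for n in (2, 5, 10, 20, 30):
    print(f"{n:>2} particles: {wavefunction_storage_gb(n):.1e} GB")
# 30 particles already demands ~1.6e22 GB of memory, and a single bottle
# from Sigma-Aldrich contains on the order of 1e23 molecules.
```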

I assume the reason that MNT is added to a discussion of AI is that we’re trying to make the future sound more plausible by adding burdensome details. I understand that “AI and MNT” is less probable than AI or MNT alone, but that the conjunction is supposed to sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human- or superhuman-level AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI, especially including it without addressing any of the fundamental difficulties of MNT, harms, I would argue, the credibility of AI researchers. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.
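
A toy numerical version of that conjunction point, with made-up placeholder probabilities that are purely for illustration:

```python
# Both numbers below are hypothetical placeholders, not estimates anyone
# has defended; the point is the inequality, not the values.

p_ai = 0.10   # hypothetical P(smarter-than-human AI)
p_mnt = 0.05  # hypothetical P(MNT: programmable replicators, grey goo, etc.)

# However the two claims are correlated, the conjunction can never exceed
# the less likely conjunct:
print(min(p_ai, p_mnt))  # upper bound on P(AI and MNT): 0.05
print(p_ai * p_mnt)      # value if the claims were independent: 0.005
```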

I am particularly bothered by this because it seems irrelevant to FAI. I’m fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess. I don’t think convincing people that smarter-than-human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter-than-human AIs are possible. I do think that waving your hands and saying “superintelligence” at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning “smart computer → nanobots” before I had built up a store of goodwill from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.

Put in LW parlance, suggesting things not known to be possible by modern physics, without detailed explanations, puts you in the reference class “people on the internet who have their own ideas about physics”. It didn’t help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.

And maybe it’s just me. Maybe this did not bother anyone else; maybe it’s an incredible shortcut for getting people to realize just how different a future a greater-than-human intelligence makes possible, and there is no better example. It does alarm me, though, because I think that physicists, and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations, may be exactly the kind of people FAI is trying to attract.