AI Researchers On AI Risk

I first became interested in AI risk back around 2007. At the time, most people’s response to the topic was “Haha, come back when anyone believes this besides random Internet crackpots.”

Over the next few years, a series of extremely bright and influential figures including Bill Gates, Stephen Hawking, and Elon Musk publicly announced they were concerned about AI risk, along with hundreds of other intellectuals, from Oxford philosophers to MIT cosmologists to Silicon Valley tech investors. So we came back.

Then the response changed to “Sure, a couple of random academics and businesspeople might believe this stuff, but never real experts in the field who know what’s going on.”

Thus pieces like Popular Science’s Bill Gates Fears AI, But AI Researchers Know Better:

When you talk to A.I. researchers—again, genuine A.I. researchers, people who grapple with making systems that work at all, much less work too well—they are not worried about superintelligence sneaking up on them, now or in the future. Contrary to the spooky stories that Musk seems intent on telling, A.I. researchers aren’t frantically installing firewalled summoning chambers and self-destruct countdowns.

And Fusion’s The Case Against Killer Robots From A Guy Actually Building AI:

Andrew Ng builds artificial intelligence systems for a living. He taught AI at Stanford, built AI at Google, and then moved to the Chinese search engine giant, Baidu, to continue his work at the forefront of applying artificial intelligence to real-world problems. So when he hears people like Elon Musk or Stephen Hawking—people who are not intimately familiar with today’s technologies—talking about the wild potential for artificial intelligence to, say, wipe out the human race, you can practically hear him facepalming.

And now Ramez Naam of Marginal Revolution is trying the same thing with What Do AI Researchers Think Of The Risk Of AI?:

Elon Musk, Stephen Hawking, and Bill Gates have recently expressed concern that development of AI could lead to a ‘killer AI’ scenario, and potentially to the extinction of humanity. None of them are AI researchers or have worked substantially with AI that I know of. What do actual AI researchers think of the risks of AI?

It quotes the same couple of cherry-picked AI researchers as all the other stories – Andrew Ng, Yann LeCun, etc – then stops without mentioning whether there are alternate opinions.

There are. AI researchers, including some of the leaders in the field, have been instrumental in raising issues about AI risk and superintelligence from the very beginning. I want to start by listing some of these people, as kind of a counter-list to Naam’s, then go into why I don’t think this is a “controversy” in the classical sense that dueling lists of luminaries might lead you to expect.

The criteria for my list: I’m only mentioning the most prestigious researchers, either full professors at good schools with lots of highly-cited papers, or else very well-respected scientists in industry working at big companies with good track records. They have to be involved in AI and machine learning. They have to have multiple strong statements supporting some kind of view about a near-term singularity and/or extreme risk from superintelligent AI. Some will have written papers or books about it; others will have just gone on the record saying they think it’s important and worthy of further study.

If anyone disagrees with the inclusion of a figure here, or knows someone important I forgot, let me know and I’ll make the appropriate changes:

* * * * * * * * * *

Stuart Russell (wiki) is Professor of Computer Science at Berkeley, winner of the IJCAI Computers And Thought Award, Fellow of the Association for Computing Machinery, Fellow of the American Association for the Advancement of Science, Director of the Center for Intelligent Systems, Blaise Pascal Chair in Paris, etc, etc. He is the co-author of Artificial Intelligence: A Modern Approach, the classic textbook in the field used by 1200 universities around the world. On his website, he writes:

The field [of AI] has operated for over 50 years on one simple assumption: the more intelligent, the better. To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:

1. AI is likely to succeed.
2. Unconstrained success brings huge risks and huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT. I serve on the Advisory Boards of CSER and FLI.

Just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, it seems inevitable that issues of control and safety will become central to AI as the field matures. The research questions are beginning to be formulated and range from highly technical (foundational issues of rationality and utility, provable properties of agents, etc.) to broadly philosophical.

He makes a similar point elsewhere, writing:

As Steve Omohundro, Nick Bostrom, and others have explained, the combination of value misalignment with increasingly capable decision-making systems can lead to problems—perhaps even species-ending problems if the machines are more capable than humans. Some have argued that there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.

He has also tried to serve as an ambassador about these issues to other academics in the field, writing:

What I’m finding is that senior people in the field who have never publicly evinced any concern before are privately thinking that we do need to take this issue very seriously, and the sooner we take it seriously the better.

David McAllester (wiki) is professor and Chief Academic Officer at the University of Chicago-affiliated Toyota Technological Institute, and formerly served on the faculty of MIT and Cornell. He is a fellow of the American Association for Artificial Intelligence, has authored over a hundred publications, has done research in machine learning, programming language theory, automated reasoning, AI planning, and computational linguistics, and was a major influence on the algorithms for famous chess computer Deep Blue. According to an article in the Pittsburgh Tribune Review:

Chicago professor David McAllester believes it is inevitable that fully automated intelligent machines will be able to design and build smarter, better versions of themselves, an event known as the Singularity. The Singularity would enable machines to become infinitely intelligent, and would pose an ‘incredibly dangerous scenario’, he says.

On his personal blog Machine Thoughts, he writes:

Most computer science academics dismiss any talk of real success in artificial intelligence. I think that a more rational position is that no one can really predict when human level AI will be achieved. John McCarthy once told me that when people ask him when human level AI will be achieved he says between five and five hundred years from now. McCarthy was a smart man. Given the uncertainties surrounding AI, it seems prudent to consider the issue of friendly AI…

The early stages of artificial general intelligence (AGI) will be safe. However, the early stages of AGI will provide an excellent test bed for the servant mission or other approaches to friendly AI. An experimental approach has also been promoted by Ben Goertzel in a nice blog post on friendly AI. If there is a coming era of safe (not too intelligent) AGI then we will have time to think further about later more dangerous eras.

He attended the AAAI Panel On Long-Term AI Futures, where he chaired the panel on Long-Term Control and was described as saying:

McAllester chatted with me about the upcoming ‘Singularity’, the event where computers outthink humans. He wouldn’t commit to a date for the singularity but said it could happen in the next couple of decades and will definitely happen eventually. Here are some of McAllester’s views on the Singularity. There will be two milestones: Operational Sentience, when we can easily converse with computers, and the AI Chain Reaction, when a computer can bootstrap itself to a better self and repeat. We’ll notice the first milestone in automated help systems that will genuinely be helpful. Later on computers will actually be fun to talk to. The point where computers can do anything humans can do will require the second milestone.

Hans Moravec (wiki) is a former professor at the Robotics Institute of Carnegie Mellon University, namesake of Moravec’s Paradox, and founder of the SeeGrid Corporation for industrial robotic visual systems. His Sensor Fusion in Certainty Grids for Mobile Robots has been cited over a thousand times, and he was invited to write the Encyclopedia Britannica article on robotics back when encyclopedia articles were written by the world expert in a field rather than by hundreds of anonymous Internet commenters.

He is also the author of Robot: Mere Machine to Transcendent Mind, which Amazon describes as:

In this compelling book, Hans Moravec predicts machines will attain human levels of intelligence by the year 2040, and that by 2050, they will surpass us. But even though Moravec predicts the end of the domination by human beings, his is not a bleak vision. Far from railing against a future in which machines rule the world, Moravec embraces it, taking the startling view that intelligent robots will actually be our evolutionary heirs. Moravec goes further and states that by the end of this process “the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria”.

Shane Legg is co-founder of DeepMind Technologies (wiki), an AI startup that was bought by Google in 2014 for about $500 million. He earned his PhD at the Dalle Molle Institute for Artificial Intelligence in Switzerland and also worked at the Gatsby Computational Neuroscience Unit in London. His dissertation Machine Superintelligence concludes:

If there is ever to be something approaching absolute power, a superintelligent machine would come close. By definition, it would be capable of achieving a vast range of goals in a wide range of environments. If we carefully prepare for this possibility in advance, not only might we avert disaster, we might bring about an age of prosperity unlike anything seen before.

In a later interview, he states:

AI is now where the internet was in 1988. Demand for machine learning skills is quite strong in specialist applications (search companies like Google, hedge funds and bio-informatics) and is growing every year. I expect this to become noticeable in the mainstream around the middle of the next decade. I expect a boom in AI around 2020 followed by a decade of rapid progress, possibly after a market correction. Human level AI will be passed in the mid 2020’s, though many people won’t accept that this has happened. After this point the risks associated with advanced AI will start to become practically important…I don’t know about a “singularity”, but I do expect things to get really crazy at some point after human level AGI has been created. That is, some time from 2025 to 2040.

He and his co-founders Demis Hassabis and Mustafa Suleyman have signed the Future of Life Institute petition on AI risks, and one of their conditions for joining Google was that the company agree to set up an AI Ethics Board to investigate these issues.

Steve Omohundro (wiki) is a former Professor of Computer Science at University of Illinois, founder of the Vision and Learning Group and the Center for Complex Systems Research, and inventor of various important advances in machine learning and machine vision. His work includes lip-reading robots, the StarLisp parallel programming language, and geometric learning algorithms. He currently runs Self-Aware Systems, “a think-tank working to ensure that intelligent technologies are beneficial for humanity”. His paper Basic AI Drives helped launch the field of machine ethics by pointing out that superintelligent systems will converge upon certain potentially dangerous goals. He writes:

We have shown that all advanced AI systems are likely to exhibit a number of basic drives. It is essential that we understand these drives in order to build technology that enables a positive future for humanity. Yudkowsky has called for the creation of ‘friendly AI’. To do this, we must develop the science underlying ‘utility engineering’, which will enable us to design utility functions that will give rise to the consequences we desire…The rapid pace of technological progress suggests that these issues may become of critical importance soon.

See also his section here on “Rational AI For The Greater Good”.

Murray Shanahan (site) earned his PhD in Computer Science from Cambridge and is now Professor of Cognitive Robotics at Imperial College London. He has published papers in areas including robotics, logic, dynamic systems, computational neuroscience, and philosophy of mind. He is currently writing a book, The Technological Singularity, which will be published in August; Amazon’s blurb says:

Shanahan describes technological advances in AI, both biologically inspired and engineered from scratch. Once human-level AI — theoretically possible, but difficult to accomplish — has been achieved, he explains, the transition to superintelligent AI could be very rapid. Shanahan considers what the existence of superintelligent machines could mean for such matters as personhood, responsibility, rights, and identity. Some superhuman AI agents might be created to benefit humankind; some might go rogue. (Is Siri the template, or HAL?) The singularity presents both an existential threat to humanity and an existential opportunity for humanity to transcend its limitations. Shanahan makes it clear that we need to imagine both possibilities if we want to bring about the better outcome.

Marcus Hutter (wiki) is a professor in the Research School of Computer Science at Australian National University. He has previously worked with the Dalle Molle Institute for Artificial Intelligence and National ICT Australia, and done work on reinforcement learning, Bayesian sequence prediction, complexity theory, Solomonoff induction, computer vision, and genomic profiling. He has also written extensively on the Singularity. In Can Intelligence Explode?, he writes:

This century may witness a technological explosion of a degree deserving the name singularity. The default scenario is a society of interacting intelligent agents in a virtual world, simulated on computers with hyperbolically increasing computational resources. This is inevitably accompanied by a speed explosion when measured in physical time units, but not necessarily by an intelligence explosion…if the virtual world is inhabited by interacting free agents, evolutionary pressures should breed agents of increasing intelligence that compete over computational resources. The end-point of this intelligence evolution/acceleration (whether it deserves the name singularity or not) could be a society of these maximally intelligent individuals. Some aspects of this singularitarian society might be theoretically studied with current scientific tools. Way before the singularity, even when setting up a virtual society in our imagination, there are likely some immediate differences, for example that the value of an individual life suddenly drops, with drastic consequences.

Jürgen Schmidhuber (wiki) is Professor of Artificial Intelligence at the University of Lugano and former Professor of Cognitive Robotics at the Technische Universität München. He makes some of the most advanced neural networks in the world, has done further work in evolutionary robotics and complexity theory, and is a fellow of the European Academy of Sciences and Arts. In Singularity Hypotheses, Schmidhuber argues that “if future trends continue, we will face an intelligence explosion within the next few decades”. When asked directly about AI risk on a Reddit AMA thread, he answered:

Stuart Russell’s concerns [about AI risk] seem reasonable. So can we do anything to shape the impacts of artificial intelligence? In an answer hidden deep in a related thread I just pointed out: At first glance, recursive self-improvement through Gödel Machines seems to offer a way of shaping future superintelligences. The self-modifications of Gödel Machines are theoretically optimal in a certain sense. A Gödel Machine will execute only those changes of its own code that are provably good, according to its initial utility function. That is, in the beginning you have a chance of setting it on the “right” path. Others, however, may equip their own Gödel Machines with different utility functions. They will compete. In the resulting ecology of agents, some utility functions will be more compatible with our physical universe than others, and find a niche to survive. More on this in a paper from 2012.

Richard Sutton (wiki) is professor and iCORE chair of computer science at University of Alberta. He is a fellow of the Association for the Advancement of Artificial Intelligence, co-author of the most-used textbook on reinforcement learning, and discoverer of temporal difference learning, one of the most important methods in the field.

In his talk at the Future of Life Institute’s Future of AI Conference, Sutton states that there is “certainly a significant chance within all of our expected lifetimes” that human-level AI will be created, then goes on to say the AIs “will not be under our control”, “will compete and cooperate with us”, and that “if we make superintelligent slaves, then we will have superintelligent adversaries”. He concludes that “We need to set up mechanisms (social, legal, political, cultural) to ensure that this works out well” but that “inevitably, conventional humans will be less important.” He has also mentioned these issues at a presentation to the Gatsby Institute in London and in (of all things) a Glenn Beck book: “Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion near the middle of the century”.

Andrew Davison (site) is Professor of Robot Vision at Imperial College London, leader of the Robot Vision Research Group and Dyson Robotics Laboratory, and inventor of the computerized localization-mapping system MonoSLAM. On his website, he writes:

At the risk of going out on a limb in the proper scientific circles to which I hope I belong(!), since 2006 I have begun to take very seriously the idea of the technological singularity: that exponentially increasing technology might lead to super-human AI and other developments that will change the world utterly in the surprisingly near future (i.e. perhaps the next 20–30 years). As well as from reading books like Kurzweil’s ‘The Singularity is Near’ (which I find sensational but on the whole extremely compelling), this view comes from my own overview of incredible recent progress of science and technology in general and specifically in the fields of computer vision and robotics within which I am personally working. Modern inference, learning and estimation methods based on Bayesian probability theory (see Probability Theory: The Logic of Science or free online version, highly recommended), combined with the exponentially increasing capabilities of cheaply available computer processors, are becoming capable of amazing human-like and super-human feats, particularly in the computer vision domain.

It is hard to even start thinking about all of the implications of this, positive or negative, and here I will just try to state facts and not offer much in the way of opinions (though I should say that I am definitely not in the super-optimistic camp). I strongly think that this is something that scientists and the general public should all be talking about. I’ll make a list here of some ‘singularity indicators’ I come across and try to update it regularly. These are little bits of technology or news that I come across which generally serve to reinforce my view that technology is progressing in an extraordinary, faster and faster way that will have consequences few people are yet really thinking about.

Alan Turing and I. J. Good (wiki, wiki) are men who need no introduction. Turing invented the mathematical foundations of computing and shares his name with Turing machines, Turing completeness, and the Turing Test. Good worked with Turing at Bletchley Park, helped build some of the first computers, and invented various landmark algorithms like the Fast Fourier Transform. In his paper “Can Digital Machines Think?”, Turing writes:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious tolerance since the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…At some stage therefore we should have to expect the machines to take control.

During his time at the Atlas Computer Laboratory in the 60s, Good expanded on this idea in Speculations Concerning The First Ultraintelligent Machine, which argued:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

* * * * * * * * * *

I worry this list will make it look like there is some sort of big “controversy” in the field between “believers” and “skeptics” with both sides lambasting the other. This has not been my impression.

When I read the articles about skeptics, I see them making two points over and over again. First, we are nowhere near human-level intelligence right now, let alone superintelligence, and there’s no obvious path to get there from here. Second, if you start demanding bans on AI research then you are an idiot.

I agree whole-heartedly with both points. So do the leaders of the AI risk movement.

A survey of AI researchers (Müller & Bostrom, 2014) finds that on average they expect a 50% chance of human-level AI by 2040 and a 90% chance of human-level AI by 2075. On average, 75% believe that superintelligence (“machine intelligence that greatly surpasses the performance of every human in most professions”) will follow within thirty years of human-level AI. There are some reasons to worry about sampling bias based on e.g. people who take the idea of human-level AI seriously being more likely to respond (though see the attempts made to control for this in the survey), but taken seriously it suggests that most AI researchers think there’s a good chance this is something we’ll have to worry about within a generation or two.

But outgoing MIRI director Luke Muehlhauser and Future of Humanity Institute director Nick Bostrom are both on record saying they have significantly later timelines for AI development than the scientists in the survey. If you look at Stuart Armstrong’s AI Timeline Prediction Data there doesn’t seem to be any general law that the estimates from AI risk believers are any earlier than those from AI risk skeptics. In fact, the latest estimate on the entire table is from Armstrong himself; Armstrong nevertheless currently works at the Future of Humanity Institute raising awareness of AI risk and researching superintelligence goal alignment.

The difference between skeptics and believers isn’t about when human-level AI will arrive, it’s about when we should start preparing.

Which brings us to the second non-disagreement. The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

Yann LeCun is probably the most vocal skeptic of AI risk. He was heavily featured in the Popular Science article, was quoted in the Marginal Revolution post, and spoke to KDNuggets and IEEE on “the inevitable singularity questions”, which he describes as “so far out that we can write science fiction about it”. But when asked to clarify his position a little more, he said:

Elon [Musk] is very worried about existential threats to humanity (which is why he is building rockets with the idea of sending humans to colonize other planets). Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines. Just like bio-ethics panels were established in the 1970s and 1980s, before genetic engineering was widely used, we need to have A.I.-ethics panels and think about these issues. But, as Yoshua [Bengio] wrote, we have quite a bit of time.

Eric Horvitz is another expert often mentioned as a leading voice of skepticism and restraint. His views have been profiled in articles like Out Of Control AI Will Not Kill Us, Believes Microsoft Research Chief and Nothing To Fear From Artificial Intelligence, Says Microsoft’s Eric Horvitz. But here’s what he says in a longer interview with NPR:

KASTE: Horvitz doubts that one of these virtual receptionists could ever lead to something that takes over the world. He says that’s like expecting a kite to evolve into a 747 on its own. So does that mean he thinks the singularity is ridiculous?

Mr. HORVITZ: Well, no. I think there’s been a mix of views, and I have to say that I have mixed feelings myself.

KASTE: In part because of ideas like the singularity, Horvitz and other A.I. scientists have been doing more to look at some of the ethical issues that might arise over the next few years with narrow A.I. systems. They’ve also been asking themselves some more futuristic questions. For instance, how would you go about designing an emergency off switch for a computer that can redesign itself?

Mr. HORVITZ: I do think that the stakes are high enough where even if there was a low, small chance of some of these kinds of scenarios, that it’s worth investing time and effort to be proactive.

Which is pretty much the same position as a lot of the most zealous AI risk proponents. With enemies like these, who needs friends?

A Slate article called Don’t Fear Artificial Intelligence also gets a surprising amount right:

As Musk himself suggests elsewhere in his remarks, the solution to the problem [of AI risk] lies in sober and considered collaboration between scientists and policymakers. However, it is hard to see how talk of “demons” advances this noble goal. In fact, it may actively hinder it.

First, the idea of a Skynet scenario itself has enormous holes. While computer science researchers think Musk’s musings are “not completely crazy,” they are still awfully remote from a world in which AI hype masks less artificially intelligent realities that our nation’s computer scientists grapple with:

Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+ post back in 2013: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.”…LeCun and others are right to fear the consequences of hype. Failure to live up to sci-fi–fueled expectations, after all, often results in harsh cuts to AI research budgets.

AI scientists are all smart people. They have no interest in falling into the usual political traps where they divide into sides that accuse each other of being insane alarmists or ostriches with their heads stuck in the sand. It looks like they’re trying to balance the need to start some preliminary work on a threat that looms way off in the distance versus the risk of engendering so much hype that it starts a giant backlash.

This is not to say that there aren’t very serious differences of opinion in how quickly we need to act. These seem to hinge mostly on whether it’s safe to say “We’ll deal with the problem when we come to it” or whether there will be some kind of “hard takeoff” which will take events out of control so quickly that we’ll want to have done our homework beforehand. I continue to see less evidence than I’d like that most AI researchers with opinions understand the latter possibility, or really any of the technical work in this area. Heck, the Marginal Revolution article quotes an expert as saying that superintelligence isn’t a big risk because “smart computers won’t create their own goals”, even though anyone who has read Bostrom knows that this is exactly the problem.

There is still a lot of work to be done. But cherry-picked articles about how “real AI researchers don’t worry about superintelligence” aren’t it.

[thanks to some people from MIRI and FLI for help with and suggestions on this post]

EDIT: Investigate for possible inclusion: Fredkin, Minsky
