Elites and AI: Stated Opinions

Previously, I asked "Will the world's elites navigate the creation of AI just fine?" My current answer is "probably not," but I think it's a question worth additional investigation.

As a preliminary step, and with the help of MIRI interns Jeremy Miller and Oriane Gaillard, I've collected a few stated opinions on the issue. This survey of stated opinions is not representative of any particular group, and is not meant to provide strong evidence about what is true on the matter. It's merely a collection of quotes we happened to find on the subject. Hopefully others can point us to other stated opinions — or state their own opinions.

MIRI researcher Eliezer Yudkowsky is famously pessimistic on this issue. For example, in a 2009 comment, he replied to the question "What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?" by saying "the answer is, 'None.' It's like asking how you should move your legs to walk faster than a jet plane" — again, implying extreme skepticism that political elites will manage AI properly.1

Cryptographer Wei Dai is also quite pessimistic:

...even in a relatively optimistic scenario, one with steady progress in AI capability along with apparent progress in AI control/safety (and nobody deliberately builds a UFAI for the sake of "maximizing complexity of the universe" or what have you), it's probably only a matter of time until some AI crosses a threshold of intelligence and manages to "throw off its shackles". This may be accompanied by a last-minute scramble by mainstream elites to slow down AI progress and research methods of scalable AI control, which (if it does happen) will likely be too late to make a difference.

Stanford philosopher Ken Taylor has also expressed pessimism, in an episode of Philosophy Talk called "Turbo-charging the mind":

Think about nuclear technology. It evolved in a time of war… The probability that nuclear technology was going to arise at a time when we use it well rather than [for] destruction was low… Same thing with… superhuman artificial intelligence. It's going to emerge… in a context in which we make a mess out of everything. So the probability that we make a mess out of this is really high.

Here, Taylor seems to express the view that humans are not yet morally and rationally advanced enough to be trusted with powerful technologies. This general view has been expressed before by many others, including Albert Einstein, who wrote that "Our entire much-praised technological progress… could be compared to an axe in the hand of a pathological criminal."

In response to Taylor's comment, MIRI researcher Anna Salamon (now Executive Director of CFAR) expressed a more optimistic view:

I… disagree. A lot of my colleagues would [agree with you] that 40% chance of human survival is absurdly optimistic… But, probably we're not close to AI. Probably by the time AI hits we will have had more thinking going into it… [Also,] if the Germans had successfully gotten the bomb and taken over the world, there would have been somebody who profited. If AI runs away and kills everyone, there's nobody who profits. There's a lot of incentive to try and solve the problem together...

Economist James Miller is another voice of pessimism. In Singularity Rising, chapter 5, he worries about game-theoretic mechanisms incentivizing speed of development over safety of development:

Successfully creating [superhuman AI] would give a country control of everything, making [superhuman AI] far more militarily useful than mere atomic weapons. The first nation to create an obedient [superhuman AI] would also instantly acquire the capacity to terminate its rivals' AI development projects. Knowing the stakes, rival nations might go full throttle to win [a race to superhuman AI], even if they understood that haste could cause them to create a world-destroying [superhuman AI]. These rivals might realize the danger and desperately wish to come to an agreement to reduce the peril, but they might find that the logic of the widely used game theory paradox of the Prisoners' Dilemma thwarts all cooperation efforts… Imagine that both the US and Chinese militaries want to create [superhuman AI]. To keep things simple, let's assume that each military has the binary choice to proceed either slowly or quickly. Going slowly increases the time it will take to build [superhuman AI] but reduces the likelihood that it will become unfriendly and destroy humanity. The United States and China might come to an agreement and decide that they will both go slowly… [But] if the United States knows that China will go slowly, it might wish to proceed quickly and accept the additional risk of destroying the world in return for having a much higher chance of being the first country to create [superhuman AI]. (During the Cold War, the United States and the Soviet Union risked destroying the world for less.) The United States might also think that if the Chinese proceed quickly, then they should go quickly, too, rather than let the Chinese be the likely winners of the… race.
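
Miller's US–China scenario has the standard Prisoner's Dilemma structure. As a minimal sketch (the payoff numbers below are purely illustrative assumptions of mine, not taken from Miller's book), the following Python snippet shows why "proceed quickly" can be each side's best response no matter what the rival does, even though both sides would prefer the outcome where everyone goes slowly:

```python
# Illustrative payoffs (my assumptions, not Miller's numbers): each entry is
# (US payoff, China payoff) for a pair of development speeds. Going fast
# improves a nation's odds of winning the race but raises catastrophic risk.
payoffs = {
    ("slow", "slow"): (3, 3),  # both cautious: safer world for everyone
    ("slow", "fast"): (0, 4),  # the cautious side almost certainly loses the race
    ("fast", "slow"): (4, 0),
    ("fast", "fast"): (1, 1),  # both reckless: high chance of catastrophe
}

def us_best_response(china_choice):
    """Speed that maximizes the US payoff, holding China's choice fixed."""
    return max(["slow", "fast"], key=lambda c: payoffs[(c, china_choice)][0])

for china_choice in ["slow", "fast"]:
    print(f"If China goes {china_choice}, the US prefers to go "
          f"{us_best_response(china_choice)}")
# Both lines print "fast": going fast is a dominant strategy, so the only
# equilibrium is (fast, fast), even though (slow, slow) is better for both.
```

Under any payoffs with this ordering, the unique equilibrium is mutual haste; that is the cooperation failure Miller worries about.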

In chapter 6, Miller expresses similar worries about corporate incentives and AI:

Paradoxically and tragically, the fact that [superhuman AI] would destroy mankind increases the chance of the private sector developing it. To see why, pretend that you're at the racetrack deciding whether to bet on the horse Recursive Darkness. The horse offers a good payoff in the event of victory, but her odds of winning seem too small to justify a bet—until, that is, you read the fine print on the racing form: "If Recursive Darkness loses, the world ends." Now you bet everything you have on her because you realize that the bet will either pay off or become irrelevant.
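
Miller's racetrack logic is a point about conditional expected value. As a toy calculation (the odds and payout below are made-up assumptions, not from the book), the bet looks bad on an ordinary expected-value basis, yet it wins out once every losing branch is a branch in which money no longer matters:

```python
# Made-up numbers for illustration: 10-to-1 odds (a profit of 10x the stake
# on a win), but only a 5% chance that Recursive Darkness wins.
p_win, profit_on_win, stake = 0.05, 10.0, 1.0

# Ordinary expected profit of betting: negative, so normally you would pass.
ordinary_ev = p_win * profit_on_win - (1 - p_win) * stake
print(f"Ordinary expected profit of the bet: {ordinary_ev:+.2f}")

# But per the racing form, every world in which the horse loses is a world
# that ends, so compare wealth only across the worlds where you survive
# (i.e., where the horse wins): betting leaves you richer there than not betting.
wealth_if_survive_and_bet = stake + profit_on_win
wealth_if_survive_no_bet = stake
print("Conditional on survival, betting is better:",
      wealth_if_survive_and_bet > wealth_if_survive_no_bet)
```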

Miller expanded on some of these points in his chapter in Singularity Hypotheses.

In a short reply to Miller, GMU economist Robin Hanson wrote that

[Miller's analysis is] only as useful as the assumptions on which it is based. Miller's chosen assumptions seem to me quite extreme, and quite unlikely.

Unfortunately, Hanson does not explain his reasons for rejecting Miller's analysis.

Sun Microsystems co-founder Bill Joy is famous for the techno-pessimism of his Wired essay "Why the Future Doesn't Need Us," but that article's predictions about elites' likely handling of AI are actually somewhat mixed:

we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.

One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor. In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.

The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed… Churchill remarked, in a famous left-handed compliment, that the American people and their leaders 'invariably do the right thing, after they have examined every other alternative.' In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all...

...And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.

Former GiveWell researcher Jonah Sinick has expressed optimism on the issue:

I personally am optimistic about the world's elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:

  1. I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.

  2. AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.

  3. The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time… The most rational people are the people who are most likely to be aware of and to work to avert AI risk...

  4. Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient...

  5. In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.

Paul Christiano is another voice of optimism about elites' handling of AI. Here are some snippets from his "mainline" scenario for AI development:

It becomes fairly clear some time in advance, perhaps years, that broadly human-competitive AGI will be available soon. As this becomes obvious, competent researchers shift into more directly relevant work, and governments and researchers become more concerned with social impacts and safety issues...

Call the point where the share of human workers is negligible point Y. After Y humans are very unlikely to maintain control over global economic dynamics—the effective population is overwhelmingly dominated by machine intelligences… This picture becomes clear to serious onlookers well in advance of the development of human-level AGI… [hence] there is much intellectual activity aimed at understanding these dynamics and strategies for handling them, carried out both in public and within governments.

Why should we expect the control problem to be solved? …at each point when we face a control problem more difficult than any we have faced so far and with higher consequences for failure, we expect to have faced slightly easier problems with only slightly lower consequences for failure in the past.

As long as solutions to the control problem are not quite satisfactory, the incentives to resolve control problems are comparable to the incentives to increase the capabilities of systems. If solutions are particularly unsatisfactory, then incentives to resolve control problems are very strong. So natural economic incentives build a control system (in the traditional sense from robotics) which keeps solutions to the control problem from being too unsatisfactory.

Christiano is no Pollyanna, however. In the same document, he outlines "what could go wrong," and what we might do about it.

Notes

1 I originally included another quote from Eliezer, but then I noticed that other readers on Less Wrong had elsewhere interpreted that same quote differently than I had, so I removed it from this post.