International cooperation vs. AI arms race

Summary

I think there’s a decent chance that governments will be the first to build artificial general intelligence (AGI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities. That said, we may not want to popularize the arms-race consideration too openly, lest we accelerate the race.

Will governments build AI first?

AI poses a national-security threat, and unless the militaries of powerful countries are very naive, it seems to me unlikely they’d allow AI research to proceed in private indefinitely. At some point the US military would confiscate the project from Google or Goldman Sachs, if the US military isn’t already ahead of them in secret by that point. (DARPA already funds a lot of public AI research.)

There are some scenarios in which private AI research wouldn’t be nationalized:

  • An unexpected AI foom before anyone realizes what is coming.

  • The private developers stay underground for long enough not to be caught. This becomes less likely as government surveillance improves (see “Arms Control and Intelligence Explosions”).

  • AI developers move to a “safe haven” country where they can’t be taken over. (It seems like the international community might prevent this, however, in the same way it now seeks to suppress terrorism in other countries.)

Each of these scenarios could happen, but it seems most likely to me that governments would ultimately control AI development.

AI arms races

Government AI development could go wrong in several ways. Probably most people on LW think the most likely scenario is that governments would botch the process by not realizing the risks at hand. It’s also possible that governments would use the AI for malevolent, totalitarian purposes.

It seems that both of these bad scenarios would be exacerbated by international conflict. Greater hostility means countries are more inclined to use AI as a weapon. Indeed, whoever builds the first AI can take over the world, which makes building AI the ultimate arms race. A USA-China race is one reasonable possibility.

Arms races encourage risk-taking: competitors become willing to skimp on safety measures to improve their odds of winning (“Racing to the Precipice”). In addition, the weaponization of AI could lead to worse expected outcomes in general. CEV seems to have less hope of success in a Cold War scenario. (“What? You want to include the evil Chinese in your CEV??”) (ETA: With a pure CEV, presumably it would eventually count Chinese values even if it started with just Americans, because people would become more enlightened during the process. However, when we imagine cruder democratic decision outcomes, this becomes less likely.)
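
To make the risk-taking incentive concrete, here is a toy numerical sketch in Python. This is my own illustrative construction, not the actual model from “Racing to the Precipice”; the functional forms and numbers are invented assumptions. Each team picks a safety level; skimping speeds development and so raises the chance of winning the race, but a skimped winner is more likely to lose control of its AI.

    # Toy two-team AI race: each team picks a safety level s in [0, 1].
    # Lower safety means faster development, hence better odds of
    # winning, but a higher chance of catastrophe if you win.
    # All functional forms here are illustrative assumptions.

    def win_probability(own_safety, rival_safety):
        """Chance of finishing first: skimping on safety speeds you up."""
        own_speed = 1.0 - own_safety
        rival_speed = 1.0 - rival_safety
        total = own_speed + rival_speed
        return 0.5 if total == 0 else own_speed / total

    def expected_payoff(own_safety, rival_safety, win_value=1.0):
        """You only collect the prize if you win AND your (possibly
        skimped) AI remains controllable."""
        p_win = win_probability(own_safety, rival_safety)
        p_safe = own_safety  # assumed chance the winner's AI stays safe
        return p_win * p_safe * win_value

    # Best-response safety drops as the rival skimps more.
    for rival in (0.9, 0.5, 0.1):
        best = max((s / 100 for s in range(101)),
                   key=lambda s: expected_payoff(s, rival))
        print(f"rival safety {rival:.1f} -> my best-response safety {best:.2f}")

In this toy setup, the lower the rival’s safety level, the lower your own best-response safety, which is exactly the downward spiral the arms-race concern points at.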

Ways to avoid an arms race

Averting an AI arms race seems to be an important topic for research. It could be partly informed by the Cold War and other nuclear arms races, as well as by efforts at nonproliferation of chemical and biological weapons.

Apart from more robust arms control, other factors might help:

  • Improved international institutions like the UN, allowing for better enforcement against defection by one state.

  • In the long run, a scenario of global governance (i.e., a Leviathan or singleton) would likely be ideal for strengthening international cooperation, just as nation-states reduce intra-state violence.

  • Better construction and enforcement of nonproliferation treaties.

  • Improved game theory and international-relations scholarship on the causes of arms races and how to avert them. (For instance, arms races have sometimes been modeled as iterated prisoner’s dilemmas with imperfect information; a small sketch follows this list.)

  • How to improve verification, which has historically been a weak point for nuclear arms control. (The concern is that if you haven’t verified well enough, the other side might be arming while you’re not.)

  • Moral tolerance and multicultural perspective, aiming to reduce people’s sense of nationalism. (In the limit where neither Americans nor Chinese cared which government won the race, there would be no point in having the race.)

  • Improved trade, democracy, and other forces that have historically reduced the likelihood of war.
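
As a minimal sketch of the game-theoretic modeling mentioned in the list above (again an illustrative construction of my own in Python, not taken from any particular paper): two states repeatedly choose between restraint (“C”) and arming (“D”), but each side observes the other’s move only through imperfect verification, so even two conditional cooperators can slide into retaliation spirals.

    import random

    # Iterated prisoner's dilemma with imperfect information: each
    # state's view of the other's last move is noisy. The payoffs and
    # the 5% misreading rate are illustrative assumptions.
    PAYOFFS = {  # (my move, their move) -> my payoff; C = restrain, D = arm
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }
    NOISE = 0.05  # chance that verification misreads a move

    def observe(actual_move):
        """Imperfect verification: sometimes reports the wrong move."""
        if random.random() < NOISE:
            return "D" if actual_move == "C" else "C"
        return actual_move

    def tit_for_tat(observed_history):
        """Restrain first, then copy the rival's last *observed* move."""
        return observed_history[-1] if observed_history else "C"

    def play(rounds=1000):
        seen_by_a, seen_by_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = tit_for_tat(seen_by_a)
            move_b = tit_for_tat(seen_by_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            seen_by_a.append(observe(move_b))  # A's noisy view of B
            seen_by_b.append(observe(move_a))  # B's noisy view of A
        return score_a, score_b

    random.seed(0)
    # With noise, two tit-for-tat states tend to fall into retaliation
    # spirals and score well below the 3 per round of perfect trust.
    print(play())

In this toy model, lowering NOISE (i.e., better verification) directly stabilizes cooperation, which connects the verification bullet above to the game-theory bullet.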

Are these efforts cost-effective?

World peace is hardly a goal unique to effective altruists (EAs), so we shouldn’t necessarily expect low-hanging fruit. On the other hand, projects like nuclear nonproliferation seem relatively underfunded even compared with anti-poverty charities.

I suspect more direct MIRI-type research has higher expected value, but among EAs who don’t want to fund MIRI specifically, encouraging donations toward international cooperation could be valuable, since it’s certainly a more mainstream cause. I wonder if GiveWell would consider studying global cooperation specifically, beyond its indirect relationship with catastrophic risks.

Should we publicize AI arms races?

When I mentioned this topic to a friend, he pointed out that we might not want the idea of AI arms races too widely known, because then governments might take the concern more seriously and therefore start the race earlier, giving us less time to prepare and less time to work on FAI in the meanwhile. From David Chalmers, “The Singularity: A Philosophical Analysis” (footnote 14):

When I discussed these issues with cadets and staff at the West Point Military Academy, the question arose as to whether the US military or other branches of the government might attempt to prevent the creation of AI or AI+, due to the risks of an intelligence explosion. The consensus was that they would not, as such prevention would only increase the chances that AI or AI+ would first be created by a foreign power. One might even expect an AI arms race at some point, once the potential consequences of an intelligence explosion are registered. According to this reasoning, although AI+ would have risks from the standpoint of the US government, the risks of Chinese AI+ (say) would be far greater.

We should take this information-hazard concern seriously and remember the unilateralist’s curse. If this concern proves decisive against explicitly discussing AI arms races, we might instead encourage international cooperation without explaining why. Fortunately, it wouldn’t be hard to encourage international cooperation on grounds other than AI arms races if we wanted to do so.

ETA: Also note that a government-level arms race might be preferable to a Wild West race among a dozen private AI developers, where coordination and compromise would be not just difficult but potentially impossible.