Superintelligence 7: Decisive strategic advantage

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome. This week we discuss the seventh section in the reading guide: Decisive strategic advantage. This corresponds to Chapter 5.

This post summarizes the section, and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily the part being cited for the specific claim).

Reading: Chapter 5 (p78-91)


Summary

  1. Question: will a single artificial intelligence project get to ‘dictate the future’? (p78)

  2. We can ask, will a project attain a ‘decisive strategic advantage’, and will it use this to make a ‘singleton’?

    1. ‘Decisive strategic advantage’ = a level of technological and other advantages sufficient for complete world domination (p78)

    2. ‘Singleton’ = a single global decision-making agency strong enough to solve all major global coordination problems (p78, 83)

  3. A project will get a decisive strategic advantage if there is a big enough gap between its capability and that of other projects.

  4. A faster takeoff would make this gap bigger. Other factors would too, e.g. diffusion of ideas, regulation or expropriation of winnings, the ease of staying ahead once you are far enough ahead, and AI solutions to loyalty issues (p78-9)

  5. In some historical examples, leading projects have had a lead of a few months to a few years over those following them. (p79)

  6. Even if a second project starts taking off before the first is done, the first may emerge decisively advantageous. If we imagine takeoff accelerating, a project that starts out just behind the leading project might still be far inferior when the leading project reaches superintelligence (see the toy sketch after this summary). (p82)

  7. How large would a successful project be? (p83) If the route to superintelligence is not AI, the project probably needs to be big. If it is AI, size is less clear. If lots of insights are accumulated in open resources, and can be put together or finished by a small team, a successful AI project might be quite small (p83).

  8. We should distinguish between the size of the group working on the project and the size of the group that controls the project (p83-4)

  9. If large powers anticipate an intelligence explosion, they may want to monitor those involved and/or take control. (p84)

  10. It might be easy to monitor very large projects, but hard to trace small projects designed to be secret from the outset. (p85)

  11. Authorities may just not notice what’s going on, for instance if politically motivated firms and academics fight against their research being seen as dangerous. (p85)

  12. Various considerations suggest a superintelligence with a decisive strategic advantage would be more likely than a human group to use the advantage to form a singleton (p87-89)
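
As a toy illustration of point 6, here is a minimal sketch (my own, not from the book) of how a fixed head start in time becomes a large gap in capability once growth accelerates. The growth curve, the three-month lag, and the ‘superintelligence’ threshold are all illustrative assumptions rather than estimates.

```python
# Toy model of point 6: two projects follow the same accelerating growth
# curve, but the follower starts a fixed lag (here 3 months) behind the leader.
# All numbers are illustrative assumptions, not estimates from the book.

LAG_MONTHS = 3
THRESHOLD = 1e6  # arbitrary 'superintelligence' level, in units of starting capability

def capability(months):
    """Capability after `months` of work, with a growth rate that itself
    rises over time (a crude stand-in for an accelerating takeoff)."""
    level, rate = 1.0, 0.05   # start at 1 unit, initially growing 5% per month
    for _ in range(months):
        level *= 1 + rate
        rate *= 1.3           # the growth rate compounds as takeoff accelerates
    return level

# Step forward until the leader crosses the threshold, then compare projects.
month = 0
while capability(month) < THRESHOLD:
    month += 1

ratio = capability(month) / capability(month - LAG_MONTHS)
print(f"Month {month}: the leader is about {ratio:.0f}x the follower's level.")
```

With these particular numbers the leader ends up a couple of orders of magnitude ahead at the moment it crosses the threshold; the point is only that acceleration turns a small head start into a large capability gap, not that these numbers mean anything in themselves.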

Another view
This week, Paul Christiano contributes a guest sub-post on an alternative perspective:

Typically new technologies do not allow small groups to obtain a “decisive strategic advantage”—they usually diffuse throughout the whole world, or perhaps are limited to a single country or coalition during war. This is consistent with intuition: a small group with a technological advantage will still do further research slower than the rest of the world, unless their technological advantage overwhelms their smaller size.

The result is that small groups will be overtaken by big groups. Usually the small group will sell or lease their technology to society at large first, since a technology’s usefulness is proportional to the scale at which it can be deployed. In extreme cases such as war these gains might be offset by the cost of empowering the enemy. But even in this case we expect the dynamics of coalition-formation to increase the scale of technology-sharing until there are at most a handful of competing factions.

So any discussion of why AI will lead to a decisive strategic advantage must necessarily be a discussion of why AI is an unusual technology.

In the case of AI, the main difference Bostrom highlights is the possibility of an abrupt increase in productivity. In order for a small group to obtain such an advantage, their technological lead must correspond to a large productivity improvement. A team with a billion dollar budget would need to secure something like a 10,000-fold increase in productivity in order to outcompete the rest of the world. Such a jump is conceivable, but I consider it unlikely. There are other conceivable mechanisms distinctive to AI; I don’t think any of them have yet been explored in enough depth to be persuasive to a skeptical audience.
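
To unpack the arithmetic behind that ‘10,000-fold’ figure, here is a back-of-the-envelope sketch; the dollar amounts are round numbers chosen for illustration, not Christiano’s or Bostrom’s estimates.

```python
# Back-of-the-envelope version of the '10,000-fold' figure. Both dollar
# amounts are round illustrative assumptions, not anyone's actual estimates.
team_budget_per_year = 1e9             # a ~$1 billion project
world_relevant_effort_per_year = 1e13  # ~$10 trillion of competing world activity

required_multiplier = world_relevant_effort_per_year / team_budget_per_year
print(f"Productivity multiplier needed to match the world: {required_multiplier:,.0f}x")
# prints: Productivity multiplier needed to match the world: 10,000x
```

Whatever base one picks for ‘the rest of the world’, it is many thousands of times a billion-dollar budget, which is what makes the required jump look so large.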

Notes

1. Extreme AI capability does not imply strategic advantage. An AI program could be very capable—such that the sum of all instances of that AI worldwide were far superior (in capability, e.g. economic value) to the rest of humanity’s joint efforts—and yet the AI could fail to have a decisive strategic advantage, because it may not be a strategic unit. Instances of the AI may be controlled by different parties across society. In fact this is the usual outcome for technological developments.

2. On gaps between the best AI project and the second best AI project (p79). A large gap might develop either because of an abrupt jump in capability or extremely fast progress (which is much like an abrupt jump), or from one project having consistently faster growth than other projects for a time. Consistently faster progress is a bit like a jump, in that there is presumably some particular highly valuable thing that changed at the start of the fast progress. Robin Hanson frames his Foom debate with Eliezer as about whether there are ‘architectural’ innovations to be made, by which he means innovations which have a large effect (or so I understood from conversation). This seems like much the same question. On this, Robin says:

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

3. What should activists do? Bostrom points out that activists seeking maximum expected impact might wish to focus their planning on high leverage scenarios, where larger players are not paying attention (p86). This is true, but it’s worth noting that changing the probability of large players paying attention is also an option for activists, if they think the ‘high leverage scenarios’ are likely to be much better or worse.

4. Trade. One key question seems to be whether successful projects are likely to sell their products, or hoard them in the hope of soon taking over the world. I doubt this will be a strategic decision they will make—rather it seems that one of these options will be obviously better given the situation, and we are uncertain about which. A lone inventor of writing should probably not have hoarded it for a solitary power grab, even though it could reasonably have seemed like a good candidate for radically speeding up the process of self-improvement.


5. Disagreement. Note that though few people believe that a single AI project will get to dictate the future, this is often because they disagree with things in the previous chapter—e.g. that a single AI project will plausibly become more capable than the world in the space of less than a month.

6. How big is the AI project? Bostrom distinguishes between the size of the effort to make AI and the size of the group ultimately controlling its decisions. Note that the people making decisions for the AI project may also not be the people making decisions for the AI—i.e. the agents that emerge. For instance, the AI-making company might sell versions of their AI to a range of organizations, modified for their particular goals. While in some sense their AI has taken over the world, the actual agents are acting on behalf of much of society.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. When has anyone gained a ‘decisive strategic advantage’ at a smaller scale than the world? Can we learn anything interesting about what characteristics a project would need to have such an advantage with respect to the world?

  2. How scalable is innovative project secrecy? Examine past cases: the Manhattan Project, Bletchley Park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X.

  3. How large are the gaps in development time between modern software projects? What dictates this? (e.g. is there diffusion of ideas from engineers talking to each other? From people changing organizations? Do people get far enough ahead that it is hard to follow them?)

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about Cognitive superpowers (section 8). To prepare, read Chapter 6. The discussion will go live at 6pm Pacific time next Monday 3 November. Sign up to be notified here.