Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence

Abstract: As there are currently no obvious ways to create a safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and is not easy to control. We explore here ways to create the safest and simplest form of AI which may work as an AI Nanny, that is, a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI Police would be able to predict the actions of potential terrorists and bad actors and stop them in advance. Implementation of such AI Police will probably consist of two steps: first, a decisive strategic advantage via Narrow AI created by the intelligence service of a nuclear superpower, and then ubiquitous control over potentially dangerous agents that could create unauthorized artificial general intelligence, which could evolve into superintelligence.

Keywords: AI – existential risks – surveillance – world government – NSA

Highlights:

· Narrow AI may be used to achieve a decisive strategic advantage (DSA) and acquire global power.

· The most probable route to a DSA via Narrow AI is the creation of Narrow AI by the secret service of a nuclear superpower.

· The most probable places for its creation are the US National Security Agency or the Chinese government.

· Narrow AI may be used to create a Global AI Police for global surveillance, able to prevent the creation of dangerous AIs and most other existential risks.

· This solution is dangerous but realistic.

Permalink: https://philpapers.org/rec/TURNAN-3

Contents

1. Introduction

2. The main contradiction of the AI safety problem: AI must simultaneously exist and not exist

3. Decisive strategic advantage via Narrow AI

3.1. Non-self-improving AI can obtain a decisive advantage

3.2. Narrow AI is used to create non-AI world-dominating technology

3.3. Types of Narrow AI which may be used for obtaining a DSA

3.4. The knowability of a decisive advantage

4. AI-empowered reconnaissance organization of a nuclear superpower is the most probable place of origin of a Narrow AI DSA

4.1. Advantages of a secret Narrow AI program inside the government

4.2. Existing governmental and intelligence Narrow AI projects according to open sources

4.3. Who is winning the Narrow AI race?

5. Plan of implementation of AI Police via Narrow AI advantage

5.1. Steps of implementing AI safety via a Narrow AI DSA

5.2. Predictive AI Police based on Narrow AI: what and how to control

6. Obstacles and dangers

6.1. Catastrophic risks

6.2. Mafia-state, corruption, and the use of governmental AI by private individuals

Conclusion. Riding the wave of the AI revolution to a safer world

1. Introduction

This article is pessimistic. It assumes that there is no way to create a safe, benevolent self-improving superintelligence, and that the only way to avoid its creation is the implementation of some form of limited AI, which will work as a Global AI Nanny, controlling and preventing the appearance of dangerous AIs as well as other global risks.

The idea of an AI Nanny was first suggested by Goertzel (Goertzel, 2012); we have previously explored its levels of realization (Turchin & Denkenberger, 2017a). An AI Nanny does not itself need to be a superintelligence; if it were, all the same control problems would appear again (Muehlhauser & Salamon, 2012).

In this article, we will explore ways to create a non-superintelligent AI Nanny via Narrow AI. Doing so involves addressing two questions: first, how to achieve a decisive strategic advantage (DSA) via Narrow AI, and second, how to use such a system to achieve a level of effective global control sufficient to prevent the creation of superintelligent AI. In a sister article, we look at the next level of AI Nanny, based on human uploads, which currently seems a more remote possibility, but which may become possible after implementation of a Narrow AI Nanny (Turchin, 2017).

The idea of achieving a strategic advantage via AI before the creation of superintelligence was suggested by Sotala (Sotala, 2018), who called it a “major strategic advantage” as opposed to a “decisive strategic advantage”, which is overwhelmingly stronger but requires superintelligence. A similar line of thought was presented by Alex Mennen (Mennen, 2017).

Historically, there are several examples where an advantage in Narrow AI has been important. The most famous is the breaking of the German Enigma cipher via the electro-mechanical “cryptographic bombe” designed by Alan Turing, which automatically generated and tested hypotheses about the code settings (Welchman, 1982). It was an overwhelmingly more complex computing system than any other during WW2, and it gave the Allies informational dominance over the Axis powers. A more recent, but also more elusive, example is the case of Cambridge Analytica, which supposedly used its data-crunching advantage to contribute to the result of the 2016 US presidential election (Cottrell, 2018). Another example is the use of sophisticated cyberweapons like Stuxnet to disarm an enemy (Kushner, 2013).

The Chinese government’s facial recognition and human ranking system is a possible example not of a Narrow AI advantage, but of “global AI police”, which creates informational dominance over all independent agents; however, any totalitarian power worth the name had effective instruments for such informational domination even before computers, like the Stasi in East Germany.

To solve AI safety we will apply the theory of complex problem solving created by Altshuller (1999) in Section 2; discuss ways to reach a decisive advantage via Narrow AI in Section 3; examine, in Section 4, where such a Narrow AI advantage is most likely to originate; look, in Section 5, at how to use it to build AI Police able to monitor and prevent the creation of unauthorized self-improving AI; and, in Section 6, examine potential failure modes and dangers.

2. The main contradiction of the AI safety problem: AI must simultaneously exist and not exist

It is becoming widely accepted that sufficiently advanced AI may be a global catastrophic risk, especially if it becomes superintelligent in the process of recursive self-improvement (Bostrom, 2014; Yudkowsky, 2008). It has also been suggested that we should apply engineering standards of safety to the creation of AI (Yampolsky & Fox, 2013).

Engineering safety demands that the creation of an unpredictably explosive system whose safety cannot be proved (Yampolskiy, 2016) or incrementally tested should be prevented. For instance, no one wants a nuclear reactor with an unpredictable chain reaction; even in a nuclear bomb, the chain reaction should be predictable. Hence, if we really apply engineering safety to AI, there is only one way to do it:

Do not create artificial general intelligence (AGI).

However, we cannot prevent the creation of AGI by other agents, as there is no central global authority with the ability to monitor all AI labs and individuals. In addition, the probability of global cooperation is small because of the ongoing AI arms race between the US and China (Ding, 2018; Perez, 2017).

Moreover, if we postpone the creation of AGI, we could succumb to other global catastrophic risks, like biological risks (Millett & Snyder-Beattie, 2017; Turchin, Green, & Denkenberger, 2017), as only AI-powered global control may be sufficient to effectively prevent them. We need powerful AI to prevent all other risks.

In the terms of the problem-solving method TRIZ (Altshuller, 1999), the core contradiction of the AI problem is the following:

AGI must exist and not exist simultaneously.

What does it mean for AI to “exist and not exist simultaneously”? Several ways to limit the capabilities of AI so that it cannot be regarded as “fully existing” have been suggested:

1) No agency. In this case, AI does not exist as an agent separate from humans, so there is no alignment problem. For example, AI as a human augmentation, as envisioned in Musk’s Neuralink (Templeton, 2017).

2) No “artificial” component. AI is not created de novo, but is somehow connected with humans, perhaps via human uploading (Hanson, 2016). We will look more at this case in another article, “Human upload as AI Nanny”.

3) No “general intelligence”. The problem-solving ability of this AI arises not from its wit, but from its access to large amounts of data and other resources. It is also Narrow AI, not a universal AGI. This is the approach we will explore in the current article.

3. Decisive strategic advantage via Narrow AI

3.1. Non-self-improving AI can obtain a decisive advantage

Recently Sotala (2016), Christiano (2016), Mennen (2017), and Krakovna (2015) have explored the idea that AI may gain a DSA even without the capacity for self-improvement. Mennen wrote about the following conditions for the strategic advantage of non-self-improving AI:

1) World-taking capability outperforming self-improving capabilities, that is, “AIs are better at taking over the world than they are at programming AIs” (Mennen, 2017). He later suggests that, hypothetically, AI will be better than humans at some form of engineering. Sotala opined that, “for the AI to acquire a DSA, its level in some offensive capability must overcome humanity’s defensive capabilities” (Sotala, 2016).

2) Self-restriction in self-improvement. “An AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself” (Mennen, 2017). We have previously discussed some potential difficulties for any self-improving AI (Turchin & Denkenberger, 2017b). Mennen suggests that the AI’s advantage in that case will be less marked, so boxing may be more workable, and the AI is more likely to fail in its takeover attempt.

3) Alignment of non-self-improving AI is simpler. “AI alignment would be easier for AIs that do not undergo an intelligence explosion” (Mennen, 2017), as (a) it will be easier to monitor their goals, and (b) there will be less difference between our goals and the AI’s interpretation of them. This dichotomy was also explored by Maxwell (2017).

4) AI must obtain a DSA not only over humans, but over other AIs, as well as other nation-states. The need to have an advantage over other AIs depends on the number of AI-producing teams and the relative differences between them. We have looked at the nature of AI arms races in an earlier paper (Turchin & Denkenberger, 2017a). A smaller advantage will produce a slower ascension, and thus a multipolar outcome will be more likely.

Sotala added a distinction between the major strategic advantage provided by Narrow AI and the DSA provided by superintelligent AI (Sotala, 2018). Most of what we describe below falls in the first category. The smaller the advantage, the riskier and more uncertain its implementation, and the more violent the process of implementation could be.

In the next subsections we will explore how Narrow AI may be used to obtain a DSA.

3.2. Narrow AI is used to create non-AI world-dominating technology

Narrow AI may be implemented in several ways to obtain a DSA, and for a real DSA, these implementations should be combined. However, any such DSA will be temporary, and may hold for no more than about one year.

Nuclear war-winning strategy. Narrow AI systems could empower strategic planners with the ability to actually win a nuclear war with very little collateral damage or risk of global consequences; that is, they could calculate a route to a credible first-strike capability. For example, if nuclear strategy could be successfully formalized, like the game of Go, the country with the more powerful AI would win. There are several ways in which AI could provide such nuclear superiority:

- Strategic dominance. Create a detailed world model which could then be played in the same way as a board game. This is the most straightforward way, but it is less likely, as the creation of a perfect model is improbable without AGI and is difficult in the chaotic “real world”.

- Informational dominance. The ability to learn much more information about the enemy, e.g. the location of all its nuclear weapons and the codes to disable them. Such informational dominance may be used to disarm the enemy’s forces; it may also include learning all state secrets of the enemy while guaranteeing the preservation of one’s own secrets.

- Identify small actions with large consequences. This category includes actions such as blackmail of the enemy’s leaders and the use of cryptoweapons and false flags to corner the enemy. This approach will probably work only if combined with strategic dominance.

- Dominance in manufacturing. New manufacturing technology enables cheaper and deadlier missiles and other military hardware, such as drones, in large quantities. This especially applies to hard-to-detect first-strike weapons, like stealth cruise missiles.

- Deploy cyberweapons inside the enemy’s nuclear control chains. Something like an advanced form of a computer virus embedded in the nuclear control and warning systems.

Dominance in nuclear war does not necessarily mean that actual war will happen, but such dominance could be used to force the enemy to capitulate and agree to a certain inspection regime. However, a credible demonstration of the disarming capability may be needed to motivate compliance.

New technology which helps to produce other types of weapons.

- Biological weapons. Advances in computer-empowered bioengineering could produce targeted bioweapons. It may not be worthwhile to list all possible hazards which an unethical agent could use in a quest for global domination if the agent has access to superior biotechnology with science-fiction-level capabilities.

- Nanotechnology. Molecular manufacturing will allow the creation of new types of invisible self-replicating weapons, much more destructive than nukes.

Cyberweapons, that is, weapons which consist of computer programs and mostly affect other programs.

- Hidden switches in the enemy’s infrastructure.

- The ability to sever communication inside an opposing military.

- Full computerization of the army from the bottom to the top (De Spiegeleire, Maas, & Sweijs, 2017).

- Large drone swarms, like the slaughterbots from a famous video (Oberhaus, 2017), or the capability to manufacture them (Turchin & Denkenberger, 2018a).

- Financial instruments.

- Human-influencing capabilities (effective social manipulation such as targeted ads and fake news).

3.3. Types of Narrow AI which may be used for obtaining a DSA

There are several hypothetical ways in which Narrow AI could reach a DSA.

One is data-driven AIs: systems whose main power comes from access to large amounts of data, which compensates for their limited or narrow “pure” intelligence. One subcategory is “Big Brothers”: systems for criminal analysis like Palantir (recently mocked in the Senate as “Stanford Analytica” (Midler, 2018)), which unite mass surveillance with the ability to crunch big data and find patterns. Another subcategory is world simulations, which may be built from data collected about the world and its people in order to predict their behavior. The possessor of the better model of the world would win.
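As a purely illustrative sketch of what such a “world simulation” reduces to (the actors, variables, and update rule below are invented for the example and describe no real system), prediction amounts to fitting crude behavioral models to observed data and stepping them forward in time:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """A crude behavioral model of one actor, fitted from observed data."""
    discontent: float  # estimated level of discontent, 0..1
    influence: float   # how strongly this actor sways the others, 0..1

def step(actors, coupling=0.1):
    """Advance the toy simulation one tick: discontent diffuses through
    the population in proportion to each actor's influence."""
    pressure = sum(a.discontent * a.influence for a in actors) / len(actors)
    for a in actors:
        a.discontent = min(1.0, a.discontent + coupling * pressure)

def predict_unrest(actors, ticks=10, threshold=0.7):
    """Predict whether a majority of actors ends up above the threshold."""
    for _ in range(ticks):
        step(actors)
    return sum(a.discontent > threshold for a in actors) > len(actors) / 2

population = [Actor(0.5, 0.9), Actor(0.6, 0.4), Actor(0.3, 0.2)]
print(predict_unrest(population))
```

A real system of this kind would differ mainly in scale: millions of actors, thousands of fitted variables per actor, and continuous re-calibration against incoming surveillance data.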

Limited problem solvers are systems which outperform humans within certain narrow fields. These include:

- “Robotic minds” with limited agency and natural language processing capabilities, able to empower a robotic army, for example, as the brain of a drone swarm.

- Cryptographic supremacy. The case of Enigma shows the power of cryptographic supremacy over potential adversaries. Such supremacy might be enough to win WW3, as it would result in informational transparency for one side. Quantum computers could provide such supremacy via their ability to decipher codes (Preskill, 2012).

- Expert systems as Narrow Oracles, which could provide useful advice in some field, perhaps based on machine learning-based advice-generating software.

- Computer programs able to win strategic games. Something like a strategic planner with playing abilities, e.g. AlphaZero (Silver et al., 2017). Such a program may need either a hand-crafted world model or a connection with the “world simulations” described above. It may be further empowered by another system which is able to formalize any real-world situation as a game (see the sketch after this list).

- Narrow AI in engineering could dramatically increase the effectiveness of some form of weapons construction, for example, nuclear or biological weapons, nanotechnology, or robotics.
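As a minimal sketch of what “playing a formalized world model as a game” means computationally (the game interface below is a hypothetical stand-in, not any real strategic system), the core of such a planner is ordinary game-tree search over the states and moves that the world model exposes:

```python
def minimax(state, depth, maximizing, game):
    """Plain minimax search over a formalized game.

    `game` is assumed to expose legal_moves(state), apply(state, move)
    and evaluate(state); a real 'world model' would hide enormous
    complexity behind this tiny interface.
    """
    moves = game.legal_moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state), None
    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        value, _ = minimax(game.apply(state, move), depth - 1,
                           not maximizing, game)
        if (maximizing and value > best_value) or \
           (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```

Real systems such as AlphaZero replace the hand-written evaluate() with a learned value network and exhaustive minimax with Monte Carlo tree search, but the underlying idea of searching a formalized game remains.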

A Narrow AI advantage may also take the form of Narrow AI increasing the effectiveness of group intelligence. This could be graphical collective thinking systems: something like dynamic, collectively edited roadmaps, wikis, or Palantir. One attempt to create such a platform was Arbital (Arbital, 2017). The Ought project on factored cognition, related to Christiano’s “amplify and distill” approach, works on software which distributes different portions of cognitive tasks between teams (Ought, 2018). It may also take the form of AI-empowered personal search assistants, maybe with a simple brain–computer interface, or communication assistants which help to make conversations productive, record a conversation log, and show relevant internet links. Finally, group intelligence may be aggregated via large, self-improving organizations which combine all types of collective intelligence, hardware-producing capabilities, and the money to hire the best talent, like Google.

Sotala has discussed “mind coalescence” as a way to create more powerful minds (Sotala & Valpola, 2012). Danila Medvedev suggested that the use of a powerful collaborative information processing system, something between Wikipedia, Evernote, and a mind map, may significantly increase group intelligence. Similar ideas have been discussed by “Neuronet” enthusiasts like Luksha, who expect collective intelligence to be produced via brain implants (Mitin, 2014).

Superforecasting technology (Tetlock & Gardner, 2016), which aggregates individual predictions, as well as prediction markets, could be used to increase the power of such a “group brain”. A crude Soviet-era precursor of a forced “group brain” was the “sharashka” (Kerber & Hardesty, 1996): a scientific lab consisting of imprisoned scientists who were under government control and under pressure to make discoveries.
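As a toy illustration of how prediction aggregation might work (the method and numbers are our illustrative assumptions, not Tetlock’s actual algorithm), individual probability estimates can be averaged in log-odds space and then slightly extremized:

```python
import math

def aggregate_forecasts(probs, extremize=1.5):
    """Combine individual probability forecasts into a group forecast.

    Averages the forecasts in log-odds space and then 'extremizes' the
    result, pushing it away from 0.5; this often improves the calibration
    of crowd forecasts. The extremizing factor here is an arbitrary choice.
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    combined = extremize * mean_log_odds
    return 1 / (1 + math.exp(-combined))

# Five hypothetical analysts estimate the probability of the same event.
print(round(aggregate_forecasts([0.6, 0.7, 0.55, 0.65, 0.7]), 3))  # about 0.71
```

Prediction markets achieve a similar aggregation implicitly, through prices rather than explicit formulas.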

Narrow AI able to reach “informational dominance” over all potential enemies: in this situation, the enemy cannot keep any secrets and all its actions are constantly monitored. This could be achieved via sophisticated spyware in all computers; quantum computers for code breaking, or some exotic quantum technology like quantum radar or quantum computation using closed timelike curves; or microscopic robots, as small as a grain of salt, which could be secretly implanted in the adversary’s headquarters.

3.4. The knowability of a decisive advantage

Even if one side reaches the level of decisive advantage which provides it with the opportunity to take over the world, it may not realize what it possesses if it doesn’t know the capabilities of the other players, which could be made deliberately vague. For example, in the 1940s, the US had nuclear superiority, but the Soviet Union made vague claims in 1947 that the nuclear secret was no longer a secret (Timerbaev, 2003), thus creating uncertainty about its level of nuclear success.

To ensure a DSA, a rather invasive surveillance system would need to be implemented first; in other words, the advantage must be reached first in informational domination, to guarantee knowledge of the capabilities of all opponents. This could be done via AI created inside an intelligence service.

A DSA provided by Narrow AI will probably require a combination of several of the Narrow AI types listed in Section 3.3, and the only way to guarantee such dominance is the sheer size of the project. The size will depend on resource investments, first of all money, but also talent, and on the strategic coordination of all these projects into one workable system. It looks like only the US and China currently have the resources and determination needed for such a project.

If there is no knowable DSA, both sides may refrain from attacking each other. Armstrong et al. have created a model of the role of AI and mutual knowledge (Armstrong, Bostrom, & Shulman, 2016). Bostrom has also written about the topic in his article about AI openness (Bostrom, 2017).

A semi-stable solution consisting of two AIs may appear, as predicted by Lem (1959) and previously discussed by us (Turchin & Denkenberger, 2018b). Such a balance between two superpowers may work as a global AI Nanny, but much less effectively, as both sides may try to rush to develop superintelligent AI to obtain an insurmountable advantage.

Narrow AI provides a unique opportunity for a knowable DSA. For example, the creators of the cryptological bombe were not only able to break the enemy’s codes, but they probably knew that they outperformed the code-breaking technologies of the Axis, as the Axis did not mention the existence of their own code breaking and, more tellingly, did not switch to harder codes, which they would have done if they had possessed similar code-breaking technology. A Narrow AI-based DSA built on “informational domination” creates a unique opportunity for an almost peaceful world takeover that also includes AI Police able to prevent the creation of unauthorized superintelligent AIs.

4. AI-empowered reconnaissance organization of a nuclear superpower is the most probable place of origin of a Narrow AI DSA

4.1. Advantages of a secret Narrow AI program inside the government

During discussions at MIRI (at the time, the Singularity Institute) in the 2000s, the idea that government and military structures would be interested in creating superintelligent AI was dismissed, because it was considered that governments were too stupid to understand future AI capabilities, and thus the creation of AI in a small private company was regarded as more likely. But this is now certainly not the case.

There are several reasons why a Narrow AI-driven decisive strategic advantage could be achieved inside the governmental structures of the large nuclear superpowers, and moreover, inside a secret intelligence and data-crunching agency similar to the US National Security Agency (NSA). A nuclear superpower is already interested in world domination, or at least interested in preventing domination by other players. If geopolitics can be modeled as a strategic game, Narrow AI will help to achieve an advantage in such a game, as existing Narrow AIs demonstrate significantly superhuman abilities in winning complex games of the kind relevant to world dominance, like Go.

A nuclear superpower has almost unlimited money for secret AI projects compared with startups and commercial corporations. Historically, the data-crunching capabilities of secret services have outperformed civilian applications. An AI of the same power as a civilian one, but in the hands of a nuclear superpower, could dramatically outperform the civilian AI, because military AI could leverage several non-AI advantages available to the superpower: access to nuclear weapons, large computational resources, networks of sensors, pools of big data, a large concentration of experienced researchers, and other secret state programs.

Such a secret government AI organization could take advantage of the openness of the field of AI, as it could absorb information about the advances of others, but would not be legally obliged to share its own achievements. Thus, it would always outperform the current state of public knowledge. Governmental organizations have used this type of advantage before to dominate in cryptography.

4.2. Existing governmental and intelligence Narrow AI projects according to open sources

When we speak about Narrow AI inside a reconnaissance organization, we mean AI as a technology which increases the efficiency of data crunching within an organization which already has many advantages: very powerful instruments for collecting data, money, access to secret technology, the ability to attract the best minds, and the ability to educate and train them according to its standards.

The US NSA has been described as the world’s largest single employer of mathematicians (and there are several other computer-related security agencies in the US) (Love, 2014). The NSA employs around 40,000 people (Rosenbach, 2013) and has a budget of around 10 billion USD. For comparison, Google employed around 72,000 people in 2016 (Statista, 2018).

The NSA works on world simulations involving humans (Faggella, 2013) and has vowed to use AI (B. Williams, 2017). Wired has reported that “MonsterMind, like the film version of Skynet, is a defense surveillance system that would instantly and autonomously neutralize foreign cyberattacks against the US, and could be used to launch retaliatory strikes as well” (Zetter, 2015). An interesting overview of governmental data crunching is presented in the article “The New Military-Industrial Complex of Big Data Psy-Ops” (Shaw, 2018). It was reported that the CIA runs 137 secret AI projects (Jena, 2017). However, it is useless to search open data for the most serious AI projects aimed at world domination, as such data will doubtless be secret.

An example of a Narrow AI system which could be implemented to achieve a DSA is Palantir, which has been used for so-called “predictive policing” (Winston, 2018). Palantir is an instrument for searching large databases about people and finding hidden connections. Such a system also probably facilitates the collective intelligence of a group: conversation-support Narrow AI may record and transcribe conversations on the fly, suggest supporting links, generate ideas for brainstorming, and work as a mild Oracle AI in narrow domains. We do not claim here that Palantir is an instrument intended to take over the world, but that a Narrow AI providing a decisive strategic advantage may look much like it.

Another illustrative example of the Narrow AI systems we are speaking about is the Chinese company SenseTime, which stores data describing hundreds of millions of human faces and is used for applications like the Chinese social credit system (Murphy, 2018).

4.3. Who is winning the Narrow AI race?

It looks like the US is losing the momentum to implement any possible strategic advantage in Narrow AI for political reasons: the conflict of the Trump administration with other branches of power; Snowden-type leaks resulting in public outcry; and the campaign within Google against military AI collaboration with the government (Archer, 2018). If this is the case, China could seize the advantage later, as its relationship with private organizations is more structured, its political power is more centralized, and its ethical norms are different (Williams, 2018). There are several other powerful intelligence agencies of nuclear powers, such as those of Russia or Israel, which could do it, though the probability is lower.

However, recent Narrow AI-empowered election manipulation happened not through direct action by governments but via a small chain of private companies (Facebook and Cambridge Analytica). This demonstrates that Narrow AI may be used to obtain global power via the manipulation of elections.

In some sense, a world takeover using AI has already happened, if we count the efforts of Cambridge Analytica in the US election. But it is unlikely that Russian hackers combined with Russian intelligence services hold a decisive strategic advantage in Narrow AI. What we observe looks more like a reckless gamble based on a small temporary advantage.

5. Plan of implementation of AI Police via Narrow AI advantage

5.1. Steps of implementing AI safety via a Narrow AI DSA

This plan is not what we recommend, but simply the most logical course of action for a hypothetical “rational” agent. Basically, the plan consists of the following steps:

1) Gaining a knowable decisive advantage.

2) Using it for a world takeover.

3) Creating a global surveillance system (AI Police) that controls any possible sources of global risk, including biological risks, nuclear weapons, and unauthorized research in AI.

4) Banning advanced AI research altogether, or slowly advancing it via some safe path.

While the plan is more or less straightforward, its implementation could be both dangerous and immoral. Its main danger is that the plan means starting a war against the whole world without the overwhelming advantage that could be ensured only via superintelligence. War is always violent and unpredictable. We have written previously about the dangers of military AI (Turchin & Denkenberger, 2018b).

There is nothing good about such a plan; it would be much better if all countries instead peacefully contributed to the UN and formed a “committee for the prevention of global risks”. This is unlikely to happen now, but may occur if an obvious, even if small, risk of a global catastrophe appears, such as an incoming asteroid or a dangerous pandemic. The problem of the creation of such a committee requires additional analysis of how to use the momentum of emerging global risks to help such a committee form, become permanent, and act globally without exceptions. Even if such a committee were peacefully created, it would still need AI Police to monitor dangerous AI research.

5.2. Predictive AI Police based on Narrow AI: what and how to control

Even if world domination is reached using Narrow AI, such domination is not a final solution, as the dominating side must be able to take care of all global problems, including climate change, global catastrophic risks and, above all, the risk of the appearance of another, even more sophisticated or superintelligent AI, which could be unfriendly.

We will call “AI Police” a hypothetical instrument which is able to prevent the appearance of dangerous AI research anywhere on the globe. There are two interconnected questions about AI Police: what should be monitored, and how?

Such a system should be able to identify researchers or companies involved in illegal AI research (assuming that the creation of superintelligent AI is banned). AI Police instruments should be installed in every research center which presumably has such capabilities, and all such centers and researchers should be identified. Similar systems have already been suggested for detecting hackers (Brenton, 2018).

AI Police may identify signs of potentially dangerous activity (like smoke as a sign of fire). Palantir was used in New Orleans for “predictive policing”, where potential criminals were identified via analysis of their social network activity and then monitored more closely (Winston, 2018).
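As a minimal sketch of this “smoke as a sign of fire” logic (the indicators, weights, and organizations below are invented for illustration and make no claim about how Palantir or any real system works), such monitoring reduces to scoring observable signals and flagging entities above a threshold for closer review:

```python
# Hypothetical indicator weights; in a real system these would be learned
# from labeled data rather than assigned by hand.
WEIGHTS = {
    "large_gpu_purchases": 0.4,
    "hiring_ml_researchers": 0.3,
    "unregistered_datacenter": 0.5,
    "published_agi_roadmap": 0.2,
}

def risk_score(observed_signals):
    """Sum the weights of all signals observed for one organization."""
    return sum(WEIGHTS.get(signal, 0.0) for signal in observed_signals)

def flag_for_review(organizations, threshold=0.6):
    """Return names of organizations whose total score exceeds the threshold."""
    return [name for name, signals in organizations.items()
            if risk_score(signals) > threshold]

labs = {
    "LabA": ["large_gpu_purchases", "unregistered_datacenter"],
    "LabB": ["published_agi_roadmap"],
}
print(flag_for_review(labs))  # ['LabA'] under these illustrative weights
```

The hard problems lie not in this scoring step but in the data collection behind it and in the false-positive rate that any such predictive system inevitably produces.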

Such an AI Police system will do all the same things that intelligence agencies are doing now; the main difference is that there will be no blind spots. The main problem is how to create such a system so that it does not have a blind spot at its own center, which often happens with overcentralized systems. Maybe such a system could be created without centralization, based instead on ubiquitous transparency or some type of horizontal, networked solution.

Many possible types of Narrow AI with a DSA, e.g. one based on informational domination via superior information-gathering and data-crunching technology, could be directly transformed into AI Police. Other possible types, like a Narrow AI that wins the nuclear strategic game, could not be used for policing. In that case, additional solutions would have to be quickly invented.

6. Obstacles and dangers

6.1. Catastrophic risks

If one side wrongly estimates its advantage, the attempt to take over the world may result in world war. In addition, after a successful world takeover, a global totalitarian government, a “Big Brother”, may be formed. Bostrom has described such an outcome as an existential risk (Bostrom, 2002). Such a world government may indulge in unlimited corruption and ultimately fail catastrophically. Attempts to fight such a global government may produce other risks, like catastrophic terrorism.

If the “global government” fails to implement more advanced forms of AI, it may not be able to foresee future global risks; however, if it does try to implement advanced forms of AI, a new level of AI control problems will appear, and such a world government may not be the best setting in which to solve them.

Not every attempt at global takeover via Narrow AI would necessarily be aimed at the prevention of superintelligent AI. It is more likely to be motivated by some limited set of nationalistic or sectarian goals of the perpetrator, and thus, even after a successful takeover, the AI safety problem may continue to be underestimated. However, as the power of Narrow AI will be obvious after such a takeover, control over other AI projects will then likely be implemented.

6.2. Mafia-state, corruption, and the use of governmental AI by private individuals

While a bona fide national superpower could be imagined as a rational and conservative organization, in reality, governmental systems can be corrupted by people with personal egoistic goals, willing to take risks, privatize profits, and socialize losses. A government could be completely immersed in corruption, becoming what has been called a mafia-state (Naím, 2012). The main problem with such a corrupted organization is that its main goals are self-preservation and near-term profit, which lowers the quality of its strategic decisions. One example is how Cambridge Analytica was reportedly hired by Russian oligarchs to manipulate elections in the US and Britain, while these oligarchs themselves acted based on their local interests (Cottrell, 2018).

Conclusion. Riding the wave of the AI revolution to a safer world

Any AI safety solution should be implementable, that is, not contradict the general tendency of world development. We do not have 100 years to sit in a shrine and meditate on a provable form of AI safety (Yampolskiy, 2016): we need to take advantage of existing tendencies in AI development.

The current tendency is that Narrow AI is advancing while AGI is lagging. This creates the possibility of a Narrow AI-based strategic advantage, where Narrow AI is used to empower a group of people that also has access to nation-state-scale resources. Such an advantage will have a small window of opportunity, because there is fierce competition in AI research and AGI is coming. The group must make a decision: will it use this advantage for world domination, which carries the risk of starting a world war, or will it wait and see how the situation develops? Regardless of the risks, this Narrow AI-based approach could be our only chance to prevent the later creation of a hostile, non-aligned superintelligence.

Mennen, A. (2017). Existential risk from AI without an intelligence explosion. Retrieved from http://lesswrong.com/lw/p28/existential_risk_from_ai_without_an_intelligence/

Altshuller, G. S. (1999). The innovation algorithm: TRIZ, systematic innovation and technical creativity. Technical Innovation Center, Inc.

Arbital. (2017). Advanced agent. Arbital. Retrieved from https://arbital.com/p/advanced_agent/

Archer, J. (2018, May 31). Google draws up guidelines for its military AI following employee fury. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2018/05/31/google-draws-guidelines-military-ai-following-employee-fury/

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1).

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8(2), 135–148.

Brenton, L. (2018). Will Artificial Intelligence (AI) Stop Hacker Attacks? Stay Safe Online. Retrieved from https://staysafeonline.org/blog/will-artificial-intelligence-ai-stop-hacker-attacks/

Christiano, P. (2016). Prosaic AI alignment. Retrieved from https://ai-alignment.com/prosaic-ai-control-b959644d79c2

Cottrell, R. (2018, March 27). Why the Cambridge Analytica scandal could be much more serious than you think. The London Economic. Retrieved from https://www.thelondoneconomic.com/opinion/why-the-cambridge-analytica-scandal-could-be-much-more-serious-than-you-think/27/03/

De Spiegeleire, S., Maas, M., & Sweijs, T. (2017). Artificial intelligence and the future of defence. The Hague Centre for Strategic Studies. Retrieved from http://www.hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf

Ding, J. (2018). Deciphering China’s AI Dream.

Faggella, D. (2013, July 28). Sentient World Simulation and NSA Surveillance: Exploiting Privacy to Predict the Future? TechEmergence. Retrieved from https://www.techemergence.com/nsa-surveillance-and-sentient-world-simulation-exploiting-privacy-to-predict-the-future/

Goertzel, B. (2012). Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood? Journal of Consciousness Studies, 19(1–2), 96–111. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.3966&rep=rep1&type=pdf

Hanson, R. (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.

Jena, M. (2017, September 11). OMG! CIA Has 137 Secret Projects Going In Artificial Intelligence. Retrieved April 10, 2018, from https://techviral.net/cia-secret-artificial-intelligence-projects/

Kerber, L. L., & Hardesty, V. (1996). Stalin’s Aviation Gulag: A Memoir of Andrei Tupolev and the Purge Era. Smithsonian Institution Press, Washington, DC.

Krakovna, V. (2015, November 30). Risks from general artificial intelligence without an intelligence explosion. Retrieved March 25, 2018, from https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/

Kushner, D. (2013). The real story of Stuxnet. IEEE Spectrum, 50, 48–53.

Lem, S. (1959). The investigation. Przekrój, Poland.

Love, D. (2014). Mathematicians at the NSA. Business Insider. Retrieved from https://www.businessinsider.com/mathematicians-at-the-nsa-2014-6

Maxwell, J. (2017, December 31). Friendly AI through Ontology Autogeneration. Retrieved March 10, 2018, from https://medium.com/@pwgen/friendly-ai-through-ontology-autogeneration-5d375bf85922

Midler, N. (2018). What is ‘Stanford Analytica’ anyway? The Stanford Daily. Retrieved from https://www.stanforddaily.com/2018/04/10/what-is-stanford-analytica-anyway/

Millett, P., & Snyder-Beattie, A. (2017). Human Agency and Global Catastrophic Biorisks. Health Security, 15(4), 335–336.

Muehlhauser, L., & Salamon, A. (2012). Intelligence Explosion: Evidence and Import. In A. Eden, J. Søraker, & J. H. Moor (Eds.), The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.

Murphy, M. (2018, April 9). Chinese facial recognition company becomes world’s most valuable AI start-up. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2018/04/09/chinese-facial-recognition-company-becomes-worlds-valuable-ai/

Naím, M. (2012). Mafia states: Organized crime takes office. Foreign Affairs, 91, 100.

Oberhaus, D. (2017). Watch ‘Slaughterbots,’ A Warning About the Future of Killer Bots. Retrieved December 17, 2017, from https://motherboard.vice.com/en_us/article/9kqmy5/slaughterbots-autonomous-weapons-future-of-life

Ought. (2018). Factored Cognition (May 2018). Ought. Retrieved July 19, 2018, from https://ought.org/presentations/factored-cognition-2018-05

Perez, C. E. (2017, September 10). The West is Unaware of The Deep Learning Sputnik Moment. Retrieved April 6, 2018, from https://medium.com/intuitionmachine/the-deep-learning-sputnik-moment-3e5e7c41c5dd

Preskill, J. (2012). Quantum computing and the entanglement frontier. arXiv:1203.5813 [cond-mat, physics:quant-ph]. Retrieved from http://arxiv.org/abs/1203.5813

Rosenbach, M. (2013). Prism Leak: Inside the Controversial US Data Surveillance Program. SPIEGEL ONLINE. Retrieved from http://www.spiegel.de/international/world/prism-leak-inside-the-controversial-us-data-surveillance-program-a-904761.html

Shaw, T. (2018, March 21). The New Military-Industrial Complex of Big Data Psy-Ops. Retrieved April 10, 2018, from https://www.nybooks.com/daily/2018/03/21/the-digital-military-industrial-complex/

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., … Hassabis, D. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815 [cs]. Retrieved from http://arxiv.org/abs/1712.01815

Sotala, K. (2016). Decisive Strategic Advantage without a Hard Takeoff. Retrieved from http://kajsotala.fi/2016/04/decisive-strategic-advantage-without-a-hard-takeoff/#comments

Sotala, K. (2018). Disjunctive scenarios of catastrophic AI risk. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security. CRC Press. Retrieved from http://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf

Sotala, K., & Valpola, H. (2012). Coalescing minds: brain uploading-related group mind scenarios. International Journal of Machine Consciousness, 4(01), 293–312.

Statista. (2018). Number of Google employees 2017. Retrieved July 25, 2018, from https://www.statista.com/statistics/273744/number-of-full-time-google-employees/

Templeton, G. (2017). Elon Musk’s Neuralink Is Not a Neural Lace Company. Retrieved February 14, 2018, from https://www.inverse.com/article/30600-elon-musk-neuralink-neural-lace-neural-dust-electrode

Tetlock, P. E., & Gardner, D. (2016). Superforecasting: The Art and Science of Prediction (Reprint edition). Broadway Books.

Timerbaev, R. (2003). On the history of plans for international control over atomic energy. In History of the Soviet Atomic Project (1940s–1950s): International Symposium, Dubna, 1996. Proceedings, Vol. 3. Retrieved from http://elib.biblioatom.ru/text/istoriya-sovetskogo-atomnogo-proekta_t3_2003/go,214/

Turchin, A. (2017). Human upload as AI Nanny.

Turchin, A., & Denkenberger, D. (2017a). Global Solutions of the AI Safety Problem. Manuscript.

Turchin, A., & Denkenberger, D. (2017b). Levels of self-improvement of AI.

Turchin, A., & Denkenberger, D. (2018a). Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons. Under review in Journal of Military Ethics.

Turchin, A., & Denkenberger, D. (2018b). Military AI as a convergent goal of self-improving AI. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security. CRC Press.

Turchin, A., Green, B., & Denkenberger, D. (2017). Multiple Simultaneous Pandemics as Most Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Under review in Health Security.

Welchman, G. (1982). The Hut Six Story: Breaking the Enigma Codes. McGraw-Hill.

Williams, B. (2017). Spy chiefs set sights on AI and cyber. FCW. Retrieved from https://fcw.com/articles/2017/09/07/intel-insa-ai-tech-chiefs-insa.aspx

Williams, G. (2018, April 16). Why China will win the global race for complete AI dominance. Wired UK. Retrieved from https://www.wired.co.uk/article/why-china-will-win-the-global-battle-for-ai-dominance

Winston, A. (2018, February 27). Palantir has secretly been using New Orleans to test its predictive policing technology. The Verge. Retrieved from https://www.theverge.com/2018/2/27/17054740/palantir-predictive-policing-tool-new-orleans-nopd

Yampolskiy, R. (2016). Verifier Theory and Unverifiability. Retrieved from https://arxiv.org/abs/1609.00331

Yampolsky, R., & Fox, J. (2013). Safety engineering for artificial general intelligence. Topoi, 32, 217–226.

Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global Catastrophic Risks. Oxford, UK: Oxford University Press.

Zetter, K. (2015). So, the NSA Has an Actual Skynet Program. WIRED. Retrieved from https://www.wired.com/2015/05/nsa-actual-skynet-program/

Mitin, V. (2014). Neuronet (NeuroWeb) will become the next generation of the Internet. PC Week. Ideas and Practices of Automation, 17. (In Russian.)