Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent the Creation of Dangerous Superintelligence

Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control. We explore here ways to create the safest and simplest form of AI which may work as an AI Nanny, that is, a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI Police would be able to predict the actions of potential terrorists and bad actors and stop them in advance. Implementation of such AI Police will probably consist of two steps: first, a decisive strategic advantage via Narrow AI created by the intelligence service of a nuclear superpower, and second, ubiquitous control over potentially dangerous agents which could create unauthorized artificial general intelligence that could evolve into superintelligence.

Keywords: AI – existential risks – surveillance – world government – NSA

Highlights:

· Narrow AI may be used to achieve a decisive strategic advantage (DSA) and acquire global power.

· The most probable route to DSA via Narrow AI is the creation of Narrow AI by the secret service of a nuclear superpower.

· The most probable places for its creation are the US National Security Agency or the Chinese Government.

· Narrow AI may be used to create a Global AI Police for global surveillance, able to prevent the creation of dangerous AIs and most other existential risks.

· This solution is dangerous but realistic.

Permalink: https://philpapers.org/rec/TURNAN-3

Contents

1. Introduction

2. The main contradiction of the AI safety problem: AI must simultaneously exist and not exist

3. Decisive strategic advantage via Narrow AI

3.1. Non-self-improving AI can obtain a decisive advantage

3.2. Narrow AI is used to create non-AI world-dominating technology

3.3. Types of Narrow AI which may be used for obtaining a DSA

3.4. The knowability of a decisive advantage

4. AI-empowered reconnaissance organization of a nuclear superpower is the most probable place of origin of a Narrow AI DSA

4.1. Advantages of a secret Narrow AI program inside the government

4.2. Existing governmental and intelligence Narrow AI projects according to open sources

4.3. Who is winning the Narrow AI race?

5. Plan of implementation of AI Police via Narrow AI advantage

5.1. Steps of implementing AI safety via Narrow AI DSA

5.2. Predictive AI Police based on Narrow AI: what and how to control

6. Obstacles and dangers

6.1. Catastrophic risks

6.2. Mafia-state, corruption, and the use of governmental AI by private individuals

Conclusion. Riding the wave of the AI revolution to a safer world

1. Introduction

This article is pessimistic. It assumes that there is no way to create a safe, benevolent self-improving superintelligence, and that the only way to escape its creation is the implementation of some form of limited AI, which will work as a Global AI Nanny, controlling and preventing the appearance of dangerous AIs as well as other global risks.

The idea of an AI Nanny was first suggested by Goertzel (Goertzel, 2012); we have previously explored its levels of realization (Turchin & Denkenberger, 2017a). An AI Nanny does not itself need to be a superintelligence; if it were, all the same control problems would appear again (Muehlhauser & Salamon, 2012).

In this article, we will explore ways to create a non-superintelligent AI Nanny via Narrow AI. Doing so involves addressing two questions: first, how to achieve a decisive strategic advantage (DSA) via Narrow AI, and second, how to use such a system to achieve a level of effective global control sufficient to prevent the creation of superintelligent AI. In a sister article, we look at the next level of AI Nanny, based on human uploads, which currently seems a more remote possibility, but which may become possible after implementation of a Narrow AI Nanny (Turchin, 2017).

The idea of achieving strategic advantage via AI before the creation of superintelligence was suggested by Sotala (Sotala, 2018), who called it a “major strategic advantage” as opposed to a “decisive strategic advantage”, which is overwhelmingly stronger but requires superintelligence. A similar line of thought was presented by Alex Mennen (Mennen, 2017).

Historically, there are several examples where an advantage in Narrow AI has been important. The most famous is the breaking of the German Enigma cipher via the electro-mechanical “cryptographic bombe” constructed by Alan Turing, which automatically generated and tested hypotheses about the code (Welchman, 1982). It was an overwhelmingly more complex computing system than any other during WW2, and it gave the Allies informational domination over the Axis powers. A more recent, but also more elusive, example is the case of Cambridge Analytica, which supposedly used its data-crunching advantage to contribute to the result of the 2016 US presidential election (Cottrell, 2018). Another example is the use of sophisticated cyberweapons like Stuxnet to disarm an enemy (Kushner, 2013).

The Chinese government’s facial recognition and human ranking system is a possible example not of a Narrow AI advantage, but of “global AI police”, which creates informational dominance over all independent agents; however, any totalitarian power worth its name had effective instruments for such informational domination even before computers, such as the Stasi in the former East Germany.

We apply the theory of complex problem solving created by Altshuller (1999) to AI safety in Section 2; discuss ways to reach a decisive advantage via Narrow AI in Section 3; examine, in Section 4, why the intelligence service of a nuclear superpower is the most probable place of origin of such an advantage; look, in Section 5, at how to use this advantage to build AI Police able to monitor and prevent the creation of unauthorized self-improving AI; and, in Section 6, examine potential failure modes and dangers.

2. The main contradiction of the AI safety problem: AI must simultaneously exist and not exist

It is becoming widely accepted that sufficiently advanced AI may be a global catastrophic risk, especially if it becomes superintelligent in the process of recursive self-improvement (Bostrom, 2014; Yudkowsky, 2008). It has also been suggested that we should apply engineering standards of safety to the creation of AI (Yampolsky & Fox, 2013).

Engineering safety demands preventing the creation of an unpredictably explosive system whose safety cannot be proved (Yampolskiy, 2016) or incrementally tested. For instance, no one wants a nuclear reactor with an unpredictable chain reaction; even in a nuclear bomb, the chain reaction should be predictable. Hence, if we really apply engineering safety to AI, there is only one way to do it:

Do not create artificial general intelligence (AGI).

However, we cannot prevent the creation of AGIs by other agents, as there is no central global authority with the ability to monitor all AI labs and individuals. In addition, the probability of global cooperation is small because of the ongoing AI arms race between the US and China (Ding, 2018; Perez, 2017).

Moreover, if we postpone the creation of AGI, we could succumb to other global catastrophic risks, such as biological risks (Millett & Snyder-Beattie, 2017; Turchin, Green, & Denkenberger, 2017), as only AI-powered global control may be sufficient to effectively prevent them. We need powerful AI to prevent all other risks.

In the terms of the problem-solving method TRIZ (Altshuller, 1999), the core contradiction of the AI problem is the following:

AGI must exist and not exist simultaneously.

What does it mean for AI to “exist and not exist simultaneously”? Several ways to limit the capabilities of AI so it can’t be regarded as “fully existing” have been suggested:

1) No agency. In this case, AI does not exist as an agent separate from humans, so there is no alignment problem. For example, AI as a human augmentation, as envisioned in Musk’s Neuralink (Templeton, 2017).

2) No “artificial” component. AI is not created de novo, but is somehow connected with humans, perhaps via human uploading (Hanson, 2016). We will look more at this case in another article, “Human upload as AI Nanny”.

3) No “general intelligence”. The problem-solving ability of this AI arises not from its wit, but from its access to large amounts of data and other resources. It is Narrow AI, not a universal AGI. This is the approach we will explore in the current article.

3. Decisive strategic advantage via Narrow AI

3.1. Non-self-improving AI can obtain a decisive advantage

Recently, Sotala (2016), Christiano (2016), Mennen (2017), and Krakovna (2015) have explored the idea that AI may gain a DSA even without the capacity for self-improvement. Mennen wrote about the following conditions for the strategic advantage of non-self-improving AI:

1) World-taking capability outperforming self-improving capabilities, that is, “AIs are better at taking over the world than they are at programming AIs” (Mennen, 2017). He suggests later that, hypothetically, AI will be better than humans at some form of engineering. Sotala opined that, “for the AI to acquire a DSA, its level in some offensive capability must overcome humanity’s defensive capabilities” (Sotala, 2016).

2) Self-restriction in self-improvement. “An AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself” (Mennen, 2017). We have previously discussed some potential difficulties for any self-improving AI (Turchin & Denkenberger, 2017b). Mennen suggests that the AI’s advantage in that case will be less marked, so boxing may be more workable, and the AI is more likely to fail in its takeover attempt.

3) Alignment of non-self-improving AI is simpler. “AI alignment would be easier for AIs that do not undergo an intelligence explosion” (Mennen, 2017), as a) it will be easier to monitor its goals, and b) there will be less difference between our goals and the AI’s interpretation of them. This dichotomy was also explored by Maxwell (2017).

4) AI must obtain a DSA not only over humans, but over other AIs, as well as other nation-states. The need to have an advantage over other AIs depends on the number of, and relative differences between, AI-producing teams. We have looked at the nature of AI arms races in an earlier paper (Turchin & Denkenberger, 2017a). A smaller advantage will produce a slower ascension, and thus a multipolar outcome will be more likely.

Sotala added a distinction between the major strategic advantage provided by Narrow AI and the DSA provided by superintelligent AI (Sotala, 2018). Most of what we describe below falls into the first category. The smaller the advantage, the riskier and more uncertain its implementation, and the more violent the implementation process could be.

In the next subsections we will explore how Narrow AI may be used to obtain a DSA.

3.2. Narrow AI is used to create non-AI world-dominating technology

Narrow AI may be implemented in several ways to obtain a DSA, and for a real DSA these implementations should be combined. However, any DSA will be temporary, and may be in place for no more than one year.

Nuclear war-winning strategy. Narrow AI systems could empower strategic planners with the ability to actually win a nuclear war with very little collateral damage or risk of global consequences. That is, they could calculate a route to a credible first-strike capability. For example, if nuclear strategy could be successfully formalized, like the game of Go, the country with the more powerful AI would win. There are several ways in which AI could provide such nuclear superiority:

- Strategic dominance. Create a detailed world model which could then be played in the same way as a board game. This is the most straightforward way, but it is less likely, as the creation of a perfect model is improbable without AGI and is difficult in the chaotic “real world”.

- Informational dominance. The ability to learn much more information about the enemy, e.g. the location of all its nuclear weapons and the codes to disable them. Such informational dominance may be used to disarm the enemy forces; it may also include learning all state secrets of the enemy while guaranteeing the preservation of one’s own secrets.

- Identify small actions with large consequences. This category includes actions such as blackmail of the enemy’s leaders and the use of cryptoweapons and false flags to corner the enemy. This approach will probably work if combined with strategic dominance.

- Dominance in manufacturing. New manufacturing technology enables cheaper and deadlier missiles and other military hardware, such as drones, in large quantities. This especially applies to invisible first-strike weapons, like stealth cruise missiles.

- Deploy cyberweapons inside the enemy’s nuclear control chains. Something like an advanced form of a computer virus embedded in the nuclear control and warning systems.

Dominance in nuclear war does not necessarily mean that an actual war will happen, but such dominance could be used to force the enemy to capitulate and agree to certain types of inspections. However, a credible demonstration of the disarming capability may be needed to motivate compliance.

New technology which helps to produce other types of weapons.

- Biological weapons. Advances in computer-empowered bioengineering could produce targeted bioweapons. It may not be worthwhile to list all possible hazards which an unethical agent could use in a quest for global domination if the agent has access to superior biotechnology with science-fiction-level capabilities.

- Nanotechnology. Molecular manufacturing will allow the creation of new types of invisible self-replicating weapons, much more destructive than nukes.

Cyberweapons, that is, weapons which consist of computer programs and mostly affect other programs.

- Hidden switches in the enemy’s infrastructure.

- The ability to sever communication inside an opposing military.

- Full computerization of the army from the bottom to the top (De Spiegeleire, Maas, & Sweijs, 2017).

- Large drone swarms, like the slaughterbots from a famous video (Oberhaus, 2017), or their manufacturing capabilities (Turchin & Denkenberger, 2018a).

- Financial instruments.

- Human-influencing capabilities (effective social manipulation like targeted ads and fake facts).

3.3. Types of Narrow AI which may be used for obtaining a DSA

There are several hypothetical ways in which Narrow AI could reach a DSA.

One is data-driven AIs: systems whose main power comes from access to large amounts of data, which compensates for their limited or narrow “pure” intelligence. This includes the subcategory of “Big Brothers”: systems of criminal analysis like Palantir (recently mocked in the Senate as “Stanford Analytica” (Midler, 2018)), which unite mass surveillance with the ability to crunch big data and find patterns. Another type is world simulations, which may be created from data collected about the world and its people in order to predict their behavior. The possessor of the better model of the world would win.

Limited problem solvers are systems which outperform humans within certain narrow fields, including:

- “Robotic minds” with limited agency and natural language processing capabilities, able to empower a robotic army, for example, as the brain of a drone swarm.

- Cryptographic supremacy. The case of Enigma shows the power of cryptographic supremacy over potential adversaries. Such supremacy might be enough to win WW3, as it will result in informational transparency for one side. Quantum computers could provide such supremacy via their ability to decipher codes (Preskill, 2012).

- Expert systems as Narrow Oracles, which could provide useful advice in some field, perhaps based on machine learning-based advice-generating software.

- Computer programs able to win strategic games. Something like a strategic planner with playing abilities, e.g. AlphaZero (Silver et al., 2017). Such a program may need either a hand-crafted world model or a connection with the “world simulations” described above. Such a system may be empowered by another system which is able to formalize any real-world situation as a game (see the sketch after this list).

- Narrow AI in engineering could dramatically increase the effectiveness of some forms of weapons construction, for example, nuclear or biological weapons, nanotechnology, or robotics.
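To make the game-playing item above more concrete, here is a minimal sketch of a planner that only needs a formalized game model: plain negamax search over an abstract game interface. The toy NimGame class is an invented stand-in for a hand-crafted world model, not any system described in this article or in the cited literature.

```python
# Minimal sketch: a game-agnostic negamax planner. Given any object that
# exposes legal_moves / apply / terminal_value, it returns the best move
# for the player whose turn it is.
def negamax(game, state):
    """Return (best_value, best_move) from the viewpoint of the player to move."""
    moves = game.legal_moves(state)
    if not moves:
        return game.terminal_value(state), None
    best_value, best_move = float("-inf"), None
    for move in moves:
        value = -negamax(game, game.apply(state, move))[0]
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move


class NimGame:
    """Toy 'world model': players alternately take 1-3 stones; whoever takes the last stone wins."""
    def legal_moves(self, stones):
        return [n for n in (1, 2, 3) if n <= stones]

    def apply(self, stones, taken):
        return stones - taken

    def terminal_value(self, stones):
        # No moves left: the previous player took the last stone, so the player to move has lost.
        return -1


print(negamax(NimGame(), 5))  # -> (1, 1): the first player wins by taking one stone
```

The hard part in the scenario described above is not the search itself but the hypothetical system that would translate a messy real-world situation into such a clean game interface.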

A Narrow AI advantage may also take the form of Narrow AI increasing the effectiveness of group intelligence. This could be graphical collective thinking systems, something like dynamic, collectively edited roadmaps, wikis, or Palantir. One attempt to create such a platform was Arbital (Arbital, 2017). Christiano et al.’s “amplify and distill” project works on factored cognition, which will be a smartphone app that distributes different portions of cognitive tasks between teams (Ought, 2018). It may also take the form of AI-empowered personal search assistants, maybe with a simple brain–computer interface, or communication assistants, which help to make conversation productive, record a conversation log, and show relevant internet links. Finally, group intelligence may be aggregated via large, self-improving organizations, like Google, which combine all types of collective intelligence, hardware-producing capabilities, money to hire the best talent, etc.

Sotala has discussed “mind coalescence” as a way to create more powerful minds (Sotala & Valpola, 2012). Danila Medvedev suggested that the use of a powerful collaborative information processing system, something between Wikipedia, Evernote, and a mind map, may significantly increase group intelligence. Similar ideas have been discussed by “Neuronet” enthusiasts like Luksha, where collective intelligence would be produced via brain implants (Mitin, 2014).

Superforecasting technology (Tetlock & Gardner, 2016), which aggregates individual predictions, as well as prediction markets, could be used to increase the power of the “group brain”. In Soviet times, a related way of concentrating group intelligence under state control was the “sharashka” (Kerber & Hardesty, 1996): a scientific lab consisting of imprisoned scientists who were under government control and under pressure to make discoveries.
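As a minimal sketch of the aggregation step, the snippet below averages individual probability forecasts in log-odds space and then “extremizes” the result; the extremizing exponent and the example numbers are illustrative assumptions, not values taken from Tetlock and Gardner.

```python
import math

def aggregate_forecasts(probabilities, extremize=1.5):
    """Combine individual probability forecasts into one group forecast."""
    # Average in log-odds space, then push the result away from 0.5 (extremizing),
    # a common trick for sharpening crowd forecasts.
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-extremize * mean_log_odds))

# Five forecasters estimate the probability of the same event.
print(aggregate_forecasts([0.6, 0.7, 0.65, 0.8, 0.55]))  # roughly 0.74
```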

Narrow AI able to reach “informational dominance” over all potential enemies: in this situation, the enemy can’t have any secrets and all its actions are constantly monitored. This could be achieved via sophisticated spyware in all computers; quantum computers for code breaking or some exotic quantum technology like quantum radar or quantum calculations using closed timelike curves; or microscopic robots, as small as a grain of salt, which could be secretly implanted in the adversary’s headquarters.

3.4. The knowability of a decisive advantage

Even if one side reaches the level of decisive advantage which provides it with the opportunity to take over the world, it may not realize what it possesses if it doesn’t know the capabilities of other players, which could be made deliberately vague. For example, in the 1940s the US had nuclear superiority, but the Soviet Union made vague claims in 1947 that the nuclear secret was no longer a secret (Timerbaev, 2003), thus creating uncertainty about its level of nuclear success.

To ensure a DSA, a rather invasive surveillance system would need to be implemented first; in other words, the advantage must be reached first in informational domination, to guarantee knowledge of the capabilities of all opponents. This could be done via AI created inside an intelligence service.

A DSA provided by Narrow AI will probably require a combination of several of the Narrow AI types listed in Section 3.3, and the only way to guarantee such dominance is the sheer size of the project. The size will depend on resource investments, first of all money, but also minds, and on the strategic coordination of all these projects into one workable system. It looks like only the US and China currently have the resources and determination needed for such a project.

If there is no knowable DSA, both sides may refrain from attacking each other. Armstrong et al. have created a model of the role of AI and mutual knowledge (Armstrong, Bostrom, & Shulman, 2016). Bostrom has also written about the topic in his article about AI openness (Bostrom, 2017).

A semi-stable solution consisting of two AIs may appear, as predicted by Lem (1959) and previously discussed by us (Turchin & Denkenberger, 2018b). Such a balance between two superpowers may work as a global AI Nanny, but much less effectively, as both sides may rush to develop superintelligent AI to obtain an insurmountable advantage.

Narrow AI provides a unique opportunity for a knowable DSA. For example, the creators of the cryptological bombe were not only able to break the enemy’s codes, but they probably knew that they outperformed the code-breaking technologies of the Axis, as the Axis did not mention the existence of their own code breaking and, more obviously, did not switch to harder codes, which they would have done if they had had similar code-breaking technology. A Narrow AI-based DSA grounded in “informational domination” creates a unique opportunity for an almost peaceful world takeover that also includes AI Police able to prevent the creation of unauthorized superintelligent AIs.

4. AI-empowered reconnaissance organization of a nuclear superpower is the most probable place of origin of a Narrow AI DSA

4.1. Advantages of a secret Narrow AI program inside the government

During discussions at MIRI (at the time, the Singularity Institute) in the 2000s, the idea that government and military structures would be interested in creating superintelligent AI was dismissed, because it was considered that governments were too stupid to understand future AI capabilities, and thus the creation of AI in a small private company was regarded as more likely. This is now certainly not the case.

There are several reasons why a Narrow AI-driven decisive strategic advantage could be achieved inside the governmental structure of a large nuclear superpower, and moreover, inside a secret intelligence and data-crunching agency similar to the National Security Agency (NSA) of the US. A nuclear superpower is already interested in world domination, or at least interested in preventing domination by other players. If geopolitics can be modeled as a strategic game, Narrow AI will help to achieve an advantage in such a game, as existing Narrow AIs demonstrate significantly superhuman abilities in winning complex games similar to games for world dominance, like Go.

A nuclear superpower has almost unlimited money for a secret AI project compared with startups and commercial corporations. Historically, the data-crunching capabilities of secret services have outperformed civilian applications. An AI of the same power as a civilian one, but in the hands of a nuclear superpower, could dramatically outperform the civilian AI. Military AI could leverage several non-AI advantages in the hands of the superpower: access to nuclear weapons, large computational resources, networks of sensors, pools of big data, a large concentration of experienced researchers, and other secret state programs.

Such a secret government AI organization could take advantage of the openness of the field of AI, as it could absorb information about the advances of others but would not be legally obliged to share its own achievements. Thus, it would always outperform the current state of public knowledge. Governmental organizations have used this type of advantage before to dominate in cryptography.

4.2. Existing governmental and intelligence Narrow AI projects according to open sources

When we speak about Narrow AI inside a reconnaissance organization, we mean AI as a technology which increases the efficiency of data crunching within an organization which already has many advantages: very powerful instruments for collecting data, money, access to secret technology, and the ability to attract the best minds and to educate and train them according to its standards.

The US NSA has been described as the world’s largest single employer of mathematicians (and there are several other computer-related security agencies in the US) (Love, 2014). The NSA employs around 40,000 people (Rosenbach, 2013) and has a budget of around 10 billion USD. For comparison, Google employed 72,000 people in 2016 (Statista, 2018).

The NSA works on world simulations that include humans (Faggella, 2013) and has vowed to use AI (B. Williams, 2017). Wired has reported that “MonsterMind, like the film version of Skynet, is a defense surveillance system that would instantly and autonomously neutralize foreign cyberattacks against the US, and could be used to launch retaliatory strikes as well” (Zetter, 2015). An interesting overview of governmental data crunching is presented in the article “The New Military-Industrial Complex of Big Data Psy-Ops” (Shaw, 2018). It has been reported that the CIA runs 137 secret AI projects (Jena, 2017). However, it is useless to search open data for the most serious AI projects aimed at world domination, as such data will doubtless be secret.

An example of a Narrow AI system which could be implemented to achieve a DSA is Palantir, which was used for so-called “predictive policing technology” (Winston, 2018). Palantir is an instrument for searching large databases about people and finding hidden connections. Such a system also probably facilitates the collective intelligence of a group: conversation-support Narrow AI may record and transcribe conversation on the fly, suggest supporting links, generate ideas for brainstorming, and work as a mild Oracle AI in narrow domains. We do not claim here that Palantir is an instrument intended to take over the world, but that a Narrow AI providing a decisive strategic advantage may look much like it.

Another illustrative example of the Narrow AI systems we are speaking about is the Chinese SenseTime, which stores data describing hundreds of millions of human faces and is used for applications like the Chinese social credit system (Murphy, 2018).

4.3. Who is winning the Narrow AI race?

It looks like the US is losing the momentum to implement any possible strategic advantage in Narrow AI for political reasons: the conflict of the Trump administration with other branches of power; Snowden-type leaks resulting in public outcry; and the campaign within Google against military AI collaboration with the government (Archer, 2018). If this is the case, China could take this advantage later, as its relationship with private organizations is more structured, political power is more centralized, and ethical norms are different (Williams, 2018). There are several other powerful intelligence agencies of nuclear powers, such as those of Russia or Israel, which could do it, though the probability is lower.

However, recent Narrow AI-empowered election manipulation happened not through direct action by governments but via a small chain of private companies (Facebook and Cambridge Analytica). This demonstrates that Narrow AI may be used to obtain global power via manipulation of elections.

In some sense, a world takeover using AI has already happened, if we count the efforts of Cambridge Analytica in the US election. But it is unlikely that Russian hackers combined with Russian intelligence services have a decisive strategic advantage in Narrow AI. What we observe looks more like a reckless gamble based on a small temporary advantage.

5. Plan of implementation of AI Police via Narrow AI advantage

5.1. Steps of implementing AI safety via Narrow AI DSA

The plan described here is not what we recommend, but simply the most logical course of action for a hypothetical “rational” agent. Basically, this plan consists of the following steps:

1) Gaining a knowable decisive advantage.

2) Using it for a world takeover.

3) Creating a global surveillance system (AI Police) that controls any possible sources of global risk, including biological risks, nuclear weapons, and unauthorized research in AI.

4) Banning advanced AI research altogether, or slowly advancing it via some safe path.

While the plan is more or less straightforward, its implementation could be both dangerous and immoral. Its main danger is that the plan means starting a war against the whole world without the infinitely large advantage that could be ensured only via superintelligence. War is always violent and unpredictable. We have written previously about the dangers of military AI (Turchin & Denkenberger, 2018b).

There is nothing good about such a plan; it would be much better if all countries instead peacefully contributed to the UN and formed a “committee for the prevention of global risks”. This is unlikely to happen now but may occur if an obvious small risk of a global catastrophe appears, such as an incoming asteroid or a dangerous pandemic. The problem of the creation of such a committee requires additional analysis of how to use the momentum of emerging global risks to help such a committee form, become permanent, and act globally without exceptions. Even if such a committee were peacefully created, it would still need AI Police to monitor dangerous AI research.

5.2. Predictive AI Police based on Narrow AI: what and how to control

Even if world domination is reached using Narrow AI, such domination is not a final solution, as the dominating side should be able to take care of all global problems, including climate change, global catastrophic risks and, first of all, the risk of the appearance of another, even more sophisticated or superintelligent AI which could be unfriendly.

We will call “AI Police” a hypothetical instrument which is able to prevent the appearance of dangerous AI research anywhere on the globe. There are two interconnected questions about AI Police: what should be monitored, and how?

Such a system should be able to identify researchers or companies involved in illegal AI research (assuming that the creation of superintelligent AI is banned). AI Police instruments should be installed in every research center which presumably has such capabilities, and all such centers or researchers should be identified. Similar systems have already been suggested for finding hackers (Brenton, 2018).

AI Police may identify signs of potentially dangerous activity (like smoke as a sign of fire). Palantir was used in New Orleans for “predictive policing”, where potential criminals were identified via analysis of their social network activity and then monitored more closely (Winston, 2018).
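A minimal sketch of this “smoke as a sign of fire” idea is a weighted score over observable signals; the signal names and weights below are invented for illustration, and a real system would presumably learn them from data rather than hard-code them.

```python
# Hypothetical risk scoring over observable signals of unauthorized AGI research.
# All signal names and weights are illustrative assumptions, not a real system.
RISK_WEIGHTS = {
    "large_gpu_cluster_purchases": 0.3,
    "publications_on_recursive_self_improvement": 0.4,
    "hiring_spike_of_ml_researchers": 0.1,
    "unexplained_compute_usage_spikes": 0.2,
}

def risk_score(observed_signals):
    """Combine binary signals (0 or 1) into a score between 0 and 1."""
    return sum(RISK_WEIGHTS[name] * value
               for name, value in observed_signals.items()
               if name in RISK_WEIGHTS)

# A lab showing two of the four signals gets a moderate score and would be
# flagged for closer monitoring rather than immediate intervention.
observed = {
    "large_gpu_cluster_purchases": 1,
    "publications_on_recursive_self_improvement": 0,
    "hiring_spike_of_ml_researchers": 1,
    "unexplained_compute_usage_spikes": 0,
}
print(risk_score(observed))  # -> 0.4
```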

Such an AI Police system will do all the same things that intelligence agencies are doing now; the main difference is that there will be no blind spots. The main problem is how to create such a system so it does not have a blind spot at its center, which often happens with overcentralized systems. Maybe such a system could be created without centralization, based instead on ubiquitous transparency or some type of horizontal network solution.

Many possible types of Narrow AI with a DSA, e.g. one based on informational domination via superior information-gathering and data-crunching technology, could be directly transformed into AI Police. Other possible types, like a Narrow AI that wins the nuclear strategic game, could not be used for policing. In that case, additional solutions would have to be quickly invented.

6. Obstacles and dangers

6.1. Catastrophic risks

If one side wrongly estimates its advantage, the attempt to take over the world may result in world war. In addition, after a successful world takeover, a global totalitarian government, “Big Brother”, may be formed. Bostrom has described such an outcome as an existential risk (Bostrom, 2002). Such a world government may indulge in unlimited corruption and ultimately fail catastrophically. Attempts to fight such a global government may produce another risk, like catastrophic terrorism.

If the “global government” fails to implement more advanced forms of AI, it may not be able to foresee future global risks; however, if it does try to implement advanced forms of AI, a new level of AI control problems will appear, and such a world government may not be the best entity to solve them.

Not every attempt at a global takeover via Narrow AI would necessarily be aimed at the prevention of superintelligent AI. It is more likely to be motivated by some limited set of nationalistic or sectarian goals of the perpetrator, and thus, even after a successful takeover, the AI safety problem would continue to be underestimated. However, as the power of Narrow AI will be obvious after such a takeover, control over other AI projects will then be implemented.

6.2. Mafia-state, corruption, and the use of governmental AI by private individuals

While a bona fide national superpower could be imagined as a rational and conservative organization, in reality governmental systems can be corrupted by people with personal egoistic goals, willing to take risks, privatize profits, and socialize losses. A government could be completely immersed in corruption, becoming what has been called a mafia-state (Naím, 2012). The main problem with such a corrupted organization is that its main goals are self-preservation and near-term profit, which lowers the quality of its strategic decisions. One example is how Cambridge Analytica was hired by Russian oligarchs to manipulate elections in the US and Britain, while these oligarchs themselves acted based on their local interests (Cottrell, 2018).

Conclusion. Riding the wave of the AI revolution to a safer world

Any AI safety solution should be implementable, that is, it should not contradict the general tendency of world development. We do not have 100 years to sit in a shrine and meditate on a provable form of AI safety (Yampolskiy, 2016): we need to take advantage of existing tendencies in AI development.

The current tendency is that Narrow AI is advancing while AGI is lagging. This creates the possibility of a Narrow AI-based strategic advantage, where Narrow AI is used to empower a group of people that also has access to nation-state-scale resources. Such an advantage will have a small window of opportunity, because there is fierce competition in AI research and AGI is coming. The group with this advantage must make a decision: will it use the advantage for world domination, which carries the risk of starting a world war, or will it wait and see how the situation develops? Regardless of the risks, this Narrow AI-based approach could be our only chance to stop the later creation of a hostile non-aligned superintelligence.

Mennen, A. (2017). Existential risk from AI without an intelligence explosion. Retrieved from http://lesswrong.com/lw/p28/existential_risk_from_ai_without_an_intelligence/

Altshuller, G. S. (1999). The innovation algorithm: TRIZ, systematic innovation and technical creativity. Technical Innovation Center, Inc.

Arbital. (2017). Advanced agent. Arbital. Retrieved from https://arbital.com/p/advanced_agent/

Archer, J. (2018, May 31). Google draws up guidelines for its military AI following employee fury. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2018/05/31/google-draws-guidelines-military-ai-following-employee-fury/

Armstrong, S., Bostrom, N., & Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201–206. https://doi.org/10.1007/s00146-015-0590-y

Bostrom, N. (2002). Existential risks: Analyzing Human Extinction Scenarios and Related Hazards. Journal of Evolution and Technology, 9(1).

Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.

Bostrom, N. (2017). Strategic Implications of Openness in AI Development. Global Policy, 8(2), 135–148.

Brenton, L. (2018). Will Artificial Intelligence (AI) Stop Hacker Attacks? Stay Safe Online. Retrieved from https://staysafeonline.org/blog/will-artificial-intelligence-ai-stop-hacker-attacks/

Christiano, P. (2016). Prosaic AI alignment. Retrieved from https://ai-alignment.com/prosaic-ai-control-b959644d79c2

Cottrell, R. (2018, March 27). Why the Cambridge Analytica scandal could be much more serious than you think. The London Economic. Retrieved from https://www.thelondoneconomic.com/opinion/why-the-cambridge-analytica-scandal-could-be-much-more-serious-than-you-think/27/03/

De Spiegeleire, S., Maas, M., & Sweijs, T. (2017). Artificial intelligence and the future of defence. The Hague Centre for Strategic Studies. Retrieved from http://www.hcss.nl/sites/default/files/files/reports/Artificial%20Intelligence%20and%20the%20Future%20of%20Defense.pdf

Ding, J. (2018). Deciphering China’s AI Dream.

Faggella, D. (2013, July 28). Sentient World Simulation and NSA Surveillance—Exploiting Privacy to Predict the Future? TechEmergence. Retrieved from https://www.techemergence.com/nsa-surveillance-and-sentient-world-simulation-exploiting-privacy-to-predict-the-future/

Goertzel, B. (2012). Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood? Journal of Consciousness Studies, 19(1–2), 96–111. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.3966&rep=rep1&type=pdf

Hanson, R. (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.

Jena, M. (2017, September 11). OMG! CIA Has 137 Secret Projects Going In Artificial Intelligence. Retrieved April 10, 2018, from https://techviral.net/cia-secret-artificial-intelligence-projects/

Kerber, L. L., & Hardesty, V. (1996). Stalin’s Aviation Gulag: A Memoir of Andrei Tupolev and the Purge Era. Smithsonian Institution Press, Washington, DC.

Krakovna, V. (2015, November 30). Risks from general artificial intelligence without an intelligence explosion. Retrieved March 25, 2018, from https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/

Kushner, D. (2013). The real story of Stuxnet. IEEE Spectrum, 50, 48–53.

Lem, S. (1959). The investigation. Przekrój, Poland.

Love, D. (2014). Mathematicians at the NSA. Business Insider. Retrieved from https://www.businessinsider.com/mathematicians-at-the-nsa-2014-6

Maxwell, J. (2017, December 31). Friendly AI through Ontology Autogeneration. Retrieved March 10, 2018, from https://medium.com/@pwgen/friendly-ai-through-ontology-autogeneration-5d375bf85922

Midler, N. (2018). What is ‘Stanford Analytica’ anyway? The Stanford Daily. Retrieved from https://www.stanforddaily.com/2018/04/10/what-is-stanford-analytica-anyway/

Millett, P., & Snyder-Beattie, A. (2017). Human Agency and Global Catastrophic Biorisks. Health Security, 15(4), 335–336.

Muehlhauser, L., & Salamon, A. (2012). Intelligence Explosion: Evidence and Import. In A. Eden, J. Søraker, & J. H. Moor (Eds.), The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.

Murphy, M. (2018, April 9). Chinese facial recognition company becomes world’s most valuable AI start-up. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/2018/04/09/chinese-facial-recognition-company-becomes-worlds-valuable-ai/

Naím, M. (2012). Mafia states: Organized crime takes office. Foreign Affairs, 91, 100.

Oberhaus, D. (2017). Watch ‘Slaughterbots,’ A Warning About the Future of Killer Bots. Retrieved December 17, 2017, from https://motherboard.vice.com/en_us/article/9kqmy5/slaughterbots-autonomous-weapons-future-of-life

Ought. (2018). Factored Cognition (May 2018). Retrieved July 19, 2018, from https://ought.org/presentations/factored-cognition-2018-05

Perez, C. E. (2017, September 10). The West is Unaware of the Deep Learning Sputnik Moment. Retrieved April 6, 2018, from https://medium.com/intuitionmachine/the-deep-learning-sputnik-moment-3e5e7c41c5dd

Preskill, J. (2012). Quantum computing and the entanglement frontier. arXiv:1203.5813 [cond-mat, physics:quant-ph]. Retrieved from http://arxiv.org/abs/1203.5813

Rosenbach, M. (2013). Prism Leak: Inside the Controversial US Data Surveillance Program. SPIEGEL ONLINE. Retrieved from http://www.spiegel.de/international/world/prism-leak-inside-the-controversial-us-data-surveillance-program-a-904761.html

Shaw, T. (2018, March 21). The New Military-Industrial Complex of Big Data Psy-Ops. Retrieved April 10, 2018, from https://www.nybooks.com/daily/2018/03/21/the-digital-military-industrial-complex/

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., … Hassabis, D. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815 [cs]. Retrieved from http://arxiv.org/abs/1712.01815

Sotala, K. (2016). Decisive Strategic Advantage without a Hard Takeoff. Retrieved from http://kajsotala.fi/2016/04/decisive-strategic-advantage-without-a-hard-takeoff/#comments

Sotala, K. (2018). Disjunctive scenarios of catastrophic AI risk. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security. CRC Press. Retrieved from http://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf

Sotala, K., & Valpola, H. (2012). Coalescing minds: brain uploading-related group mind scenarios. International Journal of Machine Consciousness, 4(01), 293–312.

Statista. (2018). Number of Google employees 2017. Retrieved July 25, 2018, from https://www.statista.com/statistics/273744/number-of-full-time-google-employees/

Templeton, G. (2017). Elon Musk’s NeuraLink Is Not a Neural Lace Company. Retrieved February 14, 2018, from https://www.inverse.com/article/30600-elon-musk-neuralink-neural-lace-neural-dust-electrode

Tetlock, P. E., & Gardner, D. (2016). Superforecasting: The Art and Science of Prediction (Reprint edition). Broadway Books.

Timerbaev, R. (2003). History of the international control of nuclear energy [On the history of plans for international control over atomic energy]. In History of the Soviet Atomic Project (1940s–1950s): International Symposium, Dubna, 1996. Proceedings, Vol. 3. Retrieved from http://elib.biblioatom.ru/text/istoriya-sovetskogo-atomnogo-proekta_t3_2003/go,214/

Turchin, A. (2017). Human upload as AI Nanny.

Turchin, A., & Denkenberger, D. (2017a). Global Solutions of the AI Safety Problem. Manuscript.

Turchin, A., & Denkenberger, D. (2017b). Levels of self-improvement of AI.

Turchin, A., & Denkenberger, D. (2018a). Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons. Under review in Journal of Military Ethics.

Turchin, A., & Denkenberger, D. (2018b). Military AI as convergent goal of the self-improving AI. In R. Yampolskiy (Ed.), Artificial Intelligence Safety and Security. CRC Press.

Turchin, A., Green, B., & Denkenberger, D. (2017). Multiple Simultaneous Pandemics as Most Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Under review in Health Security.

Welchman, G. (1982). The hut six story: breaking the enigma codes. McGraw-Hill Companies.

Williams, B. (2017). Spy chiefs set sights on AI and cyber. FCW. Retrieved from https://fcw.com/articles/2017/09/07/intel-insa-ai-tech-chiefs-insa.aspx

Williams, G. (2018, April 16). Why China will win the global race for complete AI dominance. Wired UK. Retrieved from https://www.wired.co.uk/article/why-china-will-win-the-global-battle-for-ai-dominance

Winston, A. (2018, February 27). Palantir has secretly been using New Orleans to test its predictive policing technology. The Verge. Retrieved from https://www.theverge.com/2018/2/27/17054740/palantir-predictive-policing-tool-new-orleans-nopd

Yampolskiy, R. (2016). Verifier Theory and Unverifiability. Retrieved from https://arxiv.org/abs/1609.00331

Yampolsky, R., & Fox, J. (2013). Safety engineering for artificial general intelligence. Topoi, 32, 217–226.

Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global Catastrophic Risks. Oxford, UK: Oxford University Press.

Mitin, V. (2014). Neuronet (NeuroWeb) will become the next generation of the Internet [in Russian]. PC Week. Ideas and Practices of Automation, 17.