[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions

From a paper by the Center for Technology and National Security Policy at the National Defense University:

“Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.

National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.”

So strong AI is on the American military’s radar, and at least some of those involved have a basic understanding that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.