Some Thoughts on Singularity Strategies

Followup to: Outline of possible Singularity scenarios (that are not completely disastrous)

Given that the Singularity and being strategic are popular topics around here, it’s surprising there hasn’t been more discussion on how to answer the question “In what direction should we nudge the future, to maximize the chances and impact of a positive Singularity?” (“We” meaning the SIAI/FHI/LW/Singularitarian community.)

(Is this an appropriate way to frame the question? It’s how I would instinctively frame it, but perhaps we ought to discuss alternatives first. For example, one might be “What quest should we embark upon to save the world?”, which seems to be the frame that Eliezer instinctively prefers. But I worry that thinking in terms of a “quest” favors the part of the brain that is built mainly for signaling instead of planning. Another alternative would be “What strategy maximizes expected utility?”, but that seems too technical for human minds to grasp on an intuitive level, and we don’t have the tools to answer the question formally.)

Let’s start by assuming that humanity will want to build at least one Friendly superintelligence sooner or later, either from scratch or by improving human minds, because without such an entity, it’s likely that eventually either a superintelligent, non-Friendly entity will arise, or civilization will collapse. The current state of affairs, in which there is no intelligence greater than baseline-human level, seems unlikely to be stable over the billions of years of the universe’s remaining life. (Nor does that seem particularly desirable even if it is possible.)

Whether to push for (or personally head towards) de novo AI directly, or IA/uploading first, depends heavily on the expected difficulty (or more generally, on one’s subjective probability distribution over the difficulty) of building a Friendly AI from scratch, which in turn involves a great deal of logical and philosophical uncertainty. (For example, if it’s known that it actually takes a minimum of 10 people with IQ 200 to build a Friendly AI, then there is clearly little point in pushing for de novo AI first.)
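
To make that dependence concrete, here is a minimal sketch (in Python, with entirely invented numbers) of how the choice could be framed as an expected-value comparison under a subjective distribution over FAI difficulty. Every probability below is a placeholder for illustration, not an estimate I’m prepared to defend:

```python
# Toy expected-value comparison between two Singularity strategies.
# Every number here is a placeholder subjective probability, not an estimate.

# Subjective distribution over how hard de novo FAI turns out to be.
fai_difficulty_dist = {
    "within current human ability": 0.1,
    "needs modest enhancement": 0.3,
    "needs major enhancement": 0.6,
}

# P(positive Singularity | difficulty, strategy) -- pure guesses for illustration.
p_success = {
    "de novo FAI first": {
        "within current human ability": 0.5,
        "needs modest enhancement": 0.1,
        "needs major enhancement": 0.01,
    },
    "IA/uploading first": {
        "within current human ability": 0.4,
        "needs modest enhancement": 0.3,
        "needs major enhancement": 0.2,
    },
}

def expected_success(strategy):
    """Expected probability of a positive Singularity under a given strategy."""
    return sum(p * p_success[strategy][difficulty]
               for difficulty, p in fai_difficulty_dist.items())

for strategy in p_success:
    print(f"{strategy}: {expected_success(strategy):.3f}")
```

The only point of the sketch is that the ranking of strategies can flip depending on how much probability mass one’s difficulty distribution puts on the hard end.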

Besides the expected difficulty of building FAI from scratch, another factor that weighs heavily in the decision is the risk of accidentally building an unFriendly AI (or contributing to others building UFAIs) while trying to build FAI. Taking this into account also involves lots of logical and philosophical uncertainty. (But it seems safe to assume that this risk, if plotted against the intelligence of the AI builders, forms an inverted U shape.)
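
One way to see why an inverted-U shape is plausible (this is my own toy decomposition, not an established model): the risk is roughly the chance of building a working superintelligence at all, which presumably increases with the builders’ intelligence, times the chance of getting Friendliness wrong given that one is built, which presumably decreases; the product of such an increasing curve and a decreasing curve is hump-shaped. A throwaway sketch with made-up sigmoid parameters:

```python
import math

def p_build(iq):
    """Toy, increasing: P(builders manage to produce a working superintelligence at all)."""
    return 1 / (1 + math.exp(-(iq - 160) / 10))

def p_misalign(iq):
    """Toy, decreasing: P(the Friendliness attempt fails, given that an AI is built)."""
    return 1 / (1 + math.exp((iq - 200) / 10))

def p_ufai(iq):
    """P(accidentally producing a UFAI): build something powerful AND get Friendliness wrong."""
    return p_build(iq) * p_misalign(iq)

for iq in range(120, 241, 20):
    print(iq, round(p_ufai(iq), 3))  # rises, peaks in the middle, then falls
```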

Since we don’t have good formal tools for dealing with logical and philosophical uncertainty, it seems hard to do better than to make some incremental improvements over gut instinct. One idea is to train our intuitions to be more accurate, for example by learning about the history of AI and philosophy, or by learning known cognitive biases and doing debiasing exercises. But this seems insufficient to bridge the widely differing intuitions people have on these questions.

My own feeling is that the chance of success of building FAI, assuming the current human intelligence distribution, is low (even given unlimited financial resources), while the risk of unintentionally building or contributing to UFAI is high. I think I can explicate part of my intuition this way: there must be a minimum level of intelligence below which the chance of successfully building an FAI is negligible. We humans seem at best just barely smart enough to build a superintelligent UFAI. Wouldn’t it be surprising if the intelligence thresholds for building UFAI and FAI turned out to be the same?
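
As a purely illustrative toy model (both the thresholds and the normal-distribution assumption are invented for the sake of the example), note how sensitive the pool of capable researchers is to even a modest gap between the two thresholds:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # rough toy model of the population IQ distribution

# Invented thresholds, purely for illustration: suppose a superintelligent UFAI
# is just barely within reach at IQ ~175, while FAI requires IQ ~195.
ufai_threshold, fai_threshold = 175, 195

p_ufai_capable = 1 - iq.cdf(ufai_threshold)
p_fai_capable = 1 - iq.cdf(fai_threshold)

print(f"fraction above the UFAI threshold: {p_ufai_capable:.2e}")
print(f"fraction above the FAI threshold:  {p_fai_capable:.2e}")
print(f"ratio: {p_ufai_capable / p_fai_capable:.0f}x")
```

If the FAI threshold sits even slightly above the UFAI threshold, and we are only barely at the latter, the number of people who clear the former is smaller by orders of magnitude.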

Given that there are known ways to significantly increase the number of geniuses (i.e., von Neumann level, or IQ 180 and greater), by cloning or embryo selection, an obvious alternative Singularity strategy is to invest directly or indirectly in these technologies, and to try to mitigate existential risks (for example by attempting to delay all significant AI efforts) until they mature and bear fruit (in the form of adult genius-level FAI researchers). Other strategies in the same vein are to pursue cognitive/pharmaceutical/neurosurgical approaches to increasing the intelligence of existing humans, or to push for brain emulation first, followed by intelligence enhancement of human minds in software form.

Social/PR issues aside, these alternatives make more intuitive sense to me. The chances of success seem higher, and if disaster does occur as a result of the intelligence amplification effort, we’re more likely to be left with a future that is at least partly influenced by human values. (Of course, in the final analysis, we also have to consider social/PR problems, but all Singularity approaches seem to have similar problems, which can be partly ameliorated by the common sub-strategy of “raising the general sanity level”.)

I’m curious what others think. What does your intuition say about these issues? Are there good arguments in favor of any particular strategy that I’ve missed? Is there another strategy that might be better than the ones mentioned above?