What Evidence Is AlphaGo Zero Re AGI Complexity?

Eliezer Yudkowsky wrote a post on Facebook on Oct 17, where I replied at the time. Yesterday he reposted that here (link), minus my responses. So I’ve composed the following response to put here:

I have agreed that an AI-based economy could grow faster than does our economy today. The issue is how fast the abilities of one AI system might plausibly grow, relative to the abilities of the entire rest of the world at that time, across a range of tasks roughly as broad as the world economy. Could one small system really “foom” to beat the whole rest of the world?

As many have noted, while AI has often made impressive and rapid progress in specific narrow domains, it is much less clear how fast we are progressing toward human-level AGI systems with scopes of expertise as broad as those of the world economy. Averaged over all domains, progress has been slow. And at past rates of progress, I have estimated that it might take centuries.

Over the history of computer science, we have developed many general tools with simple architectures and built from other general tools, tools that allow superhuman performance on many specific tasks scattered across a wide range of problem domains. For example, we have superhuman ways to sort lists, and linear regression allows superhuman prediction from simple general tools like matrix inversion.
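To make that example concrete, here is a minimal sketch (my illustration, not from the original post) of linear regression built from nothing but the simple general tool of matrix inversion: fitting a line y ≈ a + b·x by solving the normal equations (XᵀX)β = Xᵀy, with the 2×2 inverse written out explicitly.

```python
# Linear regression as a composition of "simple general tools":
# ordinary least squares for y ≈ a + b*x, solved in closed form
# via the normal equations using an explicit 2x2 matrix inverse.

def fit_line(xs, ys):
    n = len(xs)
    sx = sum(xs)                                  # sum of x
    sxx = sum(x * x for x in xs)                  # sum of x^2
    sy = sum(ys)                                  # sum of y
    sxy = sum(x * y for x, y in zip(xs, ys))      # sum of x*y
    # X^T X = [[n, sx], [sx, sxx]];  X^T y = [sy, sxy]
    det = n * sxx - sx * sx                       # determinant of X^T X
    a = (sxx * sy - sx * sxy) / det               # intercept
    b = (n * sxy - sx * sy) / det                 # slope
    return a, b

# Points lying exactly on y = 2x + 1 recover a = 1, b = 2:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The point of the sketch is the composition: a very simple architecture (one matrix inverse applied to summary statistics) yields predictions that outperform unaided humans on this narrow task, yet nothing about the tool itself generalizes toward broad competence.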

Yet the existence of a limited number of such tools has so far been far from sufficient to enable anything remotely close to human-level AGI. AlphaGo Zero is (or is built from) a new tool in this family, and its developers deserve our praise and gratitude. And we can expect more such tools to be found in the future. But I am skeptical that it is the last such tool we will need, or even remotely close to the last such tool.

For specific simple tools with simple architectures, architecture can matter a lot. But our robust experience with software has been that even when we have access to many simple and powerful tools, we solve most problems via complex combinations of simple tools. Combinations so complex, in fact, that our main issue is usually managing the complexity, rather than including the right few tools. In those complex systems, architecture matters a lot less than does lots of complex detail. That is what I meant by suggesting that architecture isn’t the key to AGI.

You might claim that once we have enough good simple tools, complexity will no longer be required. With enough simple tools (and some data to crunch), a few simple and relatively obvious combinations of those tools will be sufficient to perform almost all tasks in the world economy at a human level. And thus the first team to find the last simple general tool needed might “foom” via having an enormous advantage over the entire rest of the world put together. At least if that one last tool were powerful enough. I disagree with this claim, but I agree that neither view can be easily and clearly proven wrong.

Even so, I don’t see how finding one more simple general tool can be much evidence one way or another. I never meant to imply that we had found all the simple general tools we would ever find. I instead suggest that simple general tools just won’t be enough, and thus finding the “last” tool required also won’t let its team foom.

The best evidence regarding the need for complexity in strong broad systems is the actual complexity observed in such systems. The human brain is arguably such a system, and when we have artificial systems of this sort they will also offer more evidence. Until then, one might try to collect evidence about the distribution of complexity across our strongest broadest systems, even when such systems are far below the AGI level. But pointing out that one particular capable system happens to use mainly one simple tool, well that by itself can’t offer much evidence one way or another.