Takeoff Speed: Simple Asymptotics in a Toy Model.


I’ve been having fun recently reading about “AI Risk”. There is lots of eloquent writing out there about this topic: I especially recommend Scott Alexander’s Superintelligence FAQ for those looking for a fun read. The subject has reached the public consciousness, with high-profile people like Stephen Hawking and Elon Musk speaking publicly about it. There is also an increasing amount of funding and research effort being devoted to understanding AI risk. See for example the Future of Humanity Institute at Oxford, the Future of Life Institute at MIT, and the Machine Intelligence Research Institute in Berkeley, among others. These groups seem to be doing lots of interesting research, which I am mostly ignorant of. In this post I just want to talk about a simple exercise in asymptotics.

First, Some Background.

A “superintelligent” AI is loosely defined to be an entity that is much better than we are at essentially any cognitive/learning/planning task. Perhaps, by analogy, a superintelligent AI is to human beings as human beings are to Bengal tigers, in terms of general intelligence. It shouldn’t be hard to convince yourself that if we were in the company of a superintelligence, then we would be very right to be worried: after all, it is intelligence that allows human beings to totally dominate the world and drive Bengal tigers to near extinction, despite the fact that tigers physiologically dominate humans in most other respects. This is the case even if the superintelligence doesn’t have the destruction of humanity as a goal per se (after all, we don’t have it out for tigers), and even if the superintelligence is just an unconscious but super-powerful optimization algorithm. I won’t rehash the arguments here (Scott does it better), but it essentially boils down to the fact that it is quite hard to anticipate what the results of optimizing an objective function will be, if the optimization is done over a sufficiently rich space of strategies. And if we get it wrong, and the optimization has some severely unpleasant side effects? It is tempting to suggest that at that point, we just unplug the computer and start over. The problem is that if we unplug the intelligence, it won’t do as well at optimizing its objective function compared to if it took steps to prevent us from unplugging it. So if its strategy space is rich enough that it is able to take steps to defend itself, it will.
Lots of the most interesting research in this field seems to be about how to align optimization objectives with our own desires, or simply how to write down objective functions that don’t induce the optimization algorithm to try and prevent us from unplugging it, while also not incentivizing the algorithm to unplug itself (the corrigibility problem).

Ok. It seems uncontroversial that a hypothetical superintelligence would be something we should take very seriously as a danger. But isn’t it premature to worry about this, given how far off it seems to be? We aren’t even that good at making product recommendations, let alone optimization algorithms so powerful that they might inadvertently destroy all of humanity. Even if superintelligence will ultimately be something to take very seriously, are we even in a position to productively think about it now, given how little we know about how such a thing might work at a technical level? This seems to be the position that Andrew Ng was taking, in his much-quoted statement that (paraphrasing) worrying about the dangers of superintelligence right now is like worrying about overpopulation on Mars. Not that it might not eventually be a serious concern, but that we will get a higher return investing our intellectual efforts right now on more immediate problems.

The standard counter to this is that superintelligence might always seem like it is well beyond our current capabilities—maybe centuries in the future—until, all of a sudden, it appears as the result of an uncontrollable chain reaction known as an “intelligence explosion”, or “singularity”. (As far as I can tell, very few people actually think that intelligence growth would exhibit an actual mathematical singularity—this seems instead to be a metaphor for exponential growth.) If this is what we expect, then now might very well be the time to worry about superintelligence. The first argument of this form was put forth by British mathematician I.J. Good (of Good–Turing frequency estimation!):

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Scott Alexander summarizes the same argument a bit more quantitatively. In this passage, he is imagining the starting point being a full-brain simulation of Einstein—except run on faster hardware, so that our simulated Einstein operates at a much faster clock speed than his historical namesake:

It might, like the historical Einstein, contemplate physics. Or it might contemplate an area very relevant to its own interests: artificial intelligence. In that case, instead of making a revolutionary physics breakthrough every few hours, it will make a revolutionary AI breakthrough every few hours. Each AI breakthrough it makes, it will have the opportunity to reprogram itself to take advantage of its discovery, becoming more intelligent, thus speeding up its breakthroughs further. The cycle will stop only when it reaches some physical limit – some technical challenge to further improvements that even an entity far smarter than Einstein cannot discover a way around.
To human programmers, such a cycle would look like a “critical mass”. Before the critical level, any AI advance delivers only modest benefits. But any tiny improvement that pushes an AI above the critical level would result in a feedback loop of inexorable self-improvement all the way up to some stratospheric limit of possible computing power.
This feedback loop would be exponential; relatively slow in the beginning, but blindingly fast as it approaches an asymptote. Consider the AI which starts off making forty breakthroughs per year – one every nine days. Now suppose it gains on average a 10% speed improvement with each breakthrough. It starts on January 1. Its first breakthrough comes January 10 or so. Its second comes a little faster, January 18. Its third is a little faster still, January 25. By the beginning of February, it’s sped up to producing one breakthrough every seven days, more or less. By the beginning of March, it’s making about one breakthrough every three days or so. But by March 20, it’s up to one breakthrough a day. By late on the night of March 29, it’s making a breakthrough every second.

As far as I can tell, this possibility of an exponentially-paced intelligence explosion is the main argument for folks devoting time to worrying about superintelligent AI now, even though current technology doesn’t give us anything even close. So in the rest of this post, I want to push a little bit on the claim that the feedback loop induced by a self-improving AI would lead to exponential growth, and see what assumptions underlie it.

A Toy Model for Rates of Self-Improvement

Let’s write down an extremely simple toy model for how quickly the intelligence of a self-improving system would grow, as a function of time. And I want to emphasize that the model I will propose is clearly a toy: it abstracts away everything that is interesting about the problem of designing an AI. But it should be sufficient to focus on a simple question of asymptotics, and the degree to which growth rates depend on the extent to which AI research exhibits diminishing marginal returns on investment. In the model, AI research accumulates with time: at time t, R(t) units of AI research have been conducted. Perhaps think of this as a quantification of the number of AI “breakthroughs” that have been made in Scott Alexander’s telling of the intelligence explosion argument. The intelligence of the system at time t, denoted I(t), will be some function of the accumulated research R(t). The model will make two assumptions:

  1. The rate at which research is conducted is directly proportional to the current intelligence of the system. We can think about this either as a discrete dynamics, or as a differential equation. In the discrete case, we have: R(t+1) = R(t) + I(t), and in the continuous case: dR(t)/dt = I(t).

  2. The relationship between the current intelligence of the system and the currently accumulated quantity of research is governed by some function f: I(t) = f(R(t)).

The function f can be thought of as capturing the marginal rate of return of additional research on the actual intelligence of an AI. For example, if we think AI research is something like pumping water from a well—a task for which doubling the work doubles the return—then we would model f as linear: f(R) = R. In this case, AI research does not exhibit any diminishing marginal returns: a unit of research gives us just as much benefit in terms of increased intelligence, no matter how much we already understand about intelligence. On the other hand, if we think that AI research should exhibit diminishing marginal returns—as many creative endeavors seem to—then we would model f as an increasing concave function. For example, we might let f(R) = R^{2/3}, or f(R) = R^{1/2}, or f(R) = R^{1/3}, etc. If we are really pessimistic about the difficulty of AI, we might even model f(R) = log R. In these cases, intelligence is still increasing in research effort, but the rate of increase as a function of research effort is diminishing, as we understand more and more about AI. Note however that the rate at which research is being conducted is increasing, which might still lead us to exponential growth in intelligence, if it increases fast enough.
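The discrete version of these dynamics is simple enough to simulate directly. Here is a minimal sketch; the helper name `simulate` and the initial condition R(0) = 1 are my own choices for illustration, not part of the model:

```python
def simulate(f, steps, r0=1.0):
    """Run the discrete dynamics R(t+1) = R(t) + I(t), with I(t) = f(R(t)).

    Returns the trajectory I(0), I(1), ..., I(steps).
    """
    r = r0
    trajectory = []
    for _ in range(steps + 1):
        i = f(r)           # intelligence is a function of accumulated research
        trajectory.append(i)
        r += i             # research accrues at a rate proportional to intelligence
    return trajectory

# With no diminishing returns, f(R) = R, research doubles every step:
print(simulate(lambda r: r, 5))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Plugging different choices of f into `simulate` reproduces the comparisons below.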

So how does our choice of f affect intelligence growth rates? First, let’s consider the case in which f(R) = R – the case of no diminishing marginal returns on research investment. Here is a plot of the growth over 1000 time steps in the discrete model:

Here, we see exponential growth in intelligence. (It isn’t hard to directly work out that in this case, taking I(0) = R(0) = 1, in the discrete model we have I(t) = 2^t, and in the continuous model we have I(t) = e^t.) And the plot illustrates the argument for worrying about AI risk now. Viewed at this scale, progress in AI appears to plod along at unimpressive levels before suddenly shooting up to an unimaginable level: in this case, a quantity that, written down as a decimal, would have more than 300 digits.
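The more-than-300-digits claim is easy to verify with exact integer arithmetic (a quick sketch, again assuming R(0) = 1):

```python
# With f(R) = R, the discrete model gives R(t+1) = R(t) + R(t) = 2 R(t),
# so I(t) = 2^t.  Count the decimal digits of I(1000):
r = 1
for _ in range(1000):
    r += r                 # one step of R(t+1) = R(t) + I(t)
print(len(str(r)))         # 302: the number of decimal digits of 2^1000
```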

It isn’t surprising that if we were to model severely diminishing returns – say f(R) = log R – this would not occur. Below, we plot what happens when f(R) = log R, with time taken out all the way to 1,000,000 rather than merely 1000 as in the above plot:

Intelligence growth is not very impressive here. At time 1,000,000 we haven’t even reached 17. If you wanted to reach (say) an intelligence level of 30, you’d have to wait an unimaginably long time. In this case, we definitely don’t need to worry about an “intelligence explosion”, and probably not even about ever reaching anything that could be called a superintelligence.
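This can be checked numerically. In the sketch below I take f(R) to be the natural logarithm and start at R(0) = e, since at R = 1 we would have log R = 0 and the dynamics would never get going; both choices are assumptions made for illustration.

```python
import math

# Severely diminishing returns: f(R) = log R, starting from R(0) = e.
r = math.e
for _ in range(1_000_000):
    r += math.log(r)       # R(t+1) = R(t) + log R(t)

# After a million steps, intelligence I = log R still hasn't reached 17.
print(math.log(r))
```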

But what about moderate (polynomial) levels of diminishing marginal returns? What if we take f(R) = R^{1/3}? Let’s see:

Ok – now we are making more progress, but even though intelligence now has a polynomial relationship to research (and research speed is increasing, in a chain reaction!), the rate of growth in intelligence is still decreasing. What about f(R) = R^{1/2}? Let’s see:

At least now the rate of growth doesn’t seem to be decreasing: but it is growing only linearly with time. Hardly an explosion. Maybe we just need to get more aggressive in our modeling. What if f(R) = R^{2/3}?

Ok, now we’ve got something! At least now the rate of intelligence gains is increasing with time. But it is increasing more slowly than a quadratic function – a far cry from the exponential growth that characterizes an intelligence explosion.
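We can measure that growth rate empirically. The sketch below assumes the concave choice f(R) = R^{2/3} with R(0) = 1 (my reading of the case being plotted), and estimates the growth exponent of I(t) by comparing t = 500 to t = 1000; in the continuous model the solution is R(t) = ((t+3)/3)^3, so I(t) grows roughly like (t/3)^2.

```python
import math

def intelligence_at(f, steps, r0=1.0):
    # discrete dynamics R(t+1) = R(t) + f(R(t)); returns I(steps) = f(R(steps))
    r = r0
    for _ in range(steps):
        r += f(r)
    return f(r)

f = lambda r: r ** (2.0 / 3.0)
i500 = intelligence_at(f, 500)
i1000 = intelligence_at(f, 1000)

# Growth exponent: if I(t) ~ t^a, then a = log(I(1000)/I(500)) / log 2.
a = math.log(i1000 / i500) / math.log(2)
print(a)   # close to 2: polynomial growth, nowhere near exponential
```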

Let’s take a break from all of this plotting. The model we wrote down is simple enough that we can just go and solve the differential equation. Suppose we have f(R) = R^{1-p} for some p > 0. Then, taking R(0) = 1, the differential equation solves to give us: I(t) = (pt + 1)^{(1-p)/p}. What this means is that for any positive value of p, in this model, intelligence grows at only a polynomial rate. The only way this model gives us exponential growth is if we take p = 0, and insist that f(R) = R – i.e. that the intelligence design problem does not exhibit any diminishing marginal returns at all.
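The differential equation dR/dt = R^{1-p} with R(0) = 1 solves to R(t) = (pt + 1)^{1/p}, hence I(t) = (pt + 1)^{(1-p)/p}. The sketch below double-checks that closed form against a brute-force Euler integration of the same equation:

```python
def closed_form_I(t, p):
    # I(t) = (p*t + 1)^((1-p)/p), the claimed solution with R(0) = 1
    return (p * t + 1) ** ((1 - p) / p)

def numeric_I(t, p, dt=1e-4):
    # Euler integration of dR/dt = R^(1-p) from R(0) = 1; returns I = R^(1-p)
    r = 1.0
    for _ in range(int(t / dt)):
        r += dt * r ** (1 - p)
    return r ** (1 - p)

p = 0.5
print(closed_form_I(10, p))   # (0.5*10 + 1)^1 = 6.0
print(numeric_I(10, p))       # agrees, up to Euler discretization error
```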


So what do we learn from this exercise? Of course one can quibble with the details of the model, and one can believe different things about what form for the function f best approximates reality. But for me, this model helps crystallize the extent to which the “exponential intelligence explosion” story crucially relies on intelligence design being one of those rare tasks that doesn’t exhibit any diminishing marginal returns on effort at all. This seems unlikely to me, and counter to experience.
Of course, there are technological processes out there that do appear to exhibit exponential growth, at least for a little while. Moore’s law is the most salient example. But it is important to remember that even exponential growth for a little while need not seem explosive at human time scales. Doubling every day corresponds to exponential growth, but so does increasing by 1% a year. To paraphrase Ed Felten: our retirement plans extend beyond depositing a few dollars into a savings account, and waiting for the inevitable “wealth explosion” that will make us unimaginably rich.


I don’t claim that anything in this post is either novel or surprising to folks who spend their time thinking about this sort of thing. There is at least one paper that writes down a model including diminishing marginal returns, which yields a linear rate of intelligence growth.

It is also interesting to note that in the model we wrote down, exponential growth is really a knife-edge phenomenon. We already observed that we get exponential growth if f(R) = R, but not if f(R) = R^{1-p} for any p > 0. But what if we have f(R) = R^{1+p} for some p > 0? In that case, we don’t get exponential growth either! As Hadi Elzayn pointed out to me, Osgood’s Test tells us that in this case, the function contains an actual mathematical singularity – it approaches an infinite value in finite time.
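Concretely, Osgood’s criterion says that dR/dt = f(R) reaches infinity in finite time exactly when the integral of dR/f(R) from R(0) to infinity converges, and that integral is then the blowup time. For the super-linear case f(R) = R^{1+p} it evaluates in closed form (a sketch, assuming R(0) = 1 unless stated):

```python
def blowup_time(p, r0=1.0):
    # integral of dR / R^(1+p) from r0 to infinity = r0^(-p) / p,
    # which is finite for any p > 0 -- a genuine finite-time singularity
    return r0 ** (-p) / p

# With p = 1/2, the continuous dynamics dR/dt = R^(3/2) starting at R(0) = 1
# diverge to infinity at time t = 2:
print(blowup_time(0.5))   # 2.0
```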
