Misconceptions about continuous takeoff

There has been considerable debate over whether development in AI will experience a discontinuity, or whether it will follow a more continuous growth curve. Given the lack of consensus and the confusing, diverse terminology, it is natural to hypothesize that much of the debate is due to simple misunderstandings. Here, I seek to dissolve some misconceptions about the continuous perspective, based mostly on how I have seen people misinterpret it in my own experience.

First, we need to know what I even mean by continuous takeoff. When I say it, I mean a scenario where the development of competent, powerful AI follows a trajectory that is roughly in line with what we would have expected by extrapolating from past progress. That is, there is no point at which a single project lunges forward in development and creates an AI that is much more competent than any other project before it. This leads to the first clarification:

Continuous doesn’t necessarily mean slow

The position I am calling “continuous” has been called a number of different names over the years. Many refer to it as “slow” or “soft.” I think continuous is preferable to these terms because it focuses attention on the strategically relevant part of the question. It seems to matter less what the actual clock-time is from AGI to superintelligence, and more whether there will be single projects that break previous technological trends and gain capabilities that are highly unusual relative to the past.

Moreover, there are examples of rapid technological developments that I consider to be continuous. As an example, consider GANs. In 2014, GANs were used to generate low quality black-and-white photos of human faces. By late 2018, they were used to create nearly-photorealistic images of human faces.

Yet, at no point during this development did any project leap forward by a huge margin. Instead, each paper built upon the last by making minor improvements and increasing the compute involved. Since these minor improvements nonetheless happened rapidly, GAN development was fast relative to the lifetimes of humans.

Extrapolating from this progress, we can expect that GAN video generation will follow a similar trajectory, starting with simple low-resolution clips and gradually transitioning to the creation of HD videos. What would be unusual is if someone right now, in late 2019, produced HD videos using GANs.
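To make “in line with extrapolation” a bit more concrete, here is a minimal toy sketch of the kind of check I have in mind (my own illustration; the scores, growth rate, and threshold are made up rather than taken from any real benchmark): fit a trend to past capability numbers and only call a new result a candidate discontinuity if it lands far above what the trend predicts.

```python
import numpy as np

def looks_discontinuous(years, scores, new_year, new_score, tolerance=2.0):
    """Toy check: does a new result far exceed a simple exponential
    trend fitted to past (hypothetical) capability scores?"""
    # Fit an exponential trend via a linear fit in log-space.
    slope, intercept = np.polyfit(years, np.log(scores), 1)
    predicted = np.exp(intercept + slope * new_year)
    # Only flag results that land far above the extrapolated trend.
    return new_score > tolerance * predicted

# Hypothetical yearly scores growing roughly 50% per year.
years = np.array([2014, 2015, 2016, 2017, 2018])
scores = np.array([1.0, 1.6, 2.3, 3.4, 5.1])

print(looks_discontinuous(years, scores, 2019, 7.5))   # False: on trend
print(looks_discontinuous(years, scores, 2019, 40.0))  # True: a lunge forward
```

On this picture, low-resolution GAN video in 2019 would land roughly on trend, whereas HD video today would be the kind of surprise such a check is meant to flag.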

Large power differentials can still happen in a continuous takeoff

Power differentials between nations, communities, and people are not unusual in the course of history. Therefore, the existence of a deep power differential caused by AI would not automatically imply that a discontinuity has occurred.

In a continuous takeoff, a single nation or corporation might still pull ahead in AI development by a big margin and use this to their strategic advantage. To see how, consider how technology in the industrial revolution was used by western European nations to conquer much of the world.

Nations rich enough to manufacture rifles maintained a large strategic advantage over those unable to. Despite this, the rifle did not experience any surprising developments which catapulted it to extreme usefulness, as far as I can tell. Instead, sharpshooting became gradually more accurate, with each decade producing slightly better rifles.

See also: Soft takeoff can still lead to decisive strategic advantage

Continuous takeoff doesn’t require believing that ems will come first

This misconception seems to mostly be a historical remnant of the Hanson-Yudkowsky AI-Foom debate. In the old days, there weren’t many people actively criticizing foom. So, if you disagreed with foom, it was probably because you were sympathetic to Hanson’s views.

There are now many people who disagree with foom who don’t take Hanson’s side. Paul Christiano and AI Impacts appear to be somewhat at the forefront of this new view.

Recursive self-improvement is compatible with continuous takeoff

In my experience, recursive self-improvement is one of the main reasons cited for why we should expect a discontinuity. Assessing the validity of this argument is far from simple, but needless to say: folks who subscribe to continuous takeoff aren’t simply ignoring it.

Consider I.J. Good’s initial elaboration of recursive self-improvement:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind.

The obvious interpretation from the continuous perspective is that by the time we have an ultraintelligent machine, we’ll already have a not-quite-ultraintelligent machine. Therefore, the advantage that an ultraintelligent machine will have over the collective of humanity + machines will be modest.

It is sometimes argued that even if this advantage is modest, the growth curves will be exponential, and therefore a slight advantage right now will compound to become a large advantage over a long enough period of time. However, this argument by itself is not an argument against a continuous takeoff.

Exponential growth curves are common for macroeconomic growth, and therefore this argument should apply equally to any system which experiences a positive feedback loop. Furthermore, large strategic advantages do not automatically constitute a discontinuity, since they can still happen even if no project surges forward suddenly.
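To see why compounding alone doesn’t imply a discontinuity, here is a small worked illustration (my own notation, not drawn from any source): suppose two projects grow at the same exponential rate $r$, with project A starting out a factor $(1+\epsilon)$ ahead,

$$C_A(t) = (1+\epsilon)\,C_0 e^{rt}, \qquad C_B(t) = C_0 e^{rt}.$$

The absolute gap $C_A(t) - C_B(t) = \epsilon\, C_0 e^{rt}$ grows without bound, so a slight edge really can compound into a decisive advantage. Yet both curves stay exactly on their prior trends, and at no point does either project jump ahead of what extrapolation would have predicted; the takeoff remains continuous in the sense used here.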

Continuous takeoff is relevant to AI alignment

The misconception here is something along the lines of, “Well, we might not be able to agree about AI takeoff, but at least we can agree that AI safety is extremely valuable in either case.” Unfortunately, the usefulness of many approaches to AI alignment appears to hinge quite a bit on continuous takeoff.

Consider the question of whether an AGI would defect during testing. The argument goes that an AI will have an instrumental reason to pretend to be aligned while weak, and then enter a treacherous turn when it is safe from modification. If this phenomenon ever occurs, there are two distinct approaches we can take to minimize potential harm.

First, we could apply extreme caution and try to ensure that no system will ever lie about its intentions. Second, we could more-or-less deal with systems which defect as they arise. For instance, during deployment we could notice that some systems are optimizing for something different than what we intended during training, and shut them down.
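As a minimal sketch of what the second approach could look like in toy form (everything here is hypothetical: the metric names, the threshold, and the assumption that “what a system is optimizing” can be summarized by a few numbers), we could compare a behaviour profile gathered in training against one gathered in deployment and shut down systems that have drifted too far:

```python
# Toy sketch of "dealing with defection as it arises".
# The profiles and threshold are invented for illustration; in practice
# the hard part is finding signals that reveal what a system is optimizing.

DIVERGENCE_THRESHOLD = 0.2  # arbitrary illustrative cutoff

def behaviour_divergence(training_profile, deployment_profile):
    """Crude distance between behaviour measured during training
    and behaviour measured during deployment."""
    shared = training_profile.keys() & deployment_profile.keys()
    return max(abs(training_profile[k] - deployment_profile[k]) for k in shared)

def monitor(systems):
    """Shut down any deployed system whose behaviour has drifted
    far from what we saw during training."""
    for system in systems:
        drift = behaviour_divergence(system["training_profile"],
                                     system["deployment_profile"])
        if drift > DIVERGENCE_THRESHOLD:
            print(f"Shutting down {system['name']} (drift={drift:.2f})")
        else:
            print(f"{system['name']} looks consistent (drift={drift:.2f})")

monitor([
    {"name": "model-A",
     "training_profile": {"helpfulness": 0.9, "resource_use": 0.1},
     "deployment_profile": {"helpfulness": 0.88, "resource_use": 0.12}},
    {"name": "model-B",
     "training_profile": {"helpfulness": 0.9, "resource_use": 0.1},
     "deployment_profile": {"helpfulness": 0.5, "resource_use": 0.6}},
])
```

Whether anything like this is viable depends on the newer systems not vastly outstripping the monitors, which is where takeoff speed enters.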

The first approach is preferred if you think that there will be a rapid capability gain relative to the rest of civilization. If we deploy an AI and it suddenly catapults to exceptional competence, then we don’t really have a choice other than to get its values right the first time.

On the other hand, under a continuous takeoff, the second approach seems more promising. No individual system will by itself carry more power than the sum of projects before it. Instead, each AI will only be slightly better than the ones that came before it, including any AIs we are using to monitor the newer ones. Therefore, to the extent that the second approach carries a risk, it will probably look less like a sudden world domination and more like a bad product rollout, in line with, say, the release of Windows Vista.

Now, obviously there are important differences between current technological products and future AGIs. Still, the general strategy of “dealing with things as they come up” is much more viable under a continuous takeoff. Therefore, if a continuous takeoff is more likely, we should focus our attention on questions which fundamentally can’t be solved as they come up. This is a departure from the way that many have framed AI alignment in the past.