Distinguishing definitions of takeoff

I find discussions about AI takeoff to be very confusing. Often, people will argue for “slow takeoff” or “fast takeoff”, and when I ask them to operationalize what those terms mean, they end up saying something quite different from what I thought those terms meant.

To help alleviate this problem, I aim to compile the definitions of AI takeoff that I’m currently aware of, with an emphasis on definitions that have clear specifications. I will continue updating the post as long as I think it serves as a useful reference for others.

In this post, an AI takeoff can be roughly construed as “the dynamics of the world associated with the development of powerful artificial intelligence.” These definitions characterize different ways that the world can evolve as transformative AI is developed.

Foom/Hard takeoff

The traditional hard takeoff position, or “Foom” position (these appear to be equivalent terms), was characterized in this post from Eliezer Yudkowsky. It contrasts with Hanson’s takeoff scenario by emphasizing local dynamics: rather than a population of artificial intelligences coming into existence, there would be a single intelligence that quickly reaches a level of competence that outstrips the world’s ability to control it. The proposed mechanism driving such a dynamic is recursive self-improvement, though Yudkowsky later suggested that this wasn’t necessary.

Yudkowsky defended the ability of recursive self-improvement to induce a hard takeoff in Intelligence Explosion Microeconomics, and argued against Robin Hanson in the AI Foom debates. Watch this video to see the live debate.

Given the word “hard” in this notion of takeoff, a “soft” takeoff could simply be defined as the negation of a hard takeoff.

Hansonian “slow” takeoff

Robin Hanson objected to hard takeoff by predicting that growth in AI capabilities would not be extremely uneven between projects. In other words, there is unlikely to be one AI project, or even a small set of AI projects, that produces a system which outstrips the abilities of the rest of the world. While he rejects Yudkowsky’s argument, it is inaccurate to say that Robin Hanson expected growth in AI capabilities to be slow.

In Economic Growth Given Machine Intelligence, Hanson argues that AI-induced growth could cause GDP to double on the timescale of months. Very high economic growth would mark a radical transition to a faster mode of technological progress and capabilities, something that Hanson argues is entirely precedented in human history.

The technology that Hanson envisions will induce fast economic growth is whole brain emulation, which he wrote a book about. In general, Hanson rejects the framework in which AGI is seen as an invention that occurs at a particular moment in time: instead, AI should be viewed as an input to the economy (like electricity, though the considerations may be different).

Bostromian takeoffs

Nick Bostrom appeared to set aside much of the terminology from the AI Foom debate in favor of his own. In Superintelligence he characterizes three types of AI capability growth modes, defined by the clock-time (real physical time) from when a system is roughly human-level to when it is strongly superintelligent, defined as “a level of intelligence vastly greater than contemporary humanity’s combined intellectual wherewithal.”

Some have objected to Bostrom’s use of clock-time to define takeoff, instead arguing that the work required to align systems is a better metric (though one that is harder to measure).

Slow

A slow takeoff is one that occurs over a timescale of decades or centuries. Bostrom predicted that this timescale would allow institutions, such as governments, to react to new AI developments. It would also allow incrementally more powerful technologies to be tested without the existential risks that testing might otherwise pose.

Fast

A fast takeoff is one that occurs over a timescale of minutes, hours, or days. Given such a short time to react, Bostrom believes that the local dynamics of the takeoff become relevant, as was the case in Yudkowsky’s Foom scenario.

Moderate

A moderate takeoff is situated between slow and fast, and occurs on the timescale of months or years.

Continuous takeoff

Continuous takeoff was defined, and partially defended, in my post. Its meaning primarily derives from Katja Grace’s post on discontinuous progress around the development of AGI. In that post, Grace characterizes discontinuities:

We say a technological discontinuity has occurred when a particular technological advance pushes some progress metric substantially above what would be expected based on extrapolating past progress. We measure the size of a discontinuity in terms of how many years of past progress would have been needed to produce the same improvement. We use judgment to decide how to extrapolate past progress.
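To make that measurement concrete, here is a minimal sketch of the calculation, assuming a simple linear extrapolation of the historical trend. Grace’s post uses judgment to choose the extrapolation, and the function name and toy data below are my own illustration rather than her methodology.

```python
def discontinuity_in_years(years, values, new_year, new_value):
    """How many years of past progress the new data point represents."""
    # Annual rate of progress estimated from the historical trend.
    annual_rate = (values[-1] - values[0]) / (years[-1] - years[0])
    # The value we would have expected at new_year by extrapolating that trend.
    expected = values[-1] + annual_rate * (new_year - years[-1])
    # Size of the jump above trend, expressed in years of past progress.
    return (new_value - expected) / annual_rate

# Toy example: a metric that improved by 1 unit/year suddenly lands 12 units above trend.
print(discontinuity_in_years([2000, 2010], [0, 10], 2011, 23))  # -> 12.0
```

In this toy example the advance is worth twelve years of past progress, which is how Grace’s post scores the size of historical discontinuities.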

In my post, I extrapolate this concept and invert it, using terminology that I saw Rohin use in this Alignment Newsletter edition, and define continuous takeoff as

A scenario where the development of competent, powerful AI follows a trajectory that is roughly in line with what we would have expected by extrapolating from past progress.

Gradual/incremental takeoff?

Some people objected to my use of the word continuous, as they found the words gradual or incremental to be more descriptive and mathematically accurate. After all, a function like the one below is continuous, but not gradual.
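(The original post illustrated this point with a plot; the steep logistic curve below is my own stand-in example, not necessarily the function from that figure.)

$$f(x) = \frac{1}{1 + e^{-100(x - 5)}}$$

This function has no jumps anywhere, yet nearly all of its increase happens in a narrow window around x = 5, so it is continuous without being gradual or incremental.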

Additionally, if you accept Hanson’s thesis that history can be seen as a series of economic growth modes, each faster than the last, then continuous takeoff as plainly defined is in trouble. Technological progress from 1800 to 1900 was much faster than technological progress from 1700 to 1800, so “extrapolating from past progress” would have provided an incorrect estimate of progress to anyone who did not foresee the Industrial Revolution. In general, extrapolating from past progress is hard because it depends on the reference class you use to forecast.

Paul slow takeoff

Paul Christiano argues that we should characterize takeoff in terms of economic growth rates (similar to Hanson) but uses a definition that emphasizes how quickly the economy transitions into a period of higher growth. He defines slow takeoff as

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

and defines fast takeoff as the negation of that statement. Note that this definition leaves open a third possibility: you could believe that world output will never double during a 1 year interval, a position I refer to as “no takeoff”, which I explain next.
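To see what this definition tracks, here is a toy check of the criterion against a series of annual world-output figures. The function names and the discrete annual sampling are my own simplifications (Christiano’s statement concerns calendar-time intervals generally), so treat it as a sketch rather than his operationalization.

```python
def first_doubling_end(output, window):
    """First index at which output has doubled over the past `window` years, or None."""
    for t in range(window, len(output)):
        if output[t] >= 2 * output[t - window]:
            return t
    return None

def is_slow_takeoff(output):
    """Slow takeoff: some complete 4-year doubling finishes before the first 1-year doubling."""
    four_year = first_doubling_end(output, 4)
    one_year = first_doubling_end(output, 1)
    if one_year is None:
        return None  # output never doubles within a single year: "no takeoff"
    return four_year is not None and four_year < one_year

# Toy series: ~19% annual growth (doubling about every 4 years), then a sudden 120% year.
gdp = [100 * 1.19 ** t for t in range(12)]
gdp.append(gdp[-1] * 2.2)
print(is_slow_takeoff(gdp))  # -> True: a 4-year doubling completes before any 1-year doubling
```

In the toy series, output doubles over a four-year window well before the first single-year doubling, so the check returns True; a fast takeoff, on this reading, is a series in which the first single-year doubling arrives without any complete slower doubling preceding it.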

Paul’s outline of slow takeoff shares some of its meaning with continuous takeoff, because under a slow transition to a higher growth mode, change won’t be sudden.

No takeoff

“No takeoff” is essentially my term for the belief that world economic growth rates won’t accelerate to a very high level (perhaps a >30% real GDP growth rate in one year) following the development of AI. William MacAskill is a notable skeptic of AI takeoff. I have created this Metaculus question to operationalize the thesis.

The Effective Altruism Foundation wrote this post suggesting that peak economic growth rates may lie in the past. On the outside view, this position may be reasonable: economic growth rates have slowed since the 1960s despite the rise of personal computers and the internet, technologies that we might naively have predicted, ahead of time, would be transformative.

This position should not be confused with the idea that humanity will never develop superintelligent computers, though that scenario is compatible with no takeoff.

Drexler’s takeoff

Eric Drexler argues in Comprehensive AI Services (CAIS) that future AI will be modular, meaning that there is unlikely to be a single system that can perform a diverse set of tasks all at once before there are individual systems that can perform those individual tasks more competently than the single system can. This idea shares groundwork with Hanson’s objection to a local takeoff. The reverse of this scenario is what Hanson calls “lumpy AI”, where single agentic systems outcompete a set of services.

Drexler uses the CAIS model to argue against a binary characterization of self-improvement. Just as technology already feeds into itself, so that the world can already be seen as “recursively self-improving”, future AI research could feed into itself as recursive technological improvement, without any necessary focus on single systems improving themselves.

In other words, rather than viewing AIs as either self-improving or not, self-improvement can be seen as a continuum, with “the entire world works to improve a system” on one end and “a single local system improves only itself, with outside forces providing minimal benefit to growth in capabilities” on the other.

Baumann’s soft takeoff

In this post, Tobias Baumann argues that we should operationalize soft takeoff in terms of how quickly the fraction of global economic activity attributable to autonomous AI systems rises. “Time” here is not necessarily clock-time, as it was in Bostrom’s takeoff. Time can also refer to economic time, a measure of time that adjusts for the rate of economic growth, or political time, a measure that adjusts for the rate of social change.
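One way to make “economic time” precise (this formalization is mine, not necessarily Baumann’s) is to count elapsed time in doublings of world output:

$$\tau(t_0, t_1) = \int_{t_0}^{t_1} \frac{g(t)}{\ln 2}\, dt$$

where g(t) is the instantaneous growth rate of gross world product. Under this measure, a decade of 2% growth and a year of 20% growth contribute the same amount of economic time, so a transition that looks abrupt in clock-time can look gradual in economic time.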

He explains that this operationalization avoids the pitfalls of definitions that rely on the moment at which AI reaches a threshold such as “human-level” or “superintelligent.” He argues that AI is likely to surpass human abilities in some domains before others, rather than surpassing us in all ways at once.

Robin Hanson appears to agree with a similar measure for AI progress.

Less common definitions

Event Horizon/Epistemic Horizon

In 2007, Yudkowsky outlined three schools of thought about the singularity, which was perhaps the state of the art for takeoff discussions at the time. Among them he included his own scenario (Foom), the Event Horizon, and Accelerating Change.

The Event Horizon hypothesis could be seen as an extrapolation of Vernor Vinge’s definition of the technological singularity. It is defined as a point in time after which current models of future progress break down, which is essentially the opposite of the definition of continuous takeoff.

An epistemic horizon would be relevant for decision making because it would imply that AI progress could come suddenly, without warning. If this were true, then the safety guarantees assumed under a continuous takeoff scenario would fail. Furthermore, even if we could predict rapid change ahead of time, social pressures might lead people to fail to act until it’s too late, a position argued for in There’s No Fire Alarm for Artificial General Intelligence.

(Note: I see a lot of people interpreting the Fire Alarm essay as merely arguing that we can’t predict rapid progress before it’s too late. The essay itself dispels this interpretation: “When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.”)

Accelerating change

Continuing the discussion of the three schools of singularity, this version of AI takeoff is most closely associated with Ray Kurzweil. Accelerating change is characterized by AI capability trajectories that follow smooth exponential curves. It shares with continuous takeoff the predictability of AI developments, but it is narrower and makes much more specific predictions.

Individual vs. collective takeoff

Kaj Sotala has used the terms “individual takeoff” and “collective takeoff”, which I think are roughly synonymous with the local vs. global distinction from the Foom debate. Other terms that often come up are “distributed” and “diffuse”, “unipolar” vs. “multipolar”, and “decisive strategic advantage.”

Goertzel’s semihard takeoff

I can’t say much about this one except that it’s in between a soft and a hard takeoff.


Further reading

The AI Foom debate

A Contra Foom Reading List and Reflections on Intelligence, from Magnus Vinding

Self-improving AI: an Analysis, from John Storrs Hall

How sure are we about this AI stuff?, from Ben Garfinkel

Can We Avoid a Hard Takeoff, from Vernor Vinge