Recursive Self-Improvement

Followup to: Life’s Story Continues, Surprised by Brains, Cascades, Cycles, Insight, Recursion, Magic, Engelbart: Insufficiently Recursive, Total Nano Domination

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM”. Just to be clear on the claim, “fast” means on a timescale of weeks or hours rather than years or decades; and “FOOM” means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by e.g. ordering custom proteins over the Internet with 72-hour turnaround time). Not, “ooh, it’s a little Einstein but it doesn’t have any robot hands, how cute”.

Most people who object to this scenario, object to the “fast” part. Robin Hanson objected to the “local” part. I’ll try to handle both, though not all in one shot today.

We are setting forth to analyze the developmental velocity of an Artificial Intelligence. We’ll break down this velocity into optimization slope, optimization resources, and optimization efficiency. We’ll need to understand cascades, cycles, insight and recursion; and we’ll stratify our recursive levels into the metacognitive, cognitive, metaknowledge, knowledge, and object level.

Quick review:

  • “Optimization slope” is the goodness and number of opportunities in the volume of solution space you’re currently exploring, on whatever your problem is;

  • “Optimization resources” is how much computing power, sensory bandwidth, trials, etc. you have available to explore opportunities;

  • “Optimization efficiency” is how well you use your resources. This will be determined by the goodness of your current mind design—the point in mind design space that is your current self—along with its knowledge and metaknowledge (see below).

Optimizing yourself is a special case, but it’s one we’re about to spend a lot of time talking about.

By the time any mind solves some kind of actual problem, there’s actually been a huge causal lattice of optimizations applied - for example, human brains evolved, and then humans developed the idea of science, and then applied the idea of science to generate knowledge about gravity, and then used that knowledge of gravity to finally design a damn bridge or something.

So I shall stratify this causality into levels—the boundaries being semi-arbitrary, but you’ve got to draw them somewhere:

  • “Metacognitive” is the optimization that builds the brain—in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.

  • “Cognitive”, in humans, is the labor performed by your neural circuitry, algorithms that consume large amounts of computing power but are mostly opaque to you. You know what you’re seeing, but you don’t know how the visual cortex works. The Root of All Failure in AI is to underestimate those algorithms because you can’t see them… In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it’s often possible to distinguish cognitive algorithms and cognitive content.

  • “Metaknowledge”: Discoveries about how to discover, “Science” being an archetypal example, “Math” being another. You can think of these as reflective cognitive content (knowledge about how to think).

  • “Knowledge”: Knowing how gravity works.

  • “Object level”: Specific actual problems like building a bridge or something.

I am arguing that an AI’s developmental velocity will not be smooth; the following are some classes of phenomena that might lead to non-smoothness. First, a couple of points that weren’t raised earlier:

  • Roughness: A search space can be naturally rough—have unevenly distributed slope. With constant optimization pressure, you could go through a long phase where improvements are easy, then hit a new volume of the search space where improvements are tough. Or vice versa. Call this factor roughness.

  • Resource overhangs: Rather than resources growing incrementally by reinvestment, there’s a big bucket o’ resources behind a locked door, and once you unlock the door you can walk in and take them all.

And these other factors previously covered:

  • Cascades are when one development leads the way to another—for example, once you discover gravity, you might find it easier to understand a coiled spring.

  • Cycles are feedback loops where a process’s output becomes its input on the next round. As the classic example of a fission chain reaction illustrates, a cycle whose underlying processes are continuous may show qualitative changes of surface behavior—a threshold of criticality—the difference between each neutron leading to the emission of 0.9994 additional neutrons versus each neutron leading to the emission of 1.0006 additional neutrons. k is the effective neutron multiplication factor and I will use it metaphorically (see the small sketch after this list).

  • Insights are items of knowledge that tremendously decrease the cost of solving a wide range of problems—for example, once you have the calculus insight, a whole range of physics problems become a whole lot easier to solve. Insights let you fly through, or teleport through, the solution space, rather than searching it by hand—that is, “insight” represents knowledge about the structure of the search space itself.

and finally,

  • Recursion is the sort of thing that happens when you hand the AI the object-level problem of “redesign your own cognitive algorithms”.
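
To make the criticality threshold concrete, here is a minimal sketch (my own illustration, not from the original discussion; the round count and starting level are arbitrary) of how a tiny difference in k on either side of 1 yields qualitatively different surface behavior from the same underlying multiplication step:

# Toy illustration of the criticality threshold for a multiplicative cycle.
# k is the effective multiplication factor per round: how much new output
# each unit of output triggers on the next round.

def run_cycle(k: float, rounds: int, initial: float = 1.0) -> float:
    """Return the cycle's output level after the given number of rounds."""
    output = initial
    for _ in range(rounds):
        output *= k
    return output

for k in (0.9994, 1.0006):
    print(f"k = {k}: after 10,000 rounds, output is "
          f"{run_cycle(k, 10_000):.3g}x the starting level")

# Illustrative results: k = 0.9994 decays to about 0.0025x (the cycle dies
# out), while k = 1.0006 grows to about 403x (the cycle runs away).

Nothing about the individual step changes between the two runs; the qualitative difference comes entirely from which side of k = 1 the cycle sits on.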

Suppose I go to an AI programmer and say, “Please write me a program that plays chess.” The programmer will tackle this using their existing knowledge and insight in the domain of chess and search trees; they will apply any metaknowledge they have about how to solve programming problems or AI problems; they will process this knowledge using the deep algorithms of their neural circuitry; and this neural circuitry will have been designed (or rather its wiring algorithm designed) by natural selection.

If you go to a sufficiently sophisticated AI—more sophisticated than any that currently exists—and say, “write me a chess-playing program”, the same thing might happen: The AI would use its knowledge, metaknowledge, and existing cognitive algorithms. Only the AI’s metacognitive level would be, not natural selection, but the object level of the programmer who wrote the AI, using their knowledge and insight etc.

Now suppose that instead you hand the AI the problem, “Write a better algorithm than X for storing, associating to, and retrieving memories”. At first glance this may appear to be just another object-level problem that the AI solves using its current knowledge, metaknowledge, and cognitive algorithms. And indeed, in one sense it should be just another object-level problem. But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.

This means that the AI’s metacognitive level—the optimization process responsible for structuring the AI’s cognitive algorithms in the first place—has now collapsed to identity with the AI’s object level.

For some odd reason, I run into a lot of people who vigorously deny that this phenomenon is at all novel; they say, “Oh, humanity is already self-improving, humanity is already going through a FOOM, humanity is already in a Singularity” etc. etc.

Now to me, it seems clear that—at this point in the game, in advance of the observation—it is pragmatically worth drawing a distinction between inventing agriculture and using that to support more professionalized inventors, versus directly rewriting your own source code in RAM. Before you can even argue about whether the two phenomena are likely to be similar in practice, you need to accept that they are, in fact, two different things to be argued about.

And I do expect them to be very distinct in practice. Inventing science is not rewriting your neural circuitry. There is a tendency to completely overlook the power of brain algorithms, because they are invisible to introspection. It took a long time historically for people to realize that there was such a thing as a cognitive algorithm that could underlie thinking. And then, once you point out that cognitive algorithms exist, there is a tendency to tremendously underestimate them, because you don’t know the specific details of how your hippocampus is storing memories well or poorly—you don’t know how it could be improved, or what difference a slight degradation could make. You can’t draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems. All you can see is the knowledge and the metaknowledge, and that’s where all your causal links go; that’s all that’s visibly important.

To see the brain circuitry vary, you’ve got to look at a chimpanzee, basically. Which is not something that most humans spend a lot of time doing, because chimpanzees can’t play our games.

You can also see the tremendous overlooked power of the brain circuitry by observing what happens when people set out to program what looks like “knowledge” into Good-Old-Fashioned AIs, semantic nets and such. Roughly, nothing happens. Well, research papers happen. But no actual intelligence happens. Without those opaque, overlooked, invisible brain algorithms, there is no real knowledge—only a tape recorder playing back human words. If you have a small amount of fake knowledge, it doesn’t do anything, and if you have a huge amount of fake knowledge programmed in at huge expense, it still doesn’t do anything.

So the cognitive level—in humans, the level of neural circuitry and neural algorithms—is a level of tremendous but invisible power. The difficulty of penetrating this invisibility and creating a real cognitive level is what stops modern-day humans from creating AI. (Not that an AI’s cognitive level would be made of neurons or anything equivalent to neurons; it would just do cognitive labor on the same level of organization. Planes don’t flap their wings, but they have to produce lift somehow.)

Recursion that can rewrite the cognitive level is worth distinguishing.

But to some, having a term so narrow as to refer to an AI rewriting its own source code, and not to humans inventing farming, seems hardly open, hardly embracing, hardly communal; for we all know that to say two things are similar shows greater enlightenment than saying that they are different. Or maybe it’s as simple as identifying “recursive self-improvement” as a term with positive affective valence, so you figure out a way to apply that term to humanity, and then you get a nice dose of warm fuzzies. Anyway.

So what happens when you start rewriting cognitive algorithms?

Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.

Natural selection seems to have produced a pretty smooth trajectory of more sophisticated brains over the course of hundreds of millions of years. That gives us our first data point, with these characteristics:

  • Natural selection on sexual multicellular eukaryotic life can probably be treated as, to first order, an optimizer of roughly constant efficiency and constant resources.

  • Natural selection does not have anything akin to insights. It does sometimes stumble over adaptations that prove to be surprisingly reusable outside the context for which they were adapted, but it doesn’t fly through the search space like a human. Natural selection is just searching the immediate neighborhood of its present point in the solution space, over and over and over.

  • Natural selection does have cascades; adaptations open up the way for further adaptations.

So—if you’re navigating the search space via the ridiculously stupid and inefficient method of looking at the neighbors of the current point, without insight—with constant optimization pressure—then...

Well, I’ve heard it claimed that the evolution of biological brains has accelerated over time, and I’ve also heard that claim challenged. If there’s actually been an acceleration, I would tend to attribute that to the “adaptations open up the way for further adaptations” phenomenon—the more brain genes you have, the more chances for a mutation to produce a new brain gene. (Or, more complexly: the more organismal error-correcting mechanisms the brain has, the more likely a mutation is to produce something useful rather than fatal.) In the case of hominids in particular over the last few million years, we may also have been experiencing accelerated selection on brain proteins, per se—which I would attribute to sexual selection, or brain variance accounting for a greater proportion of total fitness variance.

Anyway, what we definitely do not see under these conditions is logarithmic or decelerating progress. It did not take ten times as long to go from H. erectus to H. sapiens as from H. habilis to H. erectus. Hominid evolution did not take eight hundred million years of additional time, after evolution immediately produced Australopithecus-level brains in just a few million years after the invention of neurons themselves.

And another, similar observation: human intelligence does not require a hundred times as much computing power as chimpanzee intelligence. Human brains are merely three times too large, and our prefrontal cortices six times too large, for a primate with our body size.

Or again: It does not seem to require 1000 times as many genes to build a human brain as to build a chimpanzee brain, even though human brains can build toys that are a thousand times as neat.

Why is this important? Because it shows that with constant optimization pressure from natural selection and no intelligent insight, there were no diminishing returns to a search for better brain designs up to at least the human level. There were probably accelerating returns (with a low acceleration factor). There are no visible speedbumps, so far as I know.

But all this is to say only of natural selection, which is not recursive.

If you have an investment whose output is not coupled to its input—say, you have a bond, and the bond pays you a certain amount of interest every year, and you spend the interest every year—then this will tend to return you a linear amount of money over time. After one year, you’ve received $10; after 2 years, $20; after 3 years, $30.

Now suppose you change the qualitative physics of the investment, by coupling the output pipe to the input pipe. Whenever you get an interest payment, you invest it in more bonds. Now your returns over time will follow the curve of compound interest, which is exponential. (Please note: Not all accelerating processes are smoothly exponential. But this one happens to be.)

The first process grows at a rate that is linear over time; the second process grows at a rate that is linear in its cumulative return so far.
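
As a minimal numerical sketch of the two regimes (the $10-per-year figure comes from the example above; treating it as a $100 bond paying 10%, and the 30-year horizon, are my own illustrative assumptions):

# Two investment regimes, per the bond example above.
# Assumed purely for illustration: a $100 bond paying 10% ($10) per year.

PRINCIPAL = 100.0
RATE = 0.10
YEARS = 30

# Regime 1: output not coupled to input -- spend the interest each year.
spent_total = PRINCIPAL * RATE * YEARS          # linear: $10 per year

# Regime 2: output coupled to input -- reinvest the interest in more bonds.
holdings = PRINCIPAL
for _ in range(YEARS):
    holdings *= (1 + RATE)                      # compound growth
reinvested_gain = holdings - PRINCIPAL

print(f"Spending the interest:    ${spent_total:,.0f} cumulative")
print(f"Reinvesting the interest: ${reinvested_gain:,.0f} cumulative")
# Roughly $300 versus $1,645 after 30 years: linear versus exponential.

The continuous-time version of this coupling is exactly the differential-equation idiom described next.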

The too-obvious mathematical idiom to describe the impact of recursion is replacing an equation

y = f(t)

with

dy/dt = f(y)

For example, in the case above, reinvesting our returns transformed the linearly growing

y = m*t

into

y’ = m*y

whose solution (with y(0) = 1) is the exponentially growing

y = e^(m*t)
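
A quick numerical check of the idiom, with m = 0.1 and y(0) = 1 chosen purely for illustration: integrating dy/dt = m*y forward in small Euler steps reproduces e^(m*t), while y = m*t plods along linearly.

# Uncoupled growth y = m*t versus coupled growth dy/dt = m*y.
import math

m = 0.1        # growth coefficient (illustrative)
dt = 0.001     # Euler step size
T = 50.0       # time horizon

y_coupled = 1.0            # initial condition y(0) = 1
t = 0.0
while t < T:
    y_coupled += m * y_coupled * dt    # dy/dt = m*y, integrated numerically
    t += dt

y_linear = m * T                        # y = m*t evaluated at time T
y_exact = math.exp(m * T)               # closed-form solution e^(m*t)

print(f"y = m*t       at t = {T:.0f}: {y_linear:.2f}")
print(f"dy/dt = m*y   at t = {T:.0f}: {y_coupled:.2f}")
print(f"y = e^(m*t)   at t = {T:.0f}: {y_exact:.2f}")
# The Euler result (about 148) tracks e^(m*t) (about 148.4); m*t is only 5.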

Now… I do not think you can really solve equations like this to get anything like a description of a self-improving AI.

But it’s the obvious reason why I don’t expect the future to be a continuation of past trends. The future contains a feedback loop that the past does not.

As a different Eliezer Yudkowsky wrote, very long ago:

“If computing power doubles every eighteen months, what happens when computers are doing the research?”

And this sounds horrifyingly naive to my present ears, because that’s not really how it works at all—but still, it illustrates the idea of “the future contains a feedback loop that the past does not”.

History up until this point was a long story about natural selection producing humans, and then, after humans hit a certain threshold, humans starting to rapidly produce knowledge and metaknowledge that could—among other things—feed more humans and support more of them in lives of professional specialization.

To a first approximation, natural selection held still during human cultural development. Even if Gregory Clark’s crazy ideas are crazy enough to be true—i.e., some human populations evolved lower discount rates and more industrious work habits over the course of just a few hundred years from 1200 to 1800—that’s just tweaking a few relatively small parameters; it is not the same as developing new complex adaptations with lots of interdependent parts. It’s not a chimp-human type gap.

So then, with human cognition remaining more or less constant, we found that knowledge feeds off knowledge with k > 1—given a background of roughly constant cognitive algorithms at the human level. We discovered major chunks of metaknowledge, like Science and the notion of Professional Specialization, that changed the exponents of our progress; having lots more humans around, due to e.g. the object-level innovation of farming, may also have played a role. Progress in any one area tended to be choppy, with large insights leaping forward, followed by a lot of slow incremental development.

With history to date, we’ve got a series of integrals looking something like this:

Metacognitive = natural selection, optimization efficiency/resources roughly constant

Cognitive = Human intelligence = integral of evolutionary optimization velocity over a few hundred million years, then roughly constant over the last ten thousand years

Metaknowledge = Professional Specialization, Science, etc. = integral over cognition we did about procedures to follow in thinking, where metaknowledge can also feed on itself, there were major insights and cascades, etc.

Knowledge = all that actual science, engineering, and general knowledge accumulation we did = integral of cognition+metaknowledge(current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process

Object level = stuff we actually went out and did = integral of cognition+metaknowledge+knowledge(current solutions); over a short timescale this tends to be smoothly exponential to the degree that the people involved understand the idea of investments competing on the basis of interest rate, but over medium-range timescales the exponent varies, and on a long range the exponent seems to increase

If you were to summarize that in one breath, it would be, “with constant natural selection pushing on brains, progress was linear or mildly accelerating; with constant brains pushing on metaknowledge and knowledge and object-level progress feeding back to metaknowledge and optimization resources, progress was exponential or mildly superexponential”.

Now fold back the object level so that it becomes the metacognitive level.

And note that we’re doing this through a chain of differential equations, not just one; it’s the final output at the object level, after all those integrals, that becomes the velocity of metacognition.

You should get...

...very fast progress? Well, no, not necessarily. You can also get nearly zero progress.

If you’re a recursified optimizing compiler, you rewrite yourself just once, get a single boost in speed (like 50% or something), and then never improve yourself any further, ever again.

If you’re EURISKO, you manage to modify some of your metaheuristics, and the metaheuristics work noticeably better, and they even manage to make a few further modifications to themselves, but then the whole process runs out of steam and flatlines.

It was human intelligence that produced these artifacts to begin with. Their own optimization power is far short of human—so incredibly weak that, after they push themselves along a little, they can’t push any further. Worse, their optimization at any given level is characterized by a limited number of opportunities, which once used up are gone—extremely sharp diminishing returns.

When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.
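
Here is a deliberately crude toy model of folding the chain back on itself (entirely my own construction, with an arbitrary functional form and constants, not anything from the argument above): capability is reinvested into improving capability, dc/dt = c^p, and the exponent p stands in for the law of returns on self-improvement. Almost any choice of p lands the trajectory on one side or the other; only the knife-edge value gives a steady exponential.

# Toy model: capability c is reinvested into improving capability,
# dc/dt = c**p, where p encodes the law of returns on self-improvement.

def capability_at(t_end: float, p: float, c0: float = 1.0, dt: float = 0.001) -> float:
    """Euler-integrate dc/dt = c**p from c(0) = c0 up to time t_end."""
    c, t = c0, 0.0
    while t < t_end:
        c += (c ** p) * dt
        t += dt
        if c > 1e15:
            return float("inf")    # effectively a finite-time blow-up
    return c

for p, behavior in [
    (0.5, "diminishing returns: sub-exponential, each doubling takes longer than the last"),
    (1.0, "knife-edge: steady exponential, constant doubling time"),
    (1.5, "accelerating returns: finite-time blow-up"),
]:
    print(f"p = {p}: c(20) = {capability_at(20.0, p):.3g}  <- {behavior}")

In this toy, choppiness, cascades, and resource overhangs would only jitter p around; the point is that hitting and staying in the narrow band around p = 1 is the special case, not the generic one.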

The observed history of optimization to date makes this even more unlikely. I don’t see any reasonable way that you can have constant evolution produce human intelligence on the observed historical trajectory (linear or accelerating), and constant human intelligence produce science and technology on the observed historical trajectory (exponential or superexponential), and fold that in on itself, and get out something whose rate of progress is in any sense anthropomorphic. From our perspective it should either flatline or FOOM.

When you first build an AI, it’s a baby—if it had to improve itself, it would almost immediately flatline. So you push it along using your own cognition, metaknowledge, and knowledge—not getting any benefit of recursion in doing so, just the usual human idiom of knowledge feeding upon itself and insights cascading into insights. Eventually the AI becomes sophisticated enough to start improving itself, not just small improvements, but improvements large enough to cascade into other improvements. (Though right now, due to lack of human insight, what happens when modern researchers push on their AGI design is mainly nothing.) And then you get what I. J. Good called an “intelligence explosion”.

I even want to say that the functions and curves being such as to allow hitting the soft takeoff keyhole, is ruled out by observed history to date. But there are small conceivable loopholes, like “maybe all the curves change drastically and completely as soon as we get past the part we know about in order to give us exactly the right anthropomorphic final outcome”, or “maybe the trajectory for insightful optimization of intelligence has a law of diminishing returns where blind evolution gets accelerating returns”.

There are other factors contributing to hard takeoff, like the existence of hardware overhang in the form of the poorly defended Internet and fast serial computers. There’s more than one possible species of AI we could see, given this whole analysis. I haven’t yet touched on the issue of localization (though the basic issue is obvious: the initial recursive cascade of an intelligence explosion can’t race through human brains because human brains are not modifiable until the AI is already superintelligent).

But today’s post is already too long, so I’d best continue tomorrow.

Post scriptum: It occurred to me just after writing this that I’d been victim of a cached Kurzweil thought in speaking of the knowledge level as “exponential”. Object-level resources are exponential in human history because of physical cycles of reinvestment. If you try defining knowledge as productivity per worker, I expect that’s exponential too (or productivity growth would be unnoticeable by now as a component in economic progress). I wouldn’t be surprised to find that published journal articles are growing exponentially. But I’m not quite sure that it makes sense to say humanity has learned as much since 1938 as in all earlier human history… though I’m quite willing to believe we produced more goods… then again we surely learned more since 1500 than in all the time before. Anyway, human knowledge being “exponential” is a more complicated issue than I made it out to be. But human object level is more clearly exponential or superexponential.