Underconstrained Abstractions

Followup to: The Weak Inside View

Saith Robin:

“It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions. To see if such things are useful, we need to vet them, and that is easiest “nearby”, where we know a lot. When we want to deal with or understand things “far”, where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near. Far is just the wrong place to try new things.”

Well… I understand why one would have that reaction. But I’m not sure we can really get away with that.

When possible, I try to talk in concepts that can be verified with respect to existing history. When I talk about natural selection not running into a law of diminishing returns on genetic complexity or brain size, I’m talking about something that we can try to verify by looking at the capabilities of other organisms with brains big and small. When I talk about the boundaries to sharing cognitive content between AI programs, you can look at the field of AI the way it works today and see that, lo and behold, there isn’t a lot of cognitive content shared.

But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard thing to predict.

Let’s say that instead of using my complicated-sounding disjunction (many different reasons why the growth trajectory might contain an upward cliff, which don’t all have to be true), I staked my whole story on the critical threshold of human intelligence. Saying, “Look how sharp the slope is here!”—well, it would sound like a simpler story. It would be closer to fitting on a T-Shirt. And by talking about just that one abstraction and no others, I could make it sound like I was dealing in verified historical facts—humanity’s evolutionary history is something that has already happened.

But speaking of an abstraction being “verified” by previous history is a tricky thing. There is this little problem of underconstraint—of there being more than one possible abstraction that the data “verifies”.

In “Cascades, Cycles, Insight” I said that economics does not seem to me to deal much in the origins of novel knowledge and novel designs, and said, “If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.” This challenge was answered by comments directing me to some papers on “endogenous growth”, which happens to be the name of theories that don’t take productivity improvements as exogenous forces.

I’ve looked at some literature on endogenous growth. And don’t get me wrong, it’s probably not too bad as economics. However, the seminal literature talks about ideas being generated by combining other ideas, so that if you’ve got N ideas already and you’re combining them three at a time, that’s a potential N!/(3!(N − 3)!) new ideas to explore. It then goes on to note that, in this case, there will be vastly more ideas than anyone can explore, so that the rate at which ideas are exploited will depend more on a paucity of explorers than a paucity of ideas.
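For concreteness, here is a minimal sketch of that combinatorial claim (the idea counts are arbitrary, chosen only to show how fast the number of three-way combinations outruns any plausible number of explorers):

```python
from math import comb

# Sketch of the recombinant-growth arithmetic: with N existing ideas combined
# three at a time, the number of potential new combinations is
# C(N, 3) = N! / (3! * (N - 3)!).  The N values below are arbitrary.
for n in (10, 100, 1_000, 10_000):
    print(f"N = {n:>6,}  ->  C(N, 3) = {comb(n, 3):>18,}")

# Already at N = 10,000 that is roughly 1.7e11 potential combinations -- far
# more than any population of researchers could explore, which is the only
# feature of this math that the theory actually ends up using.
```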

Well… first of all, the notion that “ideas are generated by combining other ideas N at a time” is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in 5 seconds or less. It’s not as if any experiment was performed to actually watch ideas recombining. Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from...

But more importantly, if the only proposition you actually use in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be partially verified by testing your theory.

Even if a recombinant growth theory can be fit to the data, the historical data still underconstrains the many possible abstractions that might describe the number of possible ideas available—any hypothesis that implies “more ideas than people to exploit them” will fit the same data equally well. You should simply say, “I assume there are more ideas than people to exploit them”, not go so far into mathematical detail as to talk about N choose 3 ideas. It’s not just that the dangling math here is underconstrained by the previous data; you’re not even using it going forward.

(And does it even fit the data? I have friends in venture capital who would laugh like hell at the notion that there’s an unlimited number of really good ideas out there. Some kind of Gaussian or power-law or something distribution for the goodness of available ideas seems more in order… I don’t object to “endogenous growth” simplifying things for the sake of having one simplified abstraction and seeing if it fits the data well; we all have to do that. Claiming that the underlying math doesn’t just let you build a useful model, but also has a fairly direct correspondence to reality, ought to be a whole ’nother story, in economics—or so it seems to me.)

(If I merely misinterpret the endogenous growth literature or underestimate its sophistication, by all means correct me.)

The further away you get from highly regular things like atoms, and the closer you get to surface phenomena that are the final products of many moving parts, the more history underconstrains the abstractions that you use. This is part of what makes futurism difficult. If there were obviously only one story that fit the data, who would bother to use anything else?

Is Moore’s Law a story about the increase in computing power over time - the number of transistors on a chip, as a function of how far the planets have spun in their orbits, or how many times a light wave emitted from a cesium atom has changed phase?

Or does the same data equally verify a hypothesis about exponential increases in investment in manufacturing facilities and R&D, with an even higher exponent, showing a law of diminishing returns?

Or is Moore’s Law showing the increase in computing power, as a function of some kind of optimization pressure applied by human researchers, themselves thinking at a certain rate?

That last one might seem hard to verify, since we’ve never watched what happens when a chimpanzee tries to work in a chip R&D lab. But on some raw, elemental level—would the history of the world really be just the same, proceeding on just exactly the same timeline as the planets move in their orbits, if, for these last fifty years, the researchers themselves had been running on the latest generation of computer chip at any given point? That sounds to me even sillier than having a financial model in which there’s no way to ask what happens if real estate prices go down.
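To make that underconstraint concrete, here is a toy sketch in which every constant is made up: the same transistor-count-versus-time curve is generated both by a pure “doubles every two years on the clock” story and by a “diminishing returns on exponentially growing investment” story, so the transistor-versus-time data alone cannot distinguish them.

```python
import math

# Toy illustration only: every constant here is invented.  Two different
# abstractions generate exactly the same transistor-count-vs-time curve.

T0 = 2_300                      # pretend transistor count at t = 0 (years)
I0 = 1.0                        # pretend R&D investment at t = 0 (arbitrary units)
g = 0.60                        # investment assumed to grow at 60% per year
alpha = math.log(2) / (2 * g)   # diminishing-returns exponent, < 1 by construction
c = T0 / I0 ** alpha

def transistors_clock(t):
    """Story A: transistor count doubles every 2 years, as a function of time alone."""
    return T0 * 2 ** (t / 2)

def transistors_investment(t):
    """Story B: transistors = c * investment**alpha, with investment growing exponentially."""
    investment = I0 * math.exp(g * t)
    return c * investment ** alpha

for t in range(0, 21, 5):
    print(f"t = {t:2d}   clock story: {transistors_clock(t):14.0f}   "
          f"investment story: {transistors_investment(t):14.0f}")

# The two columns agree to floating-point precision, because alpha was chosen so
# that c * (I0 * e**(g*t))**alpha equals T0 * 2**(t/2).  The transistor-vs-time
# data "verifies" both abstractions equally well; only the investment series (or
# some way to intervene on it) could separate them.
```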

And then, when you apply the abstraction going forward, there’s the question of whether there’s more than one way to apply it—which is one reason why a lot of futurists tend to dwell in great gory detail on the past events that seem to support their abstractions, but just assume a single application forward.

E.g. Moravec in ’88, spending a lot of time talking about how much “computing power” the human brain seems to use—but much less time talking about whether an AI would use the same amount of computing power, or whether using Moore’s Law to extrapolate the arrival of the first supercomputer of that size is the right way to time the arrival of AI. (Moravec thought we were supposed to have AI around now, based on his calculations—and he underestimated the size of the supercomputers we’d actually have in 2008.)
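To spell out the kind of extrapolation at issue, here is a back-of-the-envelope sketch; the brain estimate, current hardware figure, and doubling time are placeholders, not Moravec’s actual numbers:

```python
import math

# The bare-bones Moore's-Law timing argument, with placeholder inputs.
brain_ops_per_sec = 1e16        # assumed estimate of the brain's "computing power"
current_ops_per_sec = 1e12      # assumed computing power available today
doubling_time_years = 1.5       # assumed doubling time for available computing power

# Years until hardware "catches up", if the trend simply continues:
years_to_parity = doubling_time_years * math.log2(brain_ops_per_sec / current_ops_per_sec)
print(f"Hardware parity in roughly {years_to_parity:.0f} years")   # ~20 with these inputs

# Note what the calculation silently assumes: that an AI needs the same amount of
# computing power as a brain, and that hardware arrival is what times the arrival
# of AI at all -- exactly the places where the abstraction leaves slack.
```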

That’s another part of what makes futurism difficult—after you’ve told your story about the past, even if it seems like an abstraction that can be “verified” with respect to the past (but what if you overlooked an alternative story for the same evidence?), that story often leaves a lot of slack with regard to exactly what will happen with respect to that abstraction, going forward.

So if it’s not as simple as just using the one trick of finding abstractions you can easily verify on available data...

...what are some other tricks to use?