Intelligence in Economics

Followup to: Economic Definition of Intelligence?

After I challenged Robin to show how economic concepts can be useful in defining or measuring intelligence, Robin responded by—as I interpret it—challenging me to show why a generalized concept of “intelligence” is any use in economics.

Well, I’m not an economist (as you may have noticed) but I’ll try to respond as best I can.

My primary view of the world tends to be through the lens of AI. If I talk about economics, I’m going to try to subsume it into notions like expected utility maximization (I manufacture lots of copies of something that I can use to achieve my goals) or information theory (if you manufacture lots of copies of something, my probability of seeing a copy goes up). This subsumption isn’t meant to be some kind of challenge for academic supremacy—it’s just what happens if you ask an AI guy an econ question.

So first, let me describe what I see when I look at economics:

I see a special case of game theory in which some interactions are highly regular and repeatable: You can take 3 units of steel and 1 unit of labor and make 1 truck that will transport 5 units of grain between Chicago and Manchester once per week, and agents can potentially do this over and over again. If the numbers aren’t constant, they’re at least regular—there’s diminishing marginal utility, or supply/demand curves, rather than rolling random dice every time. Imagine economics if no two elements of reality were fungible—you’d just have a huge incompressible problem in non-zero-sum game theory.
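To make “regular and repeatable” concrete, here is a minimal sketch in Python; the recipe numbers (3 steel, 1 labor, 5 grain per week) are just the illustrative figures from the paragraph above, not data about any real economy.

```python
# A minimal sketch: production as a regular, repeatable recipe, so an agent
# can plan over many repetitions instead of re-solving a one-off game each time.
# (Illustrative numbers only.)

def build_trucks(steel: float, labor: float) -> int:
    """Each truck consumes 3 units of steel and 1 unit of labor."""
    return int(min(steel / 3, labor / 1))

def grain_moved_per_week(trucks: int) -> int:
    """Each truck moves 5 units of grain between Chicago and Manchester weekly."""
    return 5 * trucks

trucks = build_trucks(steel=30, labor=12)      # -> 10 trucks
print(trucks, grain_moved_per_week(trucks))    # -> 10 50
```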

This may be, for example, why we don’t think of scientists writing papers that build on the work of other scientists in terms of an economy of science papers—if you turn an economist loose on science, they may measure scientist salaries paid in fungible dollars, or try to see whether scientists trade countable citations with each other. But it’s much less likely to occur to them to analyze the way that units of scientific knowledge are produced from previous units plus scientific labor. Where information is concerned, two identical copies of a file are the same information as one file. So every unit of knowledge is unique, non-fungible, and so is each act of production. There isn’t even a common currency that measures how much a given paper contributes to human knowledge. (I don’t know what economists don’t know, so do correct me if this is actually extensively studied.)

Since “intelligence” deals with an informational domain, building a bridge from it to economics isn’t trivial—but where do factories come from, anyway? Why do humans get a higher return on capital than chimpanzees?

I see two basic bridges between intelligence and economics.

The first bridge is the role of intelligence in economics: the way that steel is put together into a truck involves choosing one out of an exponentially vast number of possible configurations. With a more clever configuration, you may be able to make a truck using less steel, or less labor. Intelligence also plays a role at a larger scale, in deciding whether or not to buy a truck, or where to invest money. We may even be able to talk about something akin to optimization at a macro scale, the degree to which the whole economy has put itself together in a special configuration that earns a high rate of return on investment. (Though this introduces problems for my own formulation, as I assume a central preference ordering / utility function that an economy doesn’t possess—still, deflated monetary valuations seem like a good proxy.)
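As a toy illustration of that exponential choice, here is a brute-force search over an invented space of binary design choices; the steel-cost model is made up purely for the example, and a real design space would be far too large to enumerate this way.

```python
# Toy sketch of "choosing one configuration out of an exponentially vast space":
# each design is a bit-vector of hypothetical engineering choices, and a cleverer
# configuration needs less steel per working truck. The cost model is invented.

from itertools import product

N_CHOICES = 12  # 2**12 = 4096 designs; real spaces are astronomically larger

def steel_needed(design: tuple) -> float:
    # Invented cost model: each enabled choice shaves steel off a 3.0 baseline,
    # but adjacent enabled choices conflict and add some cost back.
    saving = 0.05 * sum(design)
    conflict = 0.02 * sum(design[i] * design[i + 1] for i in range(N_CHOICES - 1))
    return 3.0 - saving + conflict

best = min(product([0, 1], repeat=N_CHOICES), key=steel_needed)
print(best, steel_needed(best))
```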

The second bridge is the role of economics in intelligence: if you jump up a meta-level, there are repeatable cognitive algorithms underlying the production of unique information. These cognitive algorithms use some resources that are fungible, or at least material enough that you can only use the resource on one task, creating a problem of opportunity costs. (A unit of time will be an example of this for almost any algorithm.) Thus we have Omohundro’s resource balance principle, which says that the inside of an efficiently organized mind should have a common currency in expected utilons.
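To gesture at what that common currency might cash out to, here is a minimal sketch (my own toy formalization, not a quotation of Omohundro): with invented diminishing-returns curves and a fixed time budget allocated greedily, the marginal expected utilons per unit of time end up roughly equal across tasks, which is one way of stating the resource-balance idea.

```python
# Rough sketch of the resource-balance idea: split a fixed time budget across
# cognitive tasks so that, at the optimum, marginal expected utilons per second
# are roughly equal across tasks. Curves below are invented for illustration.

import math

# Hypothetical diminishing-returns curves: expected utilons from t seconds on each task.
tasks = {
    "design_search": lambda t: 10 * math.log1p(t),
    "memory_consolidation": lambda t: 4 * math.sqrt(t),
    "sensor_processing": lambda t: 6 * math.log1p(0.5 * t),
}

budget, step = 1000.0, 1.0
alloc = {name: 0.0 for name in tasks}

# Greedy allocation: give each successive slice of time to whichever task
# currently offers the highest marginal expected utility for it.
for _ in range(int(budget / step)):
    best = max(tasks, key=lambda n: tasks[n](alloc[n] + step) - tasks[n](alloc[n]))
    alloc[best] += step

for name, t in alloc.items():
    marginal = tasks[name](t + step) - tasks[name](t)
    print(f"{name}: {t:.0f}s allocated, marginal ~ {marginal:.4f} utilons/slice")
```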

Says Robin:

‘Eliezer has just raised the issue of how to define “intelligence”, a concept he clearly wants to apply to a very wide range of possible systems. He wants a quantitative concept that is “not parochial to humans,” applies to systems with very “different utility functions,” and that summarizes the system’s performance over a broad “not … narrow problem domain.” My main response is to note that this may just not be possible. I have no objection to looking, but it is not obvious that there is any such useful broadly-applicable “intelligence” concept.’

Well, one might run into some trouble assigning a total ordering to all intelligences, as opposed to a partial ordering. But that intelligence as a concept is useful—especially the way that I’ve defined it—that I must strongly defend. Our current science has advanced further on some problems than others. Right now, there is a better understanding of the steps carried out to construct a car than of the cognitive algorithms that invented the unique car design. But they are both, to some degree, regular and repeatable; we don’t all have different brain architectures.

I generally inveigh against focusing on relatively minor between-human variations when discussing “intelligence”. It is controversial what role is played in the modern economy by such variations in whatever-IQ-tests-try-to-measure. Anyone who denies that some such role exists would be a poor deluded fool indeed. But, on the whole, we needn’t expect “the role played by IQ variations” to be at all the same sort of question as “the role played by intelligence”.

You will surely find no cars if you take away the mysterious “intelligence” that produces, from out of a vast exponential space, the information that describes one particular configuration of steel etc. constituting a car design. Without optimization to conjure certain informational patterns out of vast search spaces, the modern economy evaporates like a puff of smoke.

So you need some account of where the car design comes from.

Why should you try to give the same account of “intelligence” across different domains? When someone designs a car, or an airplane, or a hedge-fund trading strategy, aren’t these different designs?

Yes, they are different informational goods.

And wasn’t it a different set of skills that produced them? You can’t just take a car designer and plop them down in a hedge fund.

True, but where did the different skills come from?

From going to different schools.

Where did the different schools come from?

They were built by different academic lineages, compounding knowledge upon knowledge within a line of specialization.

But where did so many different academic lineages come from? And how is this trick of “compounding knowledge” repeated over and over?

Keep moving meta, and you’ll find a regularity, something repeatable: you’ll find humans, with common human genes that construct common human brain architectures.

No, not every discipline puts the same relative strain on the same brain areas. But they are all using human parts, manufactured by mostly-common DNA. Not all the adult brains are the same, but they learn into unique adulthood starting from a much more regular underlying set of learning algorithms. We should expect less variance in infants than in adults.

And all the adaptations of the human brain were produced by the (structurally much simpler) processes of natural selection. Without that earlier and less efficient optimization process, there wouldn’t be a human brain design, and hence no human brains.

Subtract the human brains executing repeatable cognitive algorithms, and you’ll have no unique adulthoods produced by learning; and no grown humans to invent the cultural concept of science; and no chains of discoveries that produce scientific lineages; and no engineers who attend schools; and no unique innovative car designs; and thus, no cars.

The moral being that you can generalize across domains, if you keep tracing back the causal chain and keep going meta.

It may be harder to talk about “intelligence” as a common factor in the full causal account of the economy than to talk about the repeated operation that puts together many instantiations of the same car design—but there is a common factor, and the economy could hardly exist without it.

As for generalizing away from humans—well, what part of the notion of “efficient cross-domain optimization” ought to apply only to humans?