The Robots, AI, and Unemployment Anti-FAQ

Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A. Conventional economic theory says this shouldn’t happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns. On standard economic theory, improved productivity—including from automating away some jobs—should produce increased standards of living, not long-term unemployment.
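The arithmetic behind this parable can be checked in a few lines (a minimal sketch, using the labor costs from the example above):

```python
def equilibrium(total_labor, hot_dog_cost, bun_cost):
    """Hot dogs and buns are consumed in matched pairs, so in equilibrium
    all labor goes into pairs costing (hot_dog_cost + bun_cost) each."""
    return total_labor // (hot_dog_cost + bun_cost)

# Before automation: 2 units per hot dog + 1 per bun = 3 units per pair.
print(equilibrium(30, 2, 1))  # 10 hot dogs in 10 buns

# After automation: 1 unit per hot dog + 1 per bun = 2 units per pair.
print(equilibrium(30, 1, 1))  # 15 hot dogs in 15 buns
```

The same 30 units of labor now produce more hot dogs in buns for everyone, which is the sense in which the productivity gain shows up as a higher standard of living rather than as fewer jobs.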

Q. Sounds like a lovely theory. As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact. Experiment trumps theory, and in reality, unemployment is rising.

A. Sure. Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries, in which we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries). We don’t live in a world where 93% of the people are unemployed because 93% of the jobs went away. The naive picture of automation removing a job, and thus the economy having one fewer job, has not been the way the world has worked since the Industrial Revolution. The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries. Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should. The idea that there’s a limited amount of work which is destroyed by automation is known in economics as the “lump of labour fallacy”.

Q. But now people aren’t being reemployed. The jobs that went away in the Great Recession aren’t coming back, even as the stock market and corporate profits rise again.

A. Yes. And that’s a new problem. We didn’t get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence. The difficulty with supposing that automation is producing unemployment is that automation isn’t new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

Baxter robot

Q. Maybe we’ve finally reached the point where there’s no work left to be done, or where all the jobs that people can easily be retrained into can be even more easily automated.

A. You talked about jobs going away in the Great Recession and then not coming back. Well, the Great Recession wasn’t produced by a sudden increase in productivity; it was produced by… I don’t want to use fancy terms like “aggregate demand shock”, so let’s just call it problems in the financial system. The point is, in previous recessions the jobs came back strongly once NGDP rose again. (Nominal Gross Domestic Product—roughly the total amount of money being spent in face-value dollars.) Now there’s been a recession and the jobs aren’t coming back (in the US and EU), even though NGDP has risen back to its previous level (at least in the US). If the problem is automation, and we didn’t experience any sudden leap in automation in 2008, then why can’t people get back at least the jobs they used to have, as they did in previous recessions? Something has gone wrong with the engine of reemployment.

Q. And you don’t think that what’s gone wrong with the engine of reemployment is that it’s easier to automate the lost jobs than to hire someone new?

A. No. That’s something you could say just as easily about the ‘lost’ jobs from hand-weaving when mechanical looms came along. Some new obstacle is preventing jobs lost in the 2008 recession from coming back. Which may indeed mean that jobs eliminated by automation are also not coming back. And new high school and college graduates entering the labor market, likewise usually a good thing for an economy, will just end up being sad and unemployed. But this must mean something new and awful is happening to the processes of employment—it’s not because the kind of automation that’s happening today is different from automation in the 1990s, 1980s, 1920s, or 1870s; there were skilled jobs lost then, too. It should also be noted that automation has been a comparatively small force this decade next to shifts in global trade—which have also been going on for centuries and have also previously been a hugely positive economic force. But if something is generally wrong with reemployment, then it might be possible for increased trade with China to result in permanently lost jobs within the US, in direct contrast to the way it’s worked over all previous economic history. But just like new college graduates ending up unemployed, something else must be going very wrong (something that wasn’t going wrong in 1960) for anything so unusual to happen!

Q. What if what’s changed is that we’re out of new jobs to create? What if we’ve already got enough hot dog buns, for every kind of hot dog bun there is in the labor market, and now AI is automating away the last jobs and the last of the demand for labor?

A. This does not square with our being unable to recover the jobs that existed before the Great Recession. Or with lots of the world living in poverty. To imagine a situation much more extreme than the actual one: there was a time when professionals usually had personal cooks and maids—as Agatha Christie said, “When I was young I never expected to be so poor that I could not afford a servant, or so rich that I could afford a motor car.”

Many people would hire personal cooks or maids if they could afford them, which is the sort of new service that ought to come into existence if other jobs were eliminated—the reason maids became less common is that they were offered better jobs, not because demand for that form of human labor stopped existing. Or to be less extreme, there are lots of businesses who’d take nearly-free employees in various occupations, if those employees could be hired literally at minimum wage and legal liability weren’t an issue. Right now we haven’t run out of want or use for human labor, so how could “The End of Demand” be producing unemployment right now? The fundamental fact that’s driven employment over the course of previous human history is that it is a very strange state of affairs for somebody sitting around doing nothing to have nothing better to do. We do not literally have nothing better for unemployed workers to do. Our civilization is not that advanced. So we must be doing something wrong (which we weren’t doing wrong in 1950).

Q. So what is wrong with “reemployment”, then?

A. I know less about macroeconomics than I know about AI, but even I can see all sorts of changed circumstances which are much more plausible sources of novel employment dysfunction than the relatively steady progress of automation. Among developed countries that seem to be doing okay on reemployment, Australia hasn’t had any drops in employment, and its monetary policy has kept nominal GDP growth on a much steadier keel—using its central bank to regularize the number of face-value Australian dollars being spent—which an increasing number of influential econbloggers think the US, and even more so the EU, have been getting catastrophically wrong. Though that’s a long story.[1] Germany saw unemployment drop from 11% to 5% from 2006-2012 after implementing a series of labor market reforms, though there were other things going on during that time. (Germany has twice the number of robots per capita as the US, which probably isn’t significant to its larger macroeconomic trends, but would be a strange fact if robots were the leading cause of unemployment.) Labor markets and monetary policy are both major, obvious, widely-discussed candidates for what could’ve changed between now and the 1950s to make reemployment harder. And though I’m not a leading econblogger, some other obvious-seeming thoughts that occur to me are:

* Many industries that would otherwise be accessible to relatively less skilled labor have much higher barriers to entry now than in 1950. Taxi medallions, governments saving us from the terror of unlicensed haircuts, fees and regulatory burdens associated with new businesses—all things that could’ve plausibly changed between now and the previous four centuries. This doesn’t apply only to unskilled labor, either; in 1900 it was a lot easier, legally speaking, to set up shop as a doctor. (Yes, the average doctor was substantially worse back then. But ask yourself whether some simple, repetitive medical surgery should really, truly require 11 years of medical school and residency, rather than a 2-year vocational training program for someone with high dexterity and good focus.) These sorts of barriers to entry allow people who are currently employed in a field to extract value from people trying to get jobs in that field (and from the general population too, of course). In any one sector this wouldn’t hurt the whole economy too much, but if it happens everywhere at once, that could be the problem.

* True effective marginal tax rates on low-income families have gone up today compared to the 1960s, after all phasing-out benefits are taken into account, counting federal and state taxes, city sales taxes, and so on. I’ve seen figures tossed around like 70% and worse, and this seems like the sort of thing that could easily trash reemployment.[2]

* Perhaps companies are, for some reason, less willing to hire previously unskilled people and train them on the job. Empirically this seems to be more true today than in the 1950s. If I were to guess at why, I would say that employees moving more from job to job, and fewer life-long jobs, make it less rewarding for employers to invest in training an employee; and also college is more universal now than then. Which means that employers might try to rely on colleges to train employees, and this is a function colleges can’t actually handle, because:

* The US educational system is either getting worse at training people to handle new jobs, or getting so much more expensive that people can’t afford retraining, for various other reasons. (Plus, we are really stunningly stupid about matching educational supply to labor demand. How completely ridiculous is it to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so? Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating in that subject given their test scores? The more so considering that this is a central allocation question for the entire economy? But I have no particular reason to believe this part has gotten worse since 1960.)

* The financial system is staring much more at the inside of its eyelids now than in the 1980s. This could be making it harder for expanding businesses to get loans at terms they would find acceptable, or making it harder for expanding businesses to access capital markets at acceptable terms, or interfering with central banks’ attempts to regularize nominal demand, or acting as a brake on the system in some other fashion.

* Hiring a new employee now exposes an employer to more downside risk of being sued, or risk of being unable to fire the new employee if it turns out to be a bad decision. Human beings, including employers, are very averse to downside risk, so this could plausibly be a major obstacle to reemployment. Such risks are a plausible major factor in making the decision to hire someone hedonically unpleasant for the person who has to make that decision, which could’ve changed between now and 1950. (If your sympathies are with employees rather than employers, please consider that, nonetheless, if you pass any protective measure that makes the decision to hire somebody less pleasant for the hirer, fewer people will be hired, and this is not good for people seeking employment. Many labor market regulations transfer wealth or job security to the already-employed at the expense of the unemployed, and these have been increasing over time.)

* Tyler Cowen’s Zero Marginal Product Workers hypothesis: Anyone long-term-unemployed has now been swept into a group of people who have less than zero average marginal productivity, due to some of the people in this pool being negative-marginal-product workers who will destroy value, and employers not being able to tell the difference. We need some new factor to explain why this wasn’t true in 1950, and obvious candidates would be (1) legal liability making past-employer references unreliable and (2) expanded use of college credentialing sweeping up more of the positive-product workers, so that the average product of the uncredentialed workers drops.

* There’s a thesis (whose most notable proponent I know is Peter Thiel, though this is not exactly how Thiel phrases it) that real, material technological change has been dying. If you can build a feature-app and flip it to Google for $20M in an acqui-hire, why bother trying to invent the next Model T? Maybe working on hard technology problems using math and science until you can build a liquid fluoride thorium reactor has been made to seem less attractive to brilliant young kids than flipping a $20M company to Google or becoming a hedge-fund trader (and this is truer today relative to 1950).[3]

* Closely related to the above: Maybe change in atoms, instead of bits, has been regulated out of existence. The expected biotech revolution never happened because the FDA is just too much of a roadblock (it adds a great deal of expense, significant risk, and, most of all, delays the returns beyond venture capital time horizons). It’s plausible we’ll never see a city with a high-speed all-robotic all-electric car fleet because the government, after lobbying from various industries, will require human attendants on every car—for safety reasons, of course! If cars were invented nowadays, the horse-and-saddle industry would surely try to arrange for them to be regulated out of existence, or sued out of existence, or limited to the same speed as horses to ensure existing buggies remained safe. Patents are also an increasing drag on innovation in its most fragile stages, and may shortly bring an end to the remaining life in software startups as well. (But note that this thesis, like the one above, seems hard-pressed to account for jobs not coming back after the Great Recession. It is not conventional macroeconomics that re-employment after a recession requires macro sector shifts or new kinds of technology jobs. The above is more of a Great Stagnation thesis of “What happened to productivity growth?” than a Great Recession thesis of “Why aren’t the jobs coming back?”[4])
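The effective-marginal-tax-rate point in the list above has simple mechanics worth spelling out: what matters to a worker deciding whether to take a job is not the statutory tax rate but the fraction of each extra dollar lost to taxes plus phased-out benefits combined. A toy calculation (the rates here are hypothetical illustrations, not actual tax schedules):

```python
def effective_marginal_rate(tax_rate, benefit_phaseout_rate):
    """Fraction of one extra earned dollar lost to explicit taxes plus
    benefits withdrawn as income rises. Rates are hypothetical."""
    return tax_rate + benefit_phaseout_rate

# Hypothetical low-income family: 25% combined federal/state/city taxes,
# plus benefits phasing out at 45 cents per extra dollar earned.
rate = effective_marginal_rate(0.25, 0.45)
print(rate)  # ~0.70: the family keeps only about 30 cents per extra dollar
```

Even moderate-looking tax and phase-out rates stack into the 70%-and-worse territory the text mentions, which is why this could plausibly trash the incentive to seek reemployment.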

Q. Some of those ideas sounded more plausible than others, I have to say.

A. Well, it’s not like they could all be true simultaneously. There’s only a fixed effect size of unemployment to be explained, so the more likely it is that any one of these factors played a big role, the less we need to suppose that all the other factors were important; and perhaps what’s Really Going On is something else entirely. Furthermore, the ‘real cause’ isn’t always the factor you want to fix. If the European Union’s unemployment problems were ‘originally caused’ by labor market regulation, there’s no rule saying that those problems couldn’t be mostly fixed by instituting an NGDP level targeting regime. This might or might not work, but the point is that there’s no law saying that to fix a problem you have to fix its original historical cause.

Q. Regardless, if the engine of re-employment is broken for whatever reason, then AI really is killing jobs—a marginal job automated away by advances in AI algorithms won’t come back.

A. Then it’s odd to see so many news articles talking about AI killing jobs, when plain old non-AI computer programming and the Internet have affected many more jobs than that. The buyer ordering books over the Internet, the spreadsheet replacing the accountant—these processes do not rely strongly on the sort of algorithms that we would usually call ‘AI’ or ‘machine learning’ or ‘robotics’. The main role I can think of for actual AI algorithms is in computer vision enabling more automation; and many manufacturing jobs were already automated by robotic arms even before robotic vision came along. Most computer programming is not AI programming, and most automation is not AI-driven. And then on near-term scales, like changes over the last five years, trade shifts and financial shocks and new labor market entrants are more powerful economic forces than the slow continuing march of computer programming. (Automation is a weak economic force in any given year, but cumulative and directional over decades. Trade shifts and financial shocks are stronger forces in any single year, but might go in the opposite direction the next decade. Thus even generalized automation via computer programming is an unlikely culprit for any sudden drop in employment such as occurred in the Great Recession.)

Q. Okay, you’ve persuaded me that it’s ridiculous to point to AI while talking about modern-day unemployment. What about future unemployment?

A. Like after the next ten years? We might or might not see robot-driven cars, which would be genuinely based in improved AI algorithms, and would automate away another bite of human labor. Even then, the total number of people driving cars for money would just be a small part of the total global economy; most humans are not paid to drive cars most of the time. Also again: for AI or productivity growth or increased trade or immigration or graduating students to increase unemployment, instead of resulting in more hot dogs and buns for everyone, you must be doing something terribly wrong that you weren’t doing wrong in 1950.

Q. How about timescales longer than ten years? There was one class of laborers permanently unemployed by the automobile revolution, namely horses. There are a lot fewer horses nowadays because there is literally nothing left for horses to do that machines can’t do better; horses’ marginal labor productivity dropped below their cost of living. Could that happen to humans too, if AI advanced far enough that it could do all the labor?

A. If we imagine that in future decades machine intelligence is slowly going past the equivalent of IQ 70, 80, 90, eating up more and more jobs along the way… then I defer to Robin Hanson’s analysis in Economic Growth Given Machine Intelligence, in which, as the abstract says, “Machines complement human labor when [humans] become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do.”

Q. Could we already be in this substitution regime -

A. No, no, a dozen times no, for the dozen reasons already mentioned. That sentence in Hanson’s paper has nothing to do with what is going on right now. The future cannot be a cause of the past. Future scenarios, even if they seem to associate the concept of AI with the concept of unemployment, cannot rationally increase the probability that current AI is responsible for current unemployment.

Q. But AI will inevitably become a problem later?

A. Not necessarily. We only get the Hansonian scenario if AI is broadly, steadily going past IQ 70, 80, 90, etc., making an increasingly large portion of the population fully obsolete, in the sense that there is literally no job anywhere on Earth for them to do instead of nothing, because for every task they could do there is an AI algorithm or robot which does it more cheaply. That scenario isn’t the only possibility.

Q. What other possibilities are there?

A. Lots, since what Hanson is talking about is an unprecedented phenomenon extrapolated over future circumstances which have never been seen before, and there are all kinds of things which could potentially go differently within that. Hanson’s paper may be the first obvious extrapolation from conventional macroeconomics and steady AI trendlines, but that’s hardly a sure bet. Accurate prediction is hard, especially about the future, and I’m pretty sure Hanson would agree with that.

Q. I see. Yeah, when you put it that way, there are other possibilities. Like, Ray Kurzweil would predict that brain-computer interfaces would let humans keep up with computers, and then we wouldn’t get mass unemployment.

A. The future would be more uncertain than that, even granting Kurzweil’s hypotheses—it’s not as simple as picking one futurist and assuming that their favorite assumptions correspond to their favorite outcome. You might get mass unemployment anyway, if humans with brain-computer interfaces are more expensive or less effective than pure automated systems. With today’s technology we could design robotic rigs to amplify a horse’s muscle power—maybe; we’re still working on that tech for humans—but it took around an extra century after the Model T to get to that point, and a plain old car is much cheaper.

Q. Bah, anyone can nod wisely and say “Uncertain, the future is.” Stick your neck out, Yoda, and state your opinion clearly enough that you can later be proven wrong. Do you think we will eventually get to the point where AI produces mass unemployment?

A. My own guess is a moderately strong ‘No’, but for reasons that would sound like a complete subject change relative to all the macroeconomic phenomena we’ve been discussing so far. In particular, I refer you to “Intelligence Explosion Microeconomics: Returns on cognitive reinvestment”, a paper recently referenced on Scott Sumner’s blog as relevant to this issue.

Q. Hold on, let me read the abstract and… what the heck is this?

A. It’s an argument that you don’t get the Hansonian scenario or the Kurzweilian scenario, because if you look at the historical course of hominid evolution, try to assess the inputs of marginally increased cumulative evolutionary selection pressure versus the cognitive outputs of hominid brains, infer the corresponding curve of returns, and then ask about a reinvestment scenario -

Q. English.

A. Arguably, what you get is I. J. Good’s scenario, in which, once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level. This scenario is formally termed an ‘intelligence explosion’, informally ‘hard takeoff’ or ‘AI-go-FOOM’. The resulting predictions are strongly distinct from traditional economic models of accelerating technological growth (we’re not talking about Moore’s Law here). Since it should take advanced general AI to automate away most or all humanly possible labor, my guess is that AI will intelligence-explode to superhuman intelligence before there’s time for moderately-advanced AIs to crowd humans out of the global economy. (See also section 3.10 of the aforementioned paper.) Widespread economic adoption of a technology comes with a delay factor that wouldn’t slow down an AI rewriting its own source code. This means we don’t see the scenario of human programmers gradually improving broad AI technology past the 90, 100, 110-IQ thresholds. An explosion of AI self-improvement utterly derails that scenario and sends us onto a completely different track, which confronts us with wholly dissimilar questions.

Q. Okay. What effect do you think a superhumanly intelligent self-improving AI would have on unemployment, especially the bottom 25% who are already struggling now? Should we really be trying to create this technological wonder of self-improving AI, if the end result is to make the world’s poor even poorer? How is someone with a high-school education supposed to compete with a machine superintelligence for jobs?

A. I think you’re asking an overly narrow question there.

Q. How so?

A. You might be thinking about ‘intelligence’ in terms of the contrast between a human college professor and a human janitor, rather than the contrast between a human and a chimpanzee. Human intelligence more or less created the entire modern world, including our invention of money; twenty thousand years ago we were just running around with bows and arrows. And yet on a biological level, human intelligence has stayed roughly the same since the invention of agriculture. Going past human-level intelligence is change on a scale much larger than the Industrial Revolution, or even the Agricultural Revolution, both of which took place at a constant level of intelligence; human nature didn’t change. As Vinge observed, building something smarter than you implies a future that is fundamentally different in a way that you wouldn’t get from better medicine or interplanetary travel.

Q. But what does happen to people who were already economically disadvantaged, who don’t have investments in the stock market and who aren’t sharing in the profits of the corporations that own these superintelligences?

A. Um… we appear to be using substantially different background assumptions. The notion of a ‘superintelligence’ is not that it sits around in Goldman Sachs’s basement trading stocks for its corporate masters. The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome, which it can use to make second-stage nanotechnology, which manufactures third-stage nanotechnology, which manufactures diamondoid molecular nanotechnology, and then… well, it doesn’t really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill. A superintelligence with molecular nanotech does not wait for you to buy things from it in order to acquire money. It just moves atoms around into whatever molecular structures or large-scale structures it wants.

Q. How would it get the energy to move those atoms, if not by buying electricity from existing power plants? Solar power?

A. Indeed, one popular speculation is that the optimal use of a star system’s resources is to disassemble local gas giants (Jupiter, in our case) for the raw materials to build a Dyson Sphere, an enclosure that captures all of a star’s energy output. This does not involve buying solar panels from human manufacturers; rather, it involves self-replicating machinery which builds copies of itself on a rapid exponential curve -

Q. Yeah, I think I’m starting to get a picture of your background assumptions. So let me expand the question. If we grant that scenario, rather than the Hansonian scenario or the Kurzweilian scenario, what sort of effect does that have on humans?

A. That depends on the exact initial design of the first AI which undergoes an intelligence explosion. Imagine a vast space containing all possible mind designs. Now imagine that humans, who all have a brain with a cerebellum, a thalamus, a cerebral cortex organized into roughly the same areas, neurons firing at a top speed of 200 spikes per second, and so on, are one tiny little dot within this space of all possible minds. Different kinds of AIs can be vastly more different from each other than you are different from a chimpanzee. What happens after AI depends on what kind of AI you build—the exact selected point in mind design space. If you can solve the technical problems and wisdom problems associated with building an AI that is nice to humans, or nice to sentient beings in general, then we all live happily ever afterward. If you build the AI incorrectly… well, the AI is unlikely to end up with a specific hate for humans. But such an AI won’t attach a positive value to us either. “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.” The human species would end up disassembled for spare atoms, after which human unemployment would be zero. In neither alternative do we end up with poverty-stricken unemployed humans hanging around being sad because they can’t get jobs as janitors now that star-striding nanotech-wielding superintelligences are taking all the janitorial jobs. And so I conclude that advanced AI causing mass human unemployment is, all things considered, unlikely.

Q. Some of the background assumptions you used to arrive at that conclusion strike me as requiring additional support beyond the arguments you listed here.

A. I recommend Intelligence Explosion: Evidence and Import for an overview of the general issues and literature, Artificial Intelligence as a positive and negative factor in global risk for a summary of some of the issues around building AI correctly or incorrectly, and the aforementioned Intelligence Explosion Microeconomics for some ideas about analyzing the scenario of an AI investing cognitive labor in improving its own cognition. The last in particular is an important open problem in economics, if you’re a smart young economist reading this; although, since the fate of the entire human species could well depend on the answer, you would be foolish to expect there to be as many papers published about it as about squirrel migration patterns.[2] Nonetheless, bright young economists who want to say something important about AI should consider analyzing the microeconomics of returns on cognitive (re)investments, rather than post-AI macroeconomics, which may not actually exist, depending on the answer to the first question. Oh, and Nick Bostrom at the Oxford Future of Humanity Institute is supposed to have a forthcoming book on the intelligence explosion; that book isn’t out yet, so I can’t link to it, but Bostrom personally and FHI generally have published some excellent academic papers already.

Q. But to sum up, you think that AI is definitely not the issue we should be talking about with respect to unemployment.

A. Right. From an economic perspective, AI is a completely odd place to focus your concern about modern-day unemployment. From an AI perspective, modern-day unemployment trends are a moderately odd reason to be worried about AI. Still, it is scarily true that increased automation, like increased global trade or new graduates or anything else that ought properly to produce a stream of employable labor to the benefit of all, might perversely operate to increase unemployment if the broken reemployment engine is not fixed.

Q. And with respect to future AI… what is it you think, exactly?

A. I think that with respect to moderately more advanced AI, we probably won’t see intrinsic, unavoidable mass unemployment in the economic world as we know it. If re-employment stays broken and new college graduates continue to have trouble finding jobs, then there are plausible stories where future AI advances far enough (but not too far) to be a significant part of what’s freeing up new employable labor which bizarrely cannot be employed. I wouldn’t consider this my main-line, average-case guess; I wouldn’t expect to see it in the next 15 years or as the result of just robotic cars; and if it did happen, I wouldn’t call AI the ‘problem’ while central banks still hadn’t adopted NGDP level targeting. And then with respect to very advanced AI, the sort that might be produced by AI self-improving and going FOOM, asking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth. There would indeed be effects, but you’d be missing the point.

Q. Thanks for clearing that up.

A. No problem.

ADDED 8/30/13: Tyler Cowen’s reply to this was one I hadn’t listed:

Think of the machines of the industrial revolution as getting underway sometime in the 1770s or 1780s. The big wage gains for British workers don’t really come until the 1840s. Depending on your exact starting point, that is over fifty years of labor market problems from automation.

See here for the rest of Tyler’s reply.

Taken at face value this might suggest that if we wait 50 years everything will be all right. Kevin Drum replies that in 50 years there might be no human jobs left, which is possible but wouldn’t be an effect we’ve seen already, rather a prediction of novel things yet to come.

Though Tyler also says, “A second point is that now we have a much more extensive network of government benefits and also regulations which increase the fixed cost of hiring labor” and this of course was already on my list of things that could be trashing modern reemployment, unlike in the 1840s.

‘Brett’ in MR’s comments section also counter-claims:

The spread of steam-powered machinery and industrialization from textiles/mining/steel to all manner of British industries didn’t really get going until the 1830s and 1840s. Before that, it was mostly piecemeal, with some areas picking up the technology faster than others, while the overall economy didn’t change that drastically (hence the minimal changes in overall wages).

[1] The core idea in market monetarism is very roughly something like this: A central bank can control the total amount of money and thereby control any single economic variable measured in money, i.e., control one nominal variable. A central bank can’t directly control how many people are employed, because that’s a real variable. You could, however, try to control Nominal Gross Domestic Income (NGDI), the total amount that people have available to spend (as measured in your currency). If the central bank commits to an NGDI level target, then any shortfalls are made up the next year—if your NGDI growth target is 5% and you only get 4% in one year, then you try for 6% the year after that. NGDI level targeting would mean that all the companies would know that, collectively, all the customers in the country would have 5% more money (measured in dollars) to spend in the next year than in the previous year. This is usually called “NGDP level targeting” for historical reasons (NGDP is the other side of the equation, what the earned dollars are being spent on), but the most advanced modern form of the idea is probably “level-targeting a market forecast of per-capita NGDI”. Why this is the best nominal variable for central banks to control is a longer story, and for that you’ll have to read up on market monetarism. I will note that if you were worried about hyperinflation back when the Federal Reserve started dropping US interest rates to almost zero and buying government bonds by printing money… well, you really should note that (a) most economists said this wouldn’t happen, (b) the market spreads on inflation-protected Treasuries said that the market was anticipating very low inflation, and (c) we then actually got inflation below the Fed’s 2% target. You can argue with economists. You can even argue with the market forecast, though in this case you ought to bet money on your beliefs. But when your fears of hyperinflation are disagreed with by economists, the market forecast, and observed reality, it’s time to give up on the theory that generated the false prediction. In this case, market monetarists would have told you not to expect hyperinflation because NGDP/NGDI was collapsing, and this constituted (overly) tight money regardless of what interest rates or the monetary base looked like.
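The catch-up arithmetic of level targeting, as opposed to growth-rate targeting, can be sketched numerically. This is a toy illustration with hypothetical numbers (a 5% target path starting from an index level of 100), not a model of any real central bank’s procedure:

```python
# Toy sketch of NGDI *level* targeting: the bank commits to a level path,
# so a shortfall in one year raises the growth target for the next year.

def target_path(base, rate, years):
    """The pre-committed level path: base * (1 + rate)**t for each year t."""
    return [base * (1 + rate) ** t for t in range(years + 1)]

def next_growth_target(current_level, path_level_next_year):
    """Growth needed next year to get back onto the target path."""
    return path_level_next_year / current_level - 1

path = target_path(100.0, 0.05, 2)   # target levels: 100, 105, 110.25
actual = 100.0 * 1.04                # actual NGDI only grew 4% in year 1
catch_up = next_growth_target(actual, path[2])
print(f"Year-2 growth target: {catch_up:.2%}")
```

Under a pure growth-rate target, the 1% shortfall would simply be forgotten and the bank would aim for 5% again; under a level target, `catch_up` comes out at roughly 6%, matching the footnote’s “you try for 6% the year after that.”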

[2] Call me a wacky utopian idealist, but I wonder if it might be genuinely politically feasible to reduce marginal taxes on the bottom 20%, if economists on both sides of the usual political divide got together behind the idea that income taxes (including payroll taxes) on the bottom 20% are (a) immoral and (b) do economic harm far out of proportion to the government revenue generated. This would also require some amount of decreased taxes on the next quintile in order to avoid high marginal tax rates; i.e., if you suddenly start paying $2000/year in taxes as soon as your income goes from $19,000/year to $20,000/year, then that was a 200% tax rate on that particular extra $1000 earned. The lost tax revenue must be made up somewhere else. In the current political environment this probably requires higher income taxes on higher wealth brackets rather than anything more creative. But if we allow ourselves to discuss economic dreamworlds, then income taxes, corporate income taxes, and capital-gains taxes are all very inefficient compared to consumption taxes, land taxes, and basically anything but income and corporate taxes. This is true even from the perspective of equality; a rich person who earns lots of money, but invests it all instead of spending it, is benefiting the economy rather than themselves, and should not be taxed until they try to spend the money on a yacht, at which point you charge a consumption tax or luxury tax (even if that yacht is listed as a business expense, which should make no difference; consumption is not more moral when done by businesses instead of individuals). If I were given unlimited powers to try to fix the unemployment thing, I’d be reforming the entire tax code from scratch to present the minimum possible obstacles to exchanging one’s labor for money, and as a second priority minimize obstacles to compound reinvestment of wealth. But trying to change anything on this scale is probably not politically feasible relative to a simpler, more understandable crusade to “Stop taxing the bottom 20%; it harms our economy because they’re customers of all those other companies, and it’s immoral because they get a raw enough deal already.”
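The 200% marginal-rate arithmetic above can be made concrete with a toy calculation. The cliff schedule below is hypothetical, chosen only to match the footnote’s numbers, and is not any real country’s tax code:

```python
# Toy illustration of a benefit/tax cliff: $2,000 of tax switches on
# all at once at $20,000 of income, so earning the last $1,000 gross
# actually *reduces* take-home pay.

def tax(income):
    """Hypothetical cliff schedule: zero tax below $20k, flat $2k at/above."""
    return 2000.0 if income >= 20000 else 0.0

def after_tax(income):
    return income - tax(income)

extra_earned = 20000 - 19000                       # $1,000 more gross income
extra_kept = after_tax(20000) - after_tax(19000)   # change in take-home pay
marginal_rate = (extra_earned - extra_kept) / extra_earned
print(f"Effective marginal rate on that $1,000: {marginal_rate:.0%}")
```

Here `extra_kept` is negative $1,000 (take-home pay falls from $19,000 to $18,000), so the effective marginal rate on the extra $1,000 works out to 200%, as in the footnote.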

[3] Two possible forces for significant technological change in the 21st century would be robotic cars and electric cars. Imagine a city with an all-robotic, all-electric car fleet, dispatching light cars with only the battery sizes needed for the journey, traveling at much higher speeds with no crash risk and much lower fuel costs… and lowering rents by greatly extending the effective area of a city, i.e., extending the physical distance you can live from the center of the action while still getting to work on time because your average speed is 75 mph. What comes to mind when you think of robotic cars? Google’s prototype robotic cars. What comes to mind when you think of electric cars? Tesla. In both cases we’re talking about ascended, post-exit Silicon Valley moguls trying to create industrial progress out of the goodness of their hearts, using money they earned from Internet startups. Can you sustain a whole economy based on what Elon Musk and Larry Page decide are cool?

[4] Currently the conversation among economists is more like “Why has total factor productivity growth slowed down in developed countries?” than “Is productivity growing so fast due to automation that we’ll run out of jobs?” Ask them the latter question and they will, with justice, give you very strange looks. Productivity isn’t growing at high rates, and if it were, that ought to cause employment rather than unemployment. This is why the Great Stagnation in productivity is one possible explanatory factor in unemployment, albeit (as mentioned) not a very good explanation for why we can’t get back the jobs lost in the Great Recession. The idea would have to be that some natural rate of productivity growth and sectoral shift is necessary for re-employment to happen after recessions, and we’ve lost that natural rate; but so far as I know this is not conventional macroeconomics.