Do Earths with slower economic growth have a better chance at FAI?

I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the “Small is Beautiful” / “Sustainable Growth” crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.

And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.

But suppose my main-line projection is correct and the “probability of an OK outcome” / “astronomical benefit” scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn’t get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement, with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, whereas UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.

Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.
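As a back-of-the-envelope illustration of the “speeding up time” effect (all numbers made up, not a forecast): suppose annual world AI research effort $e_0$ grows with the economy at rate $g$, and UFAI arrives once cumulative effort crosses some fixed threshold $T$. Then the number of years $n$ left on the clock satisfies

$$e_0 \, \frac{(1+g)^n - 1}{g} \;\ge\; T \quad\Longrightarrow\quad n \;\ge\; \frac{\ln\!\bigl(1 + g\,T/e_0\bigr)}{\ln(1+g)}.$$

For an illustrative threshold of $T/e_0 = 40$, that works out to roughly 25 years at 4% growth versus roughly 34 years at 1% growth: slower growth buys on the order of a decade of extra serial research time before the same cumulative-effort threshold is crossed.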

I have various cute ideas for things which could improve a country’s economic growth. The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people. I was thinking about collecting them into a post called “The Nice Things We Can’t Have”, based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology—that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore. However, I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny. And it’s not completely impossible that we’ll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it’ll be because China or Dubai or New Zealand tried it first). Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly. (An extremely understandable position which would typically be taken by good and virtuous people.)

Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide “But what if MIRI accomplishes the opposite of its purpose due to blah”), but in this case I feel impelled to ask, because my mainline visualization has the Great Stagnation being good news. I certainly wish that economic growth would align with FAI, because then my virtues would align and my optimal policies would have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.

To head off some obvious types of bad reasoning in advance: Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally, which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.

Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier. But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI. The same goes for the more mundane idea that increased economic growth will produce more geniuses, some of whom can work on FAI: there’d also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research. If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on balance.

So I pose the question: “Is slower economic growth good news?” or “Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI?” So far as I can tell, my current mainline guesses imply, “Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research”.
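A minimal sketch of how I’m picturing that race, with invented parameters (the serial-depth figure, the effort threshold, and the growth-scaling assumption are all hypothetical, not estimates): FAI completion is treated as limited by a roughly fixed number of serial research-years that extra funding barely compresses, while UFAI completion is treated as limited by cumulative parallelizable effort that scales with the size of the economy.

```python
# Toy model, purely illustrative: UFAI "arrives" when cumulative research
# effort crosses a fixed threshold; annual effort compounds with economic
# growth. FAI is modeled as needing a fixed stretch of serial research-years
# that is roughly insensitive to the growth rate. All numbers are made up.

def years_until_ufai(growth_rate, initial_effort=1.0, threshold=40.0):
    """Years until cumulative (growth-compounded) effort crosses the threshold."""
    cumulative, effort, years = 0.0, initial_effort, 0
    while cumulative < threshold:
        cumulative += effort
        effort *= 1.0 + growth_rate
        years += 1
    return years

FAI_SERIAL_YEARS = 30  # assumed serial depth; a made-up figure

for g in (0.04, 0.01):
    deadline = years_until_ufai(g)
    verdict = "FAI finishes first" if FAI_SERIAL_YEARS <= deadline else "UFAI wins the race"
    print(f"growth {g:.0%}: UFAI in ~{deadline} yr vs FAI in ~{FAI_SERIAL_YEARS} yr -> {verdict}")
```

Under these made-up numbers the 1% branch leaves enough clock for the serially limited project to finish first and the 4% branch does not; the point is the shape of the intuition, not a prediction.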

This seems like a good parameter to have a grasp on for any number of reasons, and I can’t recall it previously being debated in the x-risk / EA community.

EDIT: To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

EDIT 2: Carl Shulman’s opinion can be found in the Facebook discussion here.