Do Earths with slower economic growth have a better chance at FAI?

I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the “Small is Beautiful” / “Sustainable Growth” crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.

And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.

But suppose my mainline projection is correct and the “probability of an OK outcome” / “astronomical benefit” scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn’t get Friendly AI is that the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement is too great for the time available, with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI does. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. I have sometimes thought, half-jokingly and half-anthropically, that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession in which the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.

Roughly, it seems to me like higher economic growth speeds up time, and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.

I have various cute ideas for things that could improve a country’s economic growth. The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people. I was thinking about collecting them into a post called “The Nice Things We Can’t Have”, based on my prediction that various forces will block, e.g., the all-robotic, all-electric car grid that would be relatively trivial to build using present-day technology; that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore. However, I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny. And it’s not completely impossible that we’ll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it’ll be because China or Dubai or New Zealand tried it first). Other EAs (effective altruists) care much more strongly about economic growth and are trying to increase it directly (an extremely understandable position, and one typically taken by good and virtuous people).

Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide “But what if MIRI accomplishes the opposite of its purpose due to blah”), but in this case I feel impelled to ask, because my mainline visualization has the Great Stagnation being good news. I certainly wish that economic growth would align with FAI, because then my virtues would align and my optimal policies would have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.

To head off some obvious types of bad reasoning in advance: Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally, which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.

Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier. But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI. The same goes for the more mundane idea that increased economic growth will produce more geniuses, some of whom can work on FAI: there would also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial research depth. If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on balance.

So I pose the question: “Is slower economic growth good news?”, or “Do you think Everett branches with 4% RGDP growth or with 1% RGDP growth have a better chance of getting FAI before UFAI?” So far as I can tell, my current mainline guesses imply: “Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research.”
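To make the “more time on the clock” intuition concrete, here is a deliberately toy sketch (my own illustration; the threshold and growth figures below are made up, not drawn from any actual forecast). It assumes UFAI arrives once cumulative AI-field effort crosses some fixed threshold, and that annual effort grows in proportion to the economy; under those assumptions, the 1%-growth branch simply takes more calendar years to reach the threshold than the 4%-growth branch.

```python
# Toy sketch (my illustration, not from the post; all numbers are made up).
# Assumption: UFAI arrives once cumulative AI-field effort crosses a fixed
# threshold, and annual effort grows in proportion to the economy at rate g.
# Under that assumption, slower growth stretches the calendar time before the
# threshold is crossed: the "more time on the clock" intuition.

def years_until_threshold(growth_rate: float, threshold_multiple: float) -> int:
    """Years until cumulative effort reaches `threshold_multiple` times this
    year's effort, with annual effort growing at `growth_rate` per year."""
    effort = 1.0        # this year's effort, in arbitrary units
    cumulative = 0.0
    years = 0
    while cumulative < threshold_multiple:
        cumulative += effort
        effort *= 1.0 + growth_rate
        years += 1
    return years

if __name__ == "__main__":
    threshold = 50.0    # hypothetical: UFAI needs 50 "current-years" of cumulative effort
    for g in (0.01, 0.04):
        print(f"RGDP growth {g:.0%}: ~{years_until_threshold(g, threshold)} years on the clock")
    # Prints roughly 41 years at 1% growth versus 29 years at 4% growth,
    # i.e. about a decade of extra serial time in the slower-growth branch.
```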

This seems like a good parameter to have a grasp on for any number of reasons, and I can’t recall it previously being debated in the x-risk / EA community.

EDIT: To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

EDIT 2: Carl Shulman’s opinion can be found in the Facebook discussion here.