Is There a Valley of Bad Civilizational Adequacy?

This post is my attempt to think through an idea I’ve been mulling over since this discussion on Twitter last November, prompted by a question from Matthew Barnett; I was reminded of it while reading the section on energy in Zvi’s recent post on the war in Ukraine. The title, “valley of bad civilizational adequacy,” refers to the possibility that as one relaxes the constraints on the bounded rationality of hypothetical future collective policy decisions, the expected utility captured by humans may at first decrease, due to increased existential risk from unaligned AGI, before beginning the climb toward the peak of optimal rationality.
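To make that shape concrete, here is a minimal toy sketch in Python. Every functional form and number in it is invented for illustration (the adequacy parameter, the quadratic risk curve, and the utility weights are all assumptions, not estimates); the point is only that plausible-looking assumptions can produce a dip in expected utility before the climb.

```python
# Toy model of the "valley": expected utility as a function of a hypothetical
# civilizational-adequacy parameter a in [0, 1]. All functional forms and
# numbers are invented for illustration; nothing here is an estimate.

def p_doom(a, baseline=0.3, accel=0.4, safety=0.6):
    """Hypothetical P(unaligned AGI): more adequacy accelerates capabilities
    (raising risk roughly linearly), but at higher adequacy it also buys
    enough alignment progress to pull risk back down (quadratic term)."""
    return min(1.0, max(0.0, baseline + accel * a - safety * a ** 2))

def expected_utility(a, u_aligned=100.0, u_pre_agi=10.0):
    """EV = value of reaching aligned AGI, weighted by the odds of doing so,
    plus a modest pre-AGI bonus from living in a richer civilization."""
    return (1 - p_doom(a)) * u_aligned + u_pre_agi * a

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"adequacy={a:.2f}  p_doom={p_doom(a):.2f}  EV={expected_utility(a):.1f}")
# Under these made-up parameters, EV dips around a = 0.25 before rising:
# that dip is the "valley of bad civilizational adequacy."
```

Whether the real curve has any such dip, and how deep it is, is exactly what the rest of this post is about.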

Preliminary definition: By pro-growth policy I mean a cluster of public-policy proposals that aren’t directly about AI but could shorten the timeline to AGI: looser immigration restrictions, particularly for high-skill workers; cheaper, denser housing, especially in the SF Bay Area; and cheap energy, via a large build-out of nuclear and renewable generation capacity. (Is there anything else that fits this category?)

Main argument

  1. Pro-growth policy can be expected to accelerate AI capabilities research and therefore shorten the timeline to AGI, via the agglomeration effects of more smart people in dense urban tech hubs, lower energy costs for running ML experiments, and overall economic growth that creates more lucrative investment opportunities and therefore more research funding.

  2. Having less time to solve AI safety would increase AI X-risk by more than any offsetting decrease from pro-growth policy also accelerating AI-alignment research.

  3. AI X-risk dominates all other considerations.

  4. Therefore pro-growth policy is bad; AI-risk-alarmist rationalists should not support it, and perhaps should actively oppose it.

Possible counterarguments

(Other than counterarguments against premise 3; the intended audience of this post is people who already accept it.)

The main argument depends on

  • the degree to which AI-risk alarmists can influence pro-growth policy,

  • the effect of such changes in pro-growth policy on the timeline to AGI,

  • and the effect of such changes in the AGI timeline on our chances of solving alignment.

One or more of these could be small enough that the AI-risk community’s stance on pro-growth policy is of negligible consequence.

Perhaps pro-growth policy won’t matter because the AGI timeline will be very short, not allowing time for any major political changes and their downstream consequences to play out before the singularity.

Perhaps it’s bad to oppose pro-growth policy because the AGI timeline will be very long: If we have plenty of time, there’s no need to suffer from economic stagnation in the meantime. Furthermore, sufficiently severe stagnation could lead to technological regress, political destabilization that sharply increases and prolongs unnecessary pre-singularity misery, or even the failure of human civilization to ever escape earth.

Even without a very long AGI timeline, perhaps the annual risk of cascading economic and political instability due to tech stagnation, leading to permanent civilizational decline, is so high that it outweighs increased AI X-risk from shortening the AGI timeline.
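
As a rough illustration of the arithmetic this counterargument turns on (every number below is a hypothetical placeholder, not an estimate), the comparison is between the cumulative probability of permanent decline from stagnation over the remaining timeline and the one-time increment to AI X-risk from shortening that timeline:

```python
# Back-of-the-envelope comparison; every number is a hypothetical placeholder.
annual_decline_risk = 0.002   # assumed P(permanent civilizational decline) per stagnant year
timeline_years = 30           # assumed AGI timeline under stagnation
delta_xrisk = 0.03            # assumed added P(doom) from accelerating the timeline

cumulative_decline_risk = 1 - (1 - annual_decline_risk) ** timeline_years
print(f"Cumulative decline risk over {timeline_years} years: {cumulative_decline_risk:.3f}")
print(f"Added AI X-risk from acceleration: {delta_xrisk:.3f}")
# With these particular inputs the stagnation risk (~0.058) exceeds the added
# X-risk (0.030); modestly different inputs flip the comparison, which is why
# the counterargument hinges on the actual magnitudes.
```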

Perhaps there is no valley of bad civilizational adequacy, or only a very shallow one: A civilization adequate enough to get the rational pursuit of growth right might be sufficiently likely to also get AI risk right that pro-growth policy is positive-EV. E.g. more smart people in dense urban tech hubs might accelerate AI-safety research enough to outweigh the increased risk from also accelerating capabilities research. (This seems less implausible w.r.t. housing and immigration policy than energy policy, since running lots of expensive large-scale ML experiments seems to me particularly likely to advance capabilities more than safety.)

I find the cumulative weight of these counterarguments underwhelming, but I also find the conclusion of the main argument very distasteful, and it certainly seems to run counter to the prevailing wisdom of the AI-risk community. Perhaps I am missing something?