Possible takeaways from the coronavirus pandemic for slow AI takeoff

(Cross-posted from personal blog. Summarized in Alignment Newsletter #104. Thanks to Janos Kramar for his helpful feedback on this post.)

Epistemic status: fairly speculative, would appreciate feedback

As the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI’s covid-19 page, which includes expert interviews on the implications of the pandemic for other types of risks.

A key element in AI risk scenarios is the speed of takeoff—whether advanced AI is developed gradually or suddenly. Paul Christiano’s post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: “There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.” It argues that slow AI takeoff is more likely than fast takeoff, but not necessarily easier to manage, since it poses different challenges, such as the need for large-scale coordination. This post expands on this point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic offered all of these opportunities, but most of the world’s institutions failed to take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome.
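
To unpack the quoted criterion a bit, in informal notation: write $Y(t)$ for world output at time $t$ (in years). One natural reading is that some 4-year doubling completes before the first 1-year doubling interval begins:

$$\exists\, t \;\text{ such that }\; Y(t+4) \ge 2\,Y(t) \;\text{ and }\; t + 4 \le s,$$

where $[s,\, s+1]$ is the first 1-year interval with $Y(s+1) \ge 2\,Y(s)$. In other words, by the time growth is fast enough to double world output within a single year, the world has already gone through a slower but still dramatic AI-driven doubling.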

  1. Learning from experience. In the slow takeoff scenario, general AI is expected to appear in a world that has already experienced transformative change from less advanced AI, and institutions will have a chance to learn from problems with those AI systems. An analogy can be made with learning from less “advanced” epidemics like SARS, which were not as successful as covid-19 at spreading across the world. While some useful lessons were learned, they were not successfully generalized to covid-19, which had somewhat different properties from these previous pathogens (such as asymptomatic transmission and higher transmissibility). Similarly, general AI may have somewhat different properties from less advanced AI systems, making mitigation strategies difficult to generalize.

  2. Warning signs. In the coronavirus pandemic response, there has been a lot of variance in how successfully governments acted on warning signs. Western countries had at least a month of warning while the epidemic was spreading in China, which they could have used to stock up on PPE and build up testing capacity, but most did not do so. Experts have warned about the likelihood of a coronavirus outbreak for many years, but this did not lead most governments to stock up on medical supplies. This was a failure to take cheap preventative measures in response to advance warnings about a widely recognized risk with tangible consequences, which is not a good sign for cases where the risk is less tangible and less well understood (such as risk from general AI).

  3. Consensus on the problem. During the covid-19 pandemic, the abundance of warning signs and past experience with previous pandemics created an opportunity for a timely consensus that there was a serious problem. However, it actually took a long time for a broad consensus to emerge—the virus was often dismissed as “overblown” and “just like the flu” as late as March 2020. A timely response to the risk required acting before there was a consensus, and thus risking the appearance of overreacting. I think we can expect the same with advanced AI. As in the covid-19 discourse, there is an unfortunate dynamic where those who take a dismissive position on risks from advanced AI are often seen as cautious, prudent skeptics, while those who advocate early action are portrayed as “panicking” and overreacting. The “moving goalposts” effect, where new advances in AI are dismissed as not being real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the “no fire alarm” hypothesis to hold in the slow takeoff scenario—there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as “overblown” until it is too late to address them.

We can hope that the transformative technological change involved in the slow takeoff scenario will also help create more competent institutions without these weaknesses. We might expect that institutions unable to adapt to the fast pace of change will be replaced by more competent ones. However, we could also see an increasingly chaotic world in which existing institutions fail to adapt and better institutions are not formed quickly enough to replace them. Success in the slow takeoff scenario depends on institutional competence and large-scale coordination. Unless more competent institutions are in place by the time general AI arrives, it is not clear to me that slow takeoff would be much safer than fast takeoff.