Possible takeaways from the coronavirus pandemic for slow AI takeoff

Epistemic status: fairly speculative, would appreciate feedback

As the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI’s covid-19 page, which includes expert interviews on the implications of the pandemic for other types of risks.

A key element in AI risk scenarios is the speed of takeoff: whether advanced AI is developed gradually or suddenly. Paul Christiano’s post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: “There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.” The post argues that slow AI takeoff is more likely than fast takeoff, but is not necessarily easier to manage, since it poses different challenges, such as large-scale coordination. This post expands on that point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic had these properties, but most of the world’s institutions did not take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome.
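To make that definition concrete, here is a quick back-of-the-envelope sketch (my own illustration, not from Christiano’s post) of the annual growth rates implied by a 4-year versus a 1-year doubling of world output:

```python
# Compound-growth arithmetic for the doubling-interval definition above.
# A "4 year doubling" of world output corresponds to a constant annual growth
# rate g satisfying (1 + g) ** 4 == 2; a "1 year doubling" to (1 + g) ** 1 == 2.

def annual_growth_rate(doubling_years: float) -> float:
    """Yearly growth rate g such that output doubles over `doubling_years` years."""
    return 2 ** (1 / doubling_years) - 1

for years in (4, 1):
    print(f"{years}-year doubling -> ~{annual_growth_rate(years):.0%} growth per year")
# 4-year doubling -> ~19% growth per year
# 1-year doubling -> ~100% growth per year
```

For reference, world output has recently grown at roughly 3% per year, so even the “slow” scenario describes economic change far faster than anything in recent history.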

  1. Learning from experience. In the slow takeoff scenario, general AI is expected to appear in a world that has already experienced transformative change from less advanced AI, and institutions will have a chance to learn from problems with these AI systems. An analogy can be drawn with learning from less “advanced” epidemics like SARS, which were not as successful as covid-19 at spreading across the world. While some useful lessons were learned, they were not successfully generalized to covid-19, which had somewhat different properties from these previous pathogens (such as asymptomatic transmission and higher transmissibility). Similarly, general AI may have somewhat different properties from less advanced AI that would make mitigation strategies harder to generalize.

  2. Warning signs. In the coronavirus pandemic response, there has been a lot of variance in how successfully governments acted on warning signs. Western countries had at least a month of warning while the epidemic was spreading in China, which they could have used to stock up on PPE and build up testing capacity, but most did not do so. Experts had warned about the likelihood of a coronavirus outbreak for many years, but this did not lead most governments to stockpile medical supplies. This was a failure to take cheap preventative measures in response to advance warnings about a widely recognized risk with tangible consequences, which is not a good sign for cases where the risk is less tangible and less well understood (such as risk from general AI).

  3. Consensus on the problem. During the covid-19 epidemic, the abundance of warning signs and past experience with previous pandemics created an opportunity for a timely consensus that there is a serious problem. However, it actually took a long time for a broad consensus to emerge: the virus was often dismissed as “overblown” and “just like the flu” as late as March 2020. A timely response to the risk required acting before there was a consensus, and thus risking the appearance of overreacting to the problem. I think we can also expect this to happen with advanced AI. As in the discussion of covid-19, there is an unfortunate irony: those who take a dismissive position on advanced AI risks are often seen as cautious, prudent skeptics, while those who advocate early action are portrayed as “panicking” and overreacting. The “moving goalposts” effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the “no fire alarm” hypothesis to hold in the slow takeoff scenario: there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as “overblown” until it is too late to address them.

We can hope that the transformative technological change involved in the slow takeoff scenario will also help create more competent institutions without these weaknesses. We might expect that institutions unable to adapt to the fast pace of change will be replaced by more competent ones. However, we could also see an increasingly chaotic world where institutions fail to adapt, without better institutions being formed quickly enough to replace them. Success in the slow takeoff scenario depends on institutional competence and large-scale coordination. Unless more competent institutions are in place by the time general AI arrives, it is not clear to me that slow takeoff would be much safer than fast takeoff.

(Cross-posted from my personal blog. Thanks to Janos Kramar for his helpful feedback on this post.)