Like MIRI, the authors were interested in coping with a threat that the world had never faced before (at least on such a large scale), and one that could arrive with little notice. The authors make the point that people tend to think in terms of linear growth and decay rather than exponential growth and decay.
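To make that concrete, here is a minimal sketch (using a hypothetical 3% annual growth rate, not a figure from the book) of how quickly linear intuition drifts away from an exponential trend:

```python
# Compare linear extrapolation of the first year's increase
# against true compounding at a hypothetical 3% annual rate.
initial = 100.0
rate = 0.03
first_year_gain = initial * rate

for years in (10, 25, 50, 100):
    linear = initial + first_year_gain * years    # constant yearly gain
    exponential = initial * (1 + rate) ** years   # compounding growth
    print(f"{years:>3} yrs: linear {linear:7.1f}   exponential {exponential:8.1f}")
```

After a century, the linear projection reaches 400 while the compounding one exceeds 1900; a forecaster with linear intuitions underestimates the outcome nearly fivefold.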
I sometimes think of “Limits to Growth” and “Unfriendly AI” as rivals for the trophy of worst existential risk, with the folks who are most concerned about one of the risks viewing the folks concerned about the other with deep suspicion. (“Huh… your pet disaster will never happen; we should be worrying about mine instead!”).
It’s certainly useful to identify the common ground between the camps. Both scenarios involve taking existing exponential trends very seriously. In both cases, society as a whole has not been taking the trends seriously (or is even actively dismissing or ridiculing the concerns about the future). In both cases, the lack of preparedness makes good outcomes unlikely.