Three main views on the future of AI
Expert opinions about future AI development span a wide range, from predictions that we will soon reach ASI and humanity will go extinct, to predictions that AI progress will soon plateau, leaving us with weaker AI that presents far more mundane risks and benefits. However, non-experts often encounter only a single narrative, depending on factors like their social circles and social media feeds.
In our paper, we propose a taxonomy that divides expert views on AI into three main clusters. We hope this provides a concise and digestible way to explain the most common views and the crucial disagreements between them.
You can read the paper at doctrines.ai. Here is a brief summary:
Dominance doctrine. The Dominance doctrine predicts that the first actor to develop sufficiently advanced AI will gain a decisive strategic advantage over all others. This prediction follows naturally from two beliefs commonly held by experts in this category: that AI can itself accelerate AI research, cementing the leader’s advantage in a race to ASI, and that ASI provides a decisive military advantage over opponents.
Extinction doctrine. The Extinction doctrine predicts that humanity will lose control over ASI, resulting in humanity’s extinction or permanent disempowerment. It more or less agrees with the Dominance doctrine about the pace and scope of AI development, but it predicts that robust control methods for such powerful AI will not be developed in time.
Replacement doctrine. This doctrine assumes that AI development will soon plateau and that ASI will not be developed in the near future. Expectations about the effects of weaker AI are much more fragmented. Such systems could clearly accelerate scientific and economic progress, but there are also many concerns that they could cause geopolitical and economic destabilization, including widespread unemployment, extreme concentrations of power, and large-scale micro-targeted manipulation.
The vast majority of experts who believe that ASI will be developed soon fall under the Dominance or Extinction doctrines. There is broad agreement that an ASI-wielding actor would hold a decisive strategic advantage over all others; the question then is whether those deploying ASI systems would be able to maintain control of them (Dominance doctrine), or lose control of them, resulting in humanity’s permanent disempowerment and likely extinction (Extinction doctrine).
Those who believe that ASI will not be developed soon generally argue that AI progress is about to plateau; they belong to the Replacement doctrine. This does not necessarily imply a laissez-faire approach toward AI development, since weaker AI still poses many risks that we are not well equipped to deal with.