A Visualization of Nick Bostrom’s Superintelligence

Through a series of diagrams, this article walks through key concepts in Nick Bostrom’s Superintelligence. The book is dense, and though well written, its scope and depth can make the concepts difficult to grasp and hold together mentally. The motivation behind these diagrams is not to repeat an explanation of the content, but to present it in a way that makes the connections clear. Thus, this article is best read as a supplement to Superintelligence.

Note: Superintelligence is now available in the UK. The hardcover is coming out in the US on September 3. The Kindle version is already available in the US as well as the UK.

Roadmap: there are two diagrams, each presented with an accompanying description. The two diagrams are combined into one mega-diagram at the end.

Figure 1: Pathways to Superintelligence

Figure 1 displays the five pathways toward superintelligence that Bostrom describes in chapter 2 and returns to in chapter 14 of the text. According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence. Biological cognition, i.e., the enhancement of human intelligence, may yield a weak form of superintelligence on its own. Additionally, improvements to biological cognition could feed back into driving the progress of artificial intelligence or whole brain emulation. The arrows from networks and organizations likewise indicate technologies feeding back into AI and whole brain emulation development.

Artificial intelligence and whole brain emulation are the two pathways that can lead to fully realized superintelligence. Note that neuromorphic AI is listed under artificial intelligence, but an arrow connects whole brain emulation to it: in chapter 14, Bostrom suggests that neuromorphic AI is a potential outcome of incomplete or improper whole brain emulation. Synthetic AI covers all the approaches to AI that are not neuromorphic; other terms that have been used are algorithmic or de novo AI.
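For readers who find it easier to see structure in code, the arrows of Figure 1 can be transcribed as a small directed graph. The sketch below is illustrative only and not from the book; the node labels are informal shorthand for the pathways named above.

```python
# The pathway arrows of Figure 1, transcribed as an adjacency list.
# Each edge reads "source feeds into target"; labels are informal shorthand.
pathways = {
    "biological cognition": ["weak superintelligence", "AI", "whole brain emulation"],
    "networks and organizations": ["AI", "whole brain emulation"],
    "brain-computer interfaces": [],  # unlikely to yield superintelligence
    "AI (synthetic or neuromorphic)": ["superintelligence"],
    "whole brain emulation": ["superintelligence", "neuromorphic AI"],
}

# Print every arrow in the diagram.
for source, targets in pathways.items():
    for target in targets:
        print(f"{source} -> {target}")
```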

Figure 1 also includes some properties of superintelligence. Regarding its capabilities, Bostrom discusses the software and hardware advantages of a superintelligence in chapter 3, when describing possible forms of superintelligence. In chapter 6, Bostrom discusses the superpowers a superintelligence may have. The term “task-specific superpowers” refers to Table 8, which lists tasks (e.g., strategizing or technology research) and the corresponding skill sets (e.g., forecasting, planning, or designing) that a superintelligence may have. Capability control, discussed in chapter 9, is the limitation of a superintelligence’s abilities; it is a response to the problem of preventing undesirable outcomes. As that problem is one for human programmers to analyze and address, capability control appears in green.

In addition to what a superintelligence might do, Bostrom discusses why it would do those things, i.e., what its motives would be. There are two main theses, the orthogonality thesis and the instrumental convergence thesis, both of which are expanded upon in chapter 7. Motivation selection, found in chapter 9, is another method of avoiding undesirable outcomes. Motivation selection is the loading of desirable goals and purposes into the superintelligence, which could potentially render capability control unnecessary. As motivation selection is another problem for human programmers, it also appears in green.

Figure 2: Outcomes of Superintelligence

Figure 2 maps the types of superintelligence to the outcomes. It also introduces some terminology that goes beyond the general properties of superintelligence, and it divides the types of superintelligence further. Two axes divide superintelligence. One is polarity, i.e., whether a singleton or a multipolar scenario arises. The other is the distinction between friendly and unfriendly superintelligence. Polarity sits between superintelligence properties and outcomes: it depends on a combination of human actions and the design of the superintelligence, as well as on the actions of the superintelligence itself. Thus, polarity terms appear in both the superintelligence and the outcomes areas of Figure 2. Since safety profiles are a consequence of many components of superintelligence, those terms appear in the outcomes area.

Bostrom describes singletons in the most detail. An unfriendly singleton leads to existential risks, including the scenarios Bostrom describes in chapter 8. In contrast, a friendly superintelligence leads to acceptable outcomes. Acceptable outcomes are not envisioned in as much detail as existential risks; however, chapter 13 discusses how a superintelligence operating under coherent extrapolated volition or one of the morality models would behave. This could be seen as an illustration of what a successful attempt at friendly superintelligence would yield. A multipolar scenario is more difficult to predict; Bostrom puts forth various visions in chapter 11. The one that receives the most attention is the algorithmic economy, based on Robin Hanson’s ideas.
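The mapping described above can likewise be sketched as a small lookup over Figure 2’s two axes. Again, this is only an illustrative paraphrase of the prose, not terminology from the book.

```python
def outcome(polarity: str, friendly: bool = True) -> str:
    """Paraphrase Figure 2's mapping from the two axes to an outcome."""
    if polarity == "multipolar":
        # Multipolar scenarios are harder to predict (chapter 11).
        return "hard to predict; e.g., a Hanson-style algorithmic economy"
    if friendly:
        # A friendly singleton (chapter 13).
        return "acceptable outcomes; e.g., CEV or a morality model"
    # An unfriendly singleton (chapter 8).
    return "existential risk"

print(outcome("singleton", friendly=False))  # -> existential risk
```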

Figure 3: A Visualization of Nick Bostrom’s Superintelligence

Finally, Figures 1 and 2 are put together for the full diagram in Figure 3. As Figure 3 is an overview of the book’s contents, it includes chapter numbers for parts of the diagram. This allows Figure 3 to act as a quick reference and guide readers to the right part of Superintelligence for more information.


Acknowledgements

Thanks to Steven Greidinger for co-creating the first versions of the diagrams, to Robby Bensinger for his insights on the final revisions, to Alex Vermeer for his suggestions on early drafts, and to Luke Muehlhauser for the time to work on this project during an internship at the Machine Intelligence Research Institute.