BOOK DRAFT: ‘Ethics and Superintelligence’ (part 2)

Below is part 2 of the first draft of my book Ethics and Superintelligence. Your comments and constructive criticisms are much appreciated.

This is not a book for a mainstream audience. Its style is that of contemporary Anglophone philosophy. Compare to, for example, Chalmers’ survey article on the singularity.

Bibliographic references and links to earlier parts are provided here.

Part 2 is below...

***

Late in the Industrial Revolution, Samuel Butler (1863) worried about what might happen when machines become more capable than the humans who designed them:

…we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

…the time will come when the machines will hold the real supremacy over the world and its inhabitants…

Nearly a century later, with the advent of the digital computer, Alan Turing (1950) predicted that machines would one day be capable of genuine thought:

I believe that at the end of the century… one will be able to speak of machines thinking without expecting to be contradicted.

Turing (1951/2004) concluded:

…it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers… At some stage therefore we should have to expect the machines to take control…

All-powerful machines are a staple of science fiction, but one of the first serious arguments that such a scenario is likely came from the statistician I.J. Good (1965):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Vernor Vinge (1993) called this future event the “technological singularity.” Though there are several uses of the term “singularity” in futurist circles (Yudkowsky 2007), I will always use the term to refer to Good’s predicted intelligence explosion.

David Chalmers (2010) introduced another terminological convention that I will borrow:

Let us say that AI is artificial intelligence of human level or greater (that is, at least as intelligent as an average human). Let us say that AI+ is artificial intelligence of greater than human level (that is, more intelligent than the most intelligent human). Let us say that AI++ (or superintelligence) is AI of far greater than human level (say, at least as far beyond the most intelligent human as the most intelligent human is beyond a mouse).

With this terminology in place, Chalmers formalized Good’s argument as follows:

1. There will be AI (before long, absent defeaters).

2. If there is AI, there will be AI+ (soon after, absent defeaters).

3. If there is AI+, there will be AI++ (soon after, absent defeaters).

4. Therefore, there will be AI++ (before too long, absent defeaters).

I will defend Chalmers’ argument in greater detail than he has, taking “before long” to mean “within 150 years,” “soon after” to mean “within two decades,” and “before too long” to mean “within two centuries.” These stipulations are in the spirit of Chalmers’ own, but more precise.
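As a quick check on these stipulations (the specific numbers are mine, not Chalmers’), the timelines in the premises compose to fit comfortably within the conclusion’s two-century bound:

\[
150 \text{ years} + 20 \text{ years} + 20 \text{ years} = 190 \text{ years} \leq 200 \text{ years}.
\]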

Following Chalmers, by “defeaters” I mean “anything that prevents intelligent systems (human or artificial) from manifesting their capacities to create intelligent systems.” Defeaters include “disasters, disinclination, and active prevention.”

Disasters include catastrophic events that would severely impede scientific progress, such as supervolcano eruption, asteroid impact, cosmic rays, climate change, pandemic, nuclear war, biological warfare, an explosion of nanotechnology, and so on. The risks of these and other disasters are assessed in Bostrom & Cirkovic (2008).

Disinclination refers to a lack of interest in developing AI of human-level general intelligence. Given the enormous curiosity of the human species, and the power that human-level AI could bring its creators, I think long-term disinclination is unlikely.

Active prevention of the development of human-level artificial intelligence has already been advocated by Thomas Metzinger (2004), though not because of the risk to humans. Rather, Metzinger is concerned about the risk to artificial agents. Early AIs will inevitably be poorly designed, and this could cause them enormous subjective suffering that we cannot predict. One might imagine an infant born near Chernobyl whose body is so malformed by exposure to nuclear radiation during development that its short existence is a living hell. In working toward human-level artificial intelligence, might we be creating millions of internally malformed beings that suffer horrible subjective experiences but are unable to tell us so?

It is difficult to predict the likelihood of active prevention of AI development, but humanity’s failure to halt the development of ever more powerful nuclear weapons (Norris & Kristensen 2009), even after tasting their destructive power, does not inspire optimism.

We will return to these potential defeaters later. For now, let us consider the premises of Chalmers’ argument.

***