FAI FAQ draft: What is the Singularity?

I invite your feedback on this snippet from the forthcoming Friendly AI FAQ. This one is an answer to the question “What is the Singularity?”

_____

There are many types of mathematical and physical singularities, but in this FAQ we use the term ‘Singularity’ to refer to the technological singularity.

People also mean many different things when they refer to a ‘technological singularity’ (Sandberg 2010). Below, we’ll explain just three of them (Yudkowsky 2007):

  1. Intelligence explosion

  2. Event horizon

  3. Accelerating change

Intelligence explosion

Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica (MacKenzie 1995). By the late 1990s, ‘expert systems’ had surpassed human skill at a wide range of tasks (Nilsson 2009). In 1997, IBM’s Deep Blue computer beat the world chess champion (Campbell et al. 2002), and in 2011 IBM’s Watson computer beat the best human players at a much more complicated game: Jeopardy! (Markoff 2011). Recently, a robot scientist named Adam was programmed with our scientific knowledge about yeast; it then posed its own hypotheses, tested them, and assessed the results (King et al. 2009; King 2011).

Computers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an ‘intelligence explosion’ resulting in a machine superintelligence (Good 1965).
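
To make the feedback loop concrete, consider a toy model of our own devising (it appears in neither Good 1965 nor the other sources above; the symbols I, k, and I_0 are purely illustrative). Suppose a machine’s intelligence I(t) grows at a rate proportional to the square of its current level, since a smarter machine is a better improver of itself:

\[
% toy model, ours rather than Good's; I, k, I_0 are illustrative
\frac{dI}{dt} = k I^2, \qquad I(0) = I_0
\quad\Longrightarrow\quad
I(t) = \frac{I_0}{1 - k I_0 t},
\]

which grows without bound as t approaches 1/(k I_0): a literal ‘explosion’ in finite time. Under the weaker assumption dI/dt = kI, growth is merely exponential; either way, the qualitative point is that self-improvement compounds.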

Event horizon

Vernor Vinge (1993) wrote that the arrival of machine superintelligence represents an ‘event horizon’ beyond which humans cannot model the future, because events beyond the Singularity will be stranger than science fiction: too weird for human minds to predict. So far, all social and technological progress has resulted from human brains, but humans cannot predict what radically different and more powerful future intelligences will create. Vinge drew an analogy to the event horizon of a black hole: just as observers outside a black hole cannot see past its event horizon to the gravitational singularity within, where the predictive power of physics breaks down, humans cannot model what lies beyond the technological Singularity.

Accelerating change

A third concept of the technological singularity refers to accelerating change in technological development.

Ray Kurzweil (2005) has done the most to promote this idea. He suggests that although we expect technological change to be linear, progress in information technology is actually exponential, and so the future will differ from the present far more than most of us expect. Technological progress enables even faster technological progress. Kurzweil argues that this progress may become so fast that humans cannot keep up unless they amplify their own intelligence by integrating themselves with machines.
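
A toy calculation (ours, not Kurzweil’s, though it echoes his doubling-time examples) shows why exponential trends defeat linear intuition. If a capability doubles every period, then after n periods it has grown by a factor of 2^n, while a straight-line projection of the first period’s growth predicts only a factor of n + 1:

\[
% illustrative arithmetic, not taken from Kurzweil (2005)
\text{exponential: } 2^{n}
\quad\text{vs.}\quad
\text{linear: } n + 1,
\qquad n = 30:\ 2^{30} \approx 1.07 \times 10^9 \gg 31.
\]

Thirty doublings separate a billionfold improvement from a thirty-one-fold one; intuitions calibrated on the linear projection are off by seven orders of magnitude.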