Review of Kurzweil, ‘The Singularity is Near’

Ray Kurzweil’s writings are the best-known expression of Singularity memes, so I figured it’s about time I read his 2005 best-seller The Singularity is Near.

Though earlier users of the term “technological Singularity” used it to refer to the arrival of machine superintelligence (an event beyond which our ability to predict the future breaks down), Kurzweil’s Singularity is more vaguely defined:

What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.

Kurzweil says that people don’t expect the Singularity because they don’t realize that technological progress is largely exponential, not linear:

People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases, over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician’s perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically extrapolate the current pace of change over the next ten years or one hundred years to determine their expectations...

But a serious assessment of the history of technology reveals that technological change is exponential… You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological… the acceleration of progress and growth applies to each of them.
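To make the mathematician’s point concrete, here is a minimal sketch (my own illustration with made-up numbers, not Kurzweil’s) of how a straight-line extrapolation fitted to a short recent window of an exponential trend falls further and further behind the real curve:

```python
# My own illustration (not Kurzweil's): linearly extrapolating a short
# window of an exponential trend badly underestimates its long-run value.
# Hypothetical numbers: a quantity that doubles every 2 years.

DOUBLING_TIME = 2.0  # years per doubling (assumed)

def exponential(t):
    """Exponential trend, normalized so the value at t = 0 is 1."""
    return 2 ** (t / DOUBLING_TIME)

# A forecaster who only saw the last 2 years fits a straight line to them.
slope = (exponential(0) - exponential(-2)) / 2.0

def linear_forecast(t):
    return exponential(0) + slope * t

for t in (10, 20, 50):
    ratio = exponential(t) / linear_forecast(t)
    print(f"year {t}: true value is {ratio:,.0f}x the linear forecast")
```

Even with a modest two-year doubling time, the straight-line forecast is off by roughly a factor of nine after a decade and by millions of times after fifty years.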

Kurzweil has many examples:

Consider Gary Kasparov, who scorned the pathetic state of computer chess in 1992. Yet the relentless doubling of computer power every year enabled a computer to defeat him only five years later...

[Or] consider the biochemists who, in 1990, were skeptical of the goal of transcribing the entire human genome in a mere fifteen years. These scientists had just spent an entire year transcribing a mere one ten-thousandth of the genome. So… it seemed natural to them that it would take a century, if not longer, before the genome could be sequenced. [The complete genome was sequenced in 2003.]
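The arithmetic here is worth spelling out. At a constant rate of one ten-thousandth of the genome per year, the job takes 10,000 years; if capacity instead doubles annually, the whole genome is finished after about log2(10,000) ≈ 13.3 doublings, i.e. within the fifteen-year target. A quick check (my own simplification of the anecdote, not Kurzweil’s numbers):

```python
# Simplified arithmetic behind the genome anecdote: start with 1/10,000 of
# the genome done after year one and assume cumulative progress doubles
# annually (a rough simplification, not Kurzweil's actual model).
import math

fraction_done = 1 / 10_000
years = 1
while fraction_done < 1.0:
    fraction_done *= 2
    years += 1

print(years)              # 15: done within the fifteen-year goal
print(math.log2(10_000))  # ~13.3 doublings needed in total
```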

He emphasizes that people often fail to account for how progress in one field will feed on accelerating progress in another:

Can the pace of technological progress continue to speed up indefinitely? Isn’t there a point at which humans are unable to think fast enough to keep up? For unenhanced humans, clearly so. But what would 1,000 scientists, each 1,000 times more intelligent than human scientists today, and each operating 1,000 times faster than contemporary humans (because the information processing in their primarily nonbiological brains is faster) accomplish? One chronological year would be like a millennium for them… an hour would result in a century of progress (in today’s terms).

Kurzweil’s second chapter aims to convince us that Moore’s law of exponential growth in computing power is not an anomaly: the “law of accelerating returns” holds for a wide variety of technologies, evolutionary developments, and paradigm shifts. The chapter is full of logarithmic plots for bits of DRAM per dollar, microprocessor clock speed, processor performance in MIPS, growth in GenBank, hard drive bits per dollar, internet hosts, nanotech science citations, and more.
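The standard check behind all of these plots is a straight-line fit on a logarithmic scale: if the log of a metric grows linearly in time, the metric is growing exponentially, and the slope gives the doubling time. A minimal sketch with made-up numbers (none of the book’s actual datasets):

```python
# Fit a line to log2(metric) vs. year; the slope is doublings per year.
# The data points are hypothetical, not taken from the book.
import numpy as np

years = np.array([2000, 2002, 2004, 2006, 2008, 2010])
metric = np.array([1.0, 2.1, 3.9, 8.2, 15.8, 33.0])  # e.g. bits per dollar (made up)

slope, intercept = np.polyfit(years, np.log2(metric), 1)
print(f"doubling time ≈ {1 / slope:.1f} years")  # ≈ 2.0 for these numbers
```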

The chapter is a wake-up call to those not used to thinking about exponential change, but one gets the sense that Kurzweil has cherry-picked his examples. Plenty of technologies have violated his law of accelerating returns, and Kurzweil doesn’t mention them.

This cherry-picking is one of two persistent problems with The Singularity is Near. The second is Kurzweil’s reliance on detailed storytelling: he would make fewer false predictions if he stated the kinds of changes we can expect and offered detailed scenarios only as illustrations, rather than presenting elaborate stories about the future as his actual predictions.

My third major issue with the book is not a “problem” so much as a decision about its scope. Human factors (sociology, psychology, politics) are largely ignored, yet they are important for technological forecasting and would have been illuminating to include if done well.

It’s a big book with many specific claims, so there are hundreds of detailed criticisms I could make (e.g. about his handling of AI risks), but I prefer to keep this short. Kurzweil’s vision of the future is closer to what I expect is correct than most people’s pictures of the future, and he should be applauded for finding a way to bring transhumanist ideas into mainstream culture.