Richard Carrier on the Singularity

Recently I stumbled upon Richard Carrier’s essay “Are We Doomed” (June 5, 2009), in which Carrier, asked to comment on the Singularity, says the following:

I agree the Singularity stuff is often muddled nonsense. I just don’t know many advocates of it. Those who do advocate it are often unrealistic about the physical limits of technology, and particularly the nature of IQ. They base their “predictions” on two implausible assumptions: that advancement of IQ is potentially unlimited (I am fairly certain it will be bounded by complexity theory: at a certain point it just won’t be possible to think any faster or sounder or more creatively) and that high IQ is predictive of accelerating technological advancement. History proves otherwise: even people ten times smarter than people like me produce no more extensive or revolutionary technological or scientific output, much less invent more technologies or make more discoveries—in fact, by some accounts they often produce less in those regards than people of more modest (though still high) intelligence.

However, Singularity fans are right about two things: machines will outthink humans (and be designing better versions of themselves than we ever could) within fifty to a hundred years (advocates who predict this will happen sooner are being unrealistic), and the pace of technological advancement will accelerate. But this is already accounted for by existing models of technological advancement: Moore’s Law holds that computers double in processing power roughly every two years, Haitz’s Law holds that LEDs double in efficiency roughly every three years, and so on (similar laws probably hold for other technologies; these are just two that have been well documented so far). Thus, that technological progress accelerates is already predicted. The Singularity simply describes one way this pace will be maintained: by the recruitment of AI.
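The arithmetic behind these doubling laws is worth making concrete. Here is a minimal Python sketch of what a fixed doubling period implies over a few decades (the two-year and three-year periods are the canonical figures for Moore’s Law and Haitz’s Law; the numbers are illustrative, not measurements):

```python
# A minimal sketch of the doubling-law arithmetic cited above.
# The doubling periods are the canonical figures for each law,
# not fresh data of my own.

def projected_capability(initial: float, years: float, doubling_period: float) -> float:
    """Capability after `years` of growth under a fixed doubling period."""
    return initial * 2 ** (years / doubling_period)

# Relative growth over 30 years under each law:
print(projected_capability(1.0, 30, 2.0))  # Moore's Law (~2-year doubling): 32768x
print(projected_capability(1.0, 30, 3.0))  # Haitz's Law (~3-year doubling): 1024x
```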

It therefore doesn’t predict anything remarkable, and certainly doesn’t deserve such a pretentious name, because there will be a limit, an end point, and it won’t resemble a Singularity: there is a physical limit on how fast thoughts can be thunk and how fast manufacturing can occur, quantum mechanical limits that no technology can ever overcome. Once we reach that point, the pace of technological advancement will cease to be geometric and will become linear, or in some cases stop altogether. For instance, once we reach the quantum mechanical limit of computational speed and component size, no further advances will be possible in terms of Moore’s Law (even Kurzweil’s theory that it will continue in the form of expansion in size ignores the fact that we can already do this now, yet we don’t see moon-sized computers anywhere, a fact that reveals an importantly overlooked reality: what things cost).
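Carrier’s saturation argument is easy to model: capability doubles geometrically until it hits a hard physical ceiling, and then the curve goes flat. A minimal sketch, using Bremermann’s limit (a standard quantum-mechanical bound of roughly 1.36e50 bits per second per kilogram of matter) as the cap; the starting rate and doubling period are hypothetical round numbers:

```python
# A sketch of the saturation argument: geometric doubling clipped at a
# hard physical ceiling. Bremermann's limit is a real, commonly cited
# bound; the starting rate and doubling period are hypothetical.

BREMERMANN_LIMIT = 1.36e50  # max computation rate, bits per second per kilogram

def capped_growth(start: float, doubling_period: int, years: int, cap: float):
    """Yield (year, capability) for exponential growth clipped at a physical cap."""
    for year in range(0, years + 1, doubling_period):
        yield year, min(start * 2 ** (year / doubling_period), cap)

# A 1 kg computer starting at 1e18 bits/s (roughly a modern supercomputer,
# order of magnitude only), doubling every 2 years: the curve goes flat
# once the quantum-mechanical cap is reached.
for year, rate in capped_growth(1e18, 2, 250, BREMERMANN_LIMIT):
    if year % 50 == 0:
        print(f"year {year:3d}: {rate:.2e} bits/s")
```

Whatever the exact numbers, the shape is the point: under these toy assumptions, a couple of centuries of doubling is enough to close the entire gap between today’s machines and the physical ceiling, after which Moore’s Law simply ends.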

Ironically, the same has been discovered about actual singularities: they, too, don’t really exist, and for the same quantum mechanical reasons (see my discussion here).

What do you think?