Intelligence Explosion analysis draft: How long before digital intelligence?

Again, I invite your feedback on this snippet from an intelligence explosion analysis Anna Salamon and I have been working on. This section is less complete than the others; missing text is indicated with brackets: [].

_____

We do not know what it takes to build a digital intelligence. Because of this, we do not know what groundwork will be needed to understand intelligence, nor how long it may take to get there.

Worse, it’s easy to think we do know. Studies show that, except for weather forecasters (Murphy and Winkler 1984), nearly all of us give inaccurate probability estimates when we try, and in particular we are overconfident in our predictions (Lichtenstein, Fischhoff, and Phillips 1982; Griffin and Tversky 1992; Yates et al. 2002). Experts, too, often do little better than chance (Tetlock 2005), and are outperformed by crude computer algorithms (Grove and Meehl 1996; Grove et al. 2000; Tetlock 2005). So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.

But uncertainty is not a “get out of prediction free” card. You either will or will not save for retirement or support AI risk mitigation. The outcomes of these choices will depend, among other things, on whether digital intelligence arrives in the near future. Should you plan as though there are 50/50 odds of reaching digital intelligence in the next 30 years? Should you be 99% confident that digital intelligence won’t arrive in the next 30 years? Or is the appropriate confidence somewhere in between?

Other than trusting one’s gut or deferring to an expert, how might one estimate the time until digital intelligence? We consider several strategies below.

Time since Dartmouth. We have now seen 60 years of work toward digital intelligence since the seminal Dartmouth conference on AI, but digital intelligence has not yet arrived. This seems, intuitively, like strong evidence that digital intelligence won’t arrive in the next minute, good evidence it won’t arrive in the next year, and significant but far from airtight evidence that it won’t arrive in the next few decades. Such intuitions can be formalized into models that, while simplistic, can form a useful starting point for estimating the time to digital intelligence.1

Simple hardware extrapolation. Vinge (1993) wrote: “Based on [hardware trends], I believe that the creation of greater-than-human intelligence will occur [between 2005 and 2030].” Vinge seems to base this prediction on estimates of the “raw hardware power that is present in organic brains.” In a 2003 reprint of his article, Vinge himself notes the insufficiency of this reasoning: even if we have hardware sufficient for AI, the software problem may remain unsolved.
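
For illustration only (this is not Vinge’s actual calculation), the bare arithmetic of such an extrapolation looks like the sketch below; the brain-power estimate, the starting compute figure, and the doubling time are all placeholder assumptions, not claims:

```python
import math

# Placeholder assumptions, for illustration only (not Vinge's figures):
brain_flops = 1e16           # assumed estimate of the brain's raw processing power, in FLOPS
project_flops_2000 = 1e12    # assumed compute available to a large project in the year 2000
doubling_time_years = 1.5    # assumed hardware doubling time

# Number of doublings needed before the hardware curve crosses the brain estimate.
doublings_needed = math.log2(brain_flops / project_flops_2000)
crossover_year = 2000 + doublings_needed * doubling_time_years

print(f"Hardware crossover at roughly {crossover_year:.0f}")
```

Whatever crossover year such a chart yields, it says nothing about whether the software will be ready, which is exactly the gap Vinge points out.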

Extrapolating the requirements for whole brain emulation. One way to solve the software problem is to scan and emulate the human brain. Thus Ray Kurzweil (2005) extrapolates progress in hardware, in brain scanning, and in our understanding of the brain to predict that (low-resolution) whole brain emulation can be achieved by 2029. Many neuroscientists think this estimate is too optimistic, but the basic approach has promise.

Tracking progress in machine intelligence. Many folks intuitively estimate the time until digital intelligence by asking what proportion of human abilities today’s software can match, and how quickly machines are catching up. However, it is not clear how to divide up the space of “human abilities,” nor how much each one matters. We also don’t know whether machine progress will be linear or will include sudden jumps. Extrapolating an infant’s early progress toward learning calculus might lead one to conclude that the child will not learn it until the year 3000, yet the child suddenly learns it in a spurt at age 17. Still, machine progress in chess performance has been regular,2 and it may be worth checking whether a measure can be found for which both: (a) progress is smooth enough to extrapolate; and (b) when performance rises to a certain level, we can expect digital intelligence.3
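
As a minimal sketch of what (a) would involve, one can fit a line to a smooth performance measure and ask when it reaches some target level; the ratings and the target below are invented purely for illustration:

```python
import numpy as np

# Hypothetical (year, performance) data for some smooth machine-performance measure;
# the ratings and the target are invented for illustration.
years = np.array([1985, 1990, 1995, 2000, 2005], dtype=float)
ratings = np.array([2200, 2400, 2600, 2800, 3000], dtype=float)

slope, intercept = np.polyfit(years, ratings, 1)   # fit a straight line to the data
target = 3600                                      # purely hypothetical threshold for criterion (b)
year_at_target = (target - intercept) / slope

print(f"Linear extrapolation reaches {target} around {year_at_target:.0f}")
```

Of course, such an extrapolation is only as good as the smoothness assumption and the choice of threshold, which is the hard part.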

Estimating progress in scientific research output. Imagine a man digging a ten-kilometer ditch. If he digs 100 meters in one day, you might predict the ditch will be finished in 100 days. But what if 20 more diggers join him, and they are all given steroids? Now the ditch might not take so long. Analogously, when predicting progress toward digital intelligence it may be useful to consider not how much progress is made per year, but instead how much progress is made per unit of research effort. Thus, if we expect jumps in the amount of effective research effort (for reasons given in section 2.2), we should expect analogous jumps in progress toward digital intelligence.
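
A toy version of the ditch-digger point, with every number invented for illustration: the forecast comes from progress per researcher-year multiplied by a (possibly jumping) amount of effective research effort, not from extrapolating calendar-year progress alone.

```python
# Toy illustration of forecasting by research effort rather than by calendar year;
# every number here is invented.
progress_needed = 10_000            # "meters of ditch" remaining before digital intelligence
progress_per_researcher_year = 10   # "meters" contributed per effective researcher per year

def years_remaining(effective_researchers: float) -> float:
    """Calendar years to finish at a constant level of effective research effort."""
    return progress_needed / (progress_per_researcher_year * effective_researchers)

print(years_remaining(10))    # 100.0 years if effort stays at its current (invented) level
print(years_remaining(200))   # 5.0 years if effective effort jumps 20-fold
```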

Given the long history of confident false predictions within AI, and the human tendency toward overconfidence in general, it would seem misguided to be 90% confident that AI will succeed in the coming decade.4 But 90% confidence that digital intelligence will not arrive before the end of the century also seems wrong, given that (a) many seemingly difficult AI benchmarks have been reached, (b) many factors, such as more hardware and automated science, may well accelerate progress toward digital intelligence, and (c) whole brain emulation may well be a relatively straightforward engineering problem that will succeed by 2070 if not 2030. There is a significant probability that digital intelligence will arrive within a century, and additional research can improve our estimates (as we discuss in section 5).

________
1 We can make a simple formal model of this by assuming (with much simplification) that every year a coin is tossed to determine whether we will get AI that year, and that we are initially unsure of the weighting on that coin. The 60 years of no AI that we’ve seen so far are then extremely unlikely under models in which the coin comes up “AI” in 90% of years (the probability of 60 such failures would be 10^-60), and quite unlikely even if it comes up “AI” in 10% of years (probability about 0.18%, or roughly one time in 500), whereas 60 years of no AI is unsurprising if the coin comes up “AI” in, say, 1% of years, or for that matter in 0.0001% of years. Thus, depending on one’s prior over coin weightings, in this toy model one should update strongly against weightings under which AI would be likely in the next minute, or even the next year, while leaving the relative probabilities of “AI in 200 years” and “AI in 2 million years” more or less untouched.
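
For concreteness, here is a minimal sketch of this toy model in Python; the particular grid of coin weightings and the uniform prior over them are illustrative assumptions, not part of the argument:

```python
import numpy as np

# Candidate coin weightings: the assumed annual probability that "AI arrives this year".
weightings = np.array([0.9, 0.1, 0.01, 0.000001])

# Likelihood of observing 60 consecutive years without AI under each weighting.
likelihood = (1.0 - weightings) ** 60

# Toy posterior, assuming (purely for illustration) a uniform prior over these four weightings.
prior = np.full(len(weightings), 1.0 / len(weightings))
posterior = prior * likelihood
posterior /= posterior.sum()

for w, like, post in zip(weightings, likelihood, posterior):
    print(f"annual p(AI) = {w:9.6f}:  P(60 years, no AI) = {like:.3g},  posterior = {post:.3g}")
```

The output shows the pattern described above: the 60 years of data all but eliminate the high weightings while leaving the relative probabilities of the low weightings nearly unchanged.
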
2 See http://lukeprog.com/special/chess.pdf.
3 It is probably also worth keeping crude track of “progress in AI” as a whole, even though there is no guarantee progress would be linear. [It would be nice to add a crude attempt to nevertheless quantify which areas of human intelligence have been substantially matched by machines. Ideal would be to take some canonical-sounding article from some decades ago that listed domains we haven’t matched with computers, and then to note something like: of these domains, (3) has been solved, and (1) and (4) have seen substantial progress. GEB has a suitable listing, but might be better to use a more canonical article if we can find one.]
4 Unless, that is, you have a kind of evidence that is strongly different from the kinds of evidence possessed by the many researchers since Dartmouth who incorrectly predicted that their particular AI paradigm, or human-level AI in general, was about to succeed.