Müller and Bostrom’s 2014 ‘Future progress in artificial intelligence: A survey of expert opinion’ surveyed the 100 top-cited living authors in Microsoft Academic Search’s “artificial intelligence” category, asking the question:

Define a “high-level machine intelligence” (HLMI) as one that can carry out most human professions at least as well as a typical human. [...] For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?
29 of the authors responded. Their median answer was a 10% probability of HLMI by 2024, a 50% probability of HLMI by 2050, and a 90% probability by 2070.
(These medians exclude respondents who answered “never”; I can’t find info on whether any of the top-cited authors gave that answer, but in pooled results that also include 141 people from surveys of a “Philosophy and Theory of AI” conference, an “Artificial General Intelligence” conference, an “Impacts and Risks of Artificial General Intelligence” conference, and members of the Greek Association for Artificial Intelligence, 1.2% of the overall pool (2 of 170) said we’d “never” have a 10% chance of HLMI, 4.1% (7 of 170) said “never” for a 50% probability, and 16.5% (28 of 170) said “never” for a 90% probability.)
In Superintelligence (pp. 19-20), Bostrom cites the pooled results:
The combined sample gave the following (median) estimate: 10% probability of HLMI by 2022, 50% probability by 2040, and 90% probability by 2075. [...]
These numbers should be taken with some grains of salt: sample sizes are quite small and not necessarily representative of the general expert population. They are, however, in concordance with results from other surveys.
The survey results are also in line with some recently published interviews with about two dozen researchers in AI-related fields. For example, Nils Nilsson has spent a long and productive career working on problems in search, planning, knowledge representation, and robotics; he has authored textbooks in artificial intelligence; and he has recently completed the most comprehensive history of the field written to date. When asked about arrival dates for [AI able to perform around 80% of jobs as well or better than humans perform], he offered the following opinion: 10% chance: 2030[;] 50% chance: 2050[;] 90% chance: 2100[.]
Judging from the published interview transcripts, Professor Nilsson’s probability distribution appears to be quite representative of many experts in the area—though again it must be emphasized that there is a wide spread of opinion: there are practitioners who are substantially more boosterish, confidently expecting HLMI in the 2020-40 range, and others who are confident either that it will never happen or that it is indefinitely far off. In addition, some interviewees feel that the notion of a “human level” of artificial intelligence is ill-defined or misleading, or are for other reasons reluctant to go on record with a quantitative prediction.
My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.
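To make Bostrom’s point about tail mass concrete, here is a minimal sketch of the arithmetic. It is my own illustration, not anything from the survey or the book: it assumes the elicited years can be modeled as a lognormal distribution over years-until-HLMI, measured from an assumed survey date of about 2012, and fits that distribution to the pooled quantiles by least squares in log space.

```python
# Illustrative only: fit a lognormal to the pooled quantiles (10% by 2022,
# 50% by 2040, 90% by 2075) and read off the implied tail mass past 2100.
# Assumptions (mine, not the survey's): ~2012 survey date, lognormal form,
# least-squares fit in log space.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

SURVEY_YEAR = 2012  # assumption: the survey ran around 2012-2013
quantiles = {0.10: 2022, 0.50: 2040, 0.90: 2075}  # pooled medians quoted above
probs = np.array(list(quantiles.keys()))
years_out = np.array([y - SURVEY_YEAR for y in quantiles.values()], dtype=float)

def loss(params):
    """Squared error, in log space, between implied and elicited quantiles."""
    mu, sigma = params
    implied = mu + sigma * norm.ppf(probs)  # lognormal quantiles, log space
    return np.sum((implied - np.log(years_out)) ** 2)

fit = minimize(loss, x0=[np.log(28.0), 0.6], method="Nelder-Mead")
mu, sigma = fit.x

# Implied probability that HLMI has *not* arrived by 2100 under this fit:
z = (np.log(2100 - SURVEY_YEAR) - mu) / sigma
print(f"P(no HLMI by 2100) ~ {1 - norm.cdf(z):.1%}")  # comes out around 4-5%
```

By construction the elicited quantiles put exactly 10% of the probability mass past 2075, and under this crude fit only about 4-5% past 2100; Bostrom’s view is that both of those tail figures are too small.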
Luke has pretty much the same view as Bostrom. I don’t know as much about Eliezer’s views, but the last time I talked with him about this (in 2014), he didn’t expect AGI to be here in 20 years. I think a pretty widely accepted view at MIRI and FHI is Luke’s: “We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.”
Of course, there is a huge problem with expert surveys—at the meta-level they have a very poor predictive track record. There is the famous example that Stuart Russell likes to cite, where Rutherford said that “anyone who looked for a source of power in the transformation of the atoms was talking moonshine”—a day before Leo Szilard conceived of the neutron-induced nuclear chain reaction. There is also the similar example of the Wright brothers: some unknown guys without credentials claim to have cracked aviation when recognized experts like Langley have just failed in a major way and respected scientists such as Lord Kelvin claim the whole thing is impossible. The Wright brothers then report their first successful manned flight, and no newspaper will even publish it.
Thanks!