Q&A with experts on risks from AI #2

[Click here to see a list of all interviews]

I am emailing experts in order to raise academic awareness of risks from AI and to estimate how such risks are currently perceived.

(Note: I am also asking Reinforcement Learning / Universal AI researchers, teams like the one that built IBM Watson, organisations like DARPA, and various companies. Some haven't replied yet, and I still have to write to others.)

Nils John Nilsson is one of the founding researchers of the discipline of artificial intelligence. He is the Kumagai Professor of Engineering (Emeritus) in Computer Science at Stanford University. He is particularly known for his contributions to search, planning, knowledge representation, and robotics. [Wikipedia] [Homepage] [Google Scholar]

Peter J. Bentley is a British author and computer scientist based at University College London. [Wikipedia] [Homepage]

David Alan Plaisted is a computer science professor at the University of North Carolina at Chapel Hill. [Wikipedia] [Homepage]

Hector Levesque is a Canadian academic and researcher in artificial intelligence. His research focuses on knowledge representation and reasoning. [Wikipedia] [Homepage]

The Interview:

Nils Nilsson: I did look at the lesswrong.com Web page. Its goals are extremely important! One problem about blogs like these is that I think they are read mainly by those who agree with what they say—the already converted. We need to find ways to get the Fox News viewers to understand the importance of what these blogs are saying. How do we get to them?

Before proceeding to deal with your questions, you might be interested in the following:

1. An article called “Rationally-Shaped Artificial Intelligence” by Steve Omohundro about robot “morality.” It’s at:

http://selfawaresystems.com/2011/10/07/rationally-shaped-artificial-intelligence/

2. I have a draft book on “beliefs” that deals with (among other things) how to evaluate them. See:

http://ai.stanford.edu/~nilsson/Beliefs.html

(Comments welcome!)

3. Of course, you must know already about Daniel Kahneman’s excellent book, “Thinking, Fast and Slow.”

Here are some (maybe hasty!) answers/responses to your questions.

David Plaisted: Your questions are very interesting and I’ve had such questions for a long time, actually. I’ve been surprised that people are not more concerned about such things.

However, right now the problems with AI seem so difficult that I’m not worried about these issues.

Peter J. Bentley: I think intelligence is extremely hard to define, even harder to measure, and human-level intelligence is a largely meaningless phrase. All life on Earth has human-level intelligence in one sense, for we have all evolved for the same amount of time and we are equally able to survive in our appropriate niches and solve problems relevant to us in highly effective ways.

There is no danger from clever AI—only from stupid AI that is so bad that it kills us by accident. I wish we were on track to create something as clever as a frog. We have a long, long way to go. I agree with Pat Hayes on this subject.

I actually find the discussion a little silly. We are *much* more likely to all become a bunch of cyborgs completely reliant on integrated tech (including some clever computers) in the next 100 years. Computers won’t be external entities, they will be a part of us. Many of us already can’t live our modern lives without being plugged into our gadgets for most of our waking lives. Worry about that instead :)

I think your questions are very hard to answer in any rigorous sense, for they involve prediction of future events so far ahead that anything I say is likely to be quite inaccurate. I will try to answer some below, but these are really just my educated guesses.

Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Explanatory remark to Q1:

P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficial political and economic development) = 10%/50%/90%

Nils Nilsson: Because human intelligence is so multi-faceted, your question really should be divided into each of the many components of intelligence. For example, on language translation, AI probably already exceeds the performance of many translators. On integrating symbolic expressions in calculus, AI (or computer science generally) is already much better than humans. AI does better on many planning and scheduling tasks. On chess, same! On the Jeopardy! quiz show, same!

A while back I wrote an essay about a replacement for the Turing test. It was called the “Employment Test.” (See: http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General_Essays/AIMag26-04-HLAI.pdf) How many of the many, many jobs that humans do can be done by machines? I’ll rephrase your question to be: When will AI be able to perform around 80% of these jobs as well as or better than humans perform them?

10% chance: 2030
50% chance: 2050
90% chance: 2100

David Plaisted: It seems that the development of human-level intelligence is always later than people think it will be. I don’t have an idea how long this might take.

Peter J. Bentley: That depends on what you mean by human-level intelligence and how it is measured. Computers can already surpass us at basic arithmetic. Some machine learning methods can equal us in recognition of patterns in images. Most other forms of “AI” are tremendously bad at tasks we perform well. The human brain is the result of several billion years of evolution, from molecular scales to macro scales. Our evolutionary history spans unimaginable numbers of generations, challenges, environments, predators, etc. For an artificial brain to resemble ours, it must necessarily go through a very similar evolutionary history. Otherwise it may be a clever machine, but its intelligence will be in areas that do not necessarily resemble human intelligence.

Hector Levesque: No idea. There are a lot of factors beyond the wars etc. mentioned above. It’s tough to make these kinds of predictions.

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Explanatory remark to Q2:

P(human extinction | badly done AI) = ?

(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)

Nils Nilsson: 0.01% probability during the current century. Beyond that, who knows?

David Plaisted: I think people will be so concerned about the misuse of intelligent computers that they will take safeguards to prevent such problems. To me it seems more likely that disaster will come on the human race from nuclear or biological weapons, or possibly some natural disaster.

Peter J. Bentley: If this were ever to happen, it is most likely to be because the AI was too stupid and we relied on it too much. It is *extremely* unlikely for any AI to become “self-aware” and take over the world as they like to show in the movies. It’s more likely that your pot plant will take over the world.

Hector Levesque: Low. The probability of human extinction by other means (e.g. climate problems, microbiology, etc.) is sufficiently higher that if we were to survive all of them, surviving the result of AI work would be comparatively easy.

Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

Explanatory remark to Q3:

P(superhuman intelligence within hours | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?

Nils Nilsson: I’ll assume that you mean sometime during this century, and that my “employment test” is the measure of superhuman intelligence.

hours: 5%
days: 50%
<5 years: 90%

David Plaisted: This would require a lot in terms of robots being able to build hardware devices or modify their own hardware. I suppose they could also modify their software to do this, but right now it seems like a far out possibility.

Peter J. Bentley: It won’t happen. Has nothing to do with internet connections or speeds. The question is rather silly.

Hector Levesque: Good. Once an automated human-level intelligence is achieved, it ought to be able to learn what humans know more quickly.

Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Explanatory remark to Q4:

How much money is currently required to mitigate possible risks from AI (to be instrumental in maximizing your personal long-term goals, e.g. surviving this century): less/no more/little more/much more/vastly more?

Nils Nilsson: Work on this problem should be ongoing, I think, with the work on AGI. We should start, now, with “little more,” and gradually expand through the “much” and “vastly” as we get closer to AGI.

David Plaisted: Yes, some kind of ethical system should be built into robots, but then one has to understand their functioning well enough to be sure that they would not get around it somehow.

Peter J. Bentley: Humans are the ultimate in killers. We have taken over the planet like a plague and wiped out a large number of existing species. “Intelligent” computers would be very very stupid if they tried to get in our way. If they have any intelligence at all, they will be very friendly. We are the dangerous ones, not them.

Hector Levesque: It’s always important to watch for risks with any technology. AI technology is no different.

Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

Explanatory remark to Q5:

What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?

Nils Nilsson: I think the risk of terrorists getting nuclear weapons is a greater risk than AI will be during this century. They would certainly use them if they had them—they would be doing the work of Allah in destroying the Great Satan. Other than that, I think global warming and other environmental problems will have a greater negative impact than AI will have during this century. I believe technology can save us from the risks associated with new viruses. Bill Joy worries about nano-dust, but I don’t know enough about that field to assess its possible negative impacts. Then, of course, there’s the odd meteor. Probably technology will save us from that.

David Plaisted: There are risks with any technology, even computers as we have them now. It depends on the form of government and the nature of those in power whether technology is used for good or evil more than on the nature of the technology itself. Even military technology can be used for repression and persecution. Look at some countries today that use technology to keep their people in subjection.

Peter J. Bentley: No.

Hector Levesque: See Q2 above. I think AI risks are smaller than others.

Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?

Nils Nilsson: Not as high as it should be. Some, like Steven Omohundro, Wendell Wallach and Colin Allen (“Moral Machines: Teaching Robots Right from Wrong”), Patrick Lin (“Robot Ethics”), and Ronald Arkin (“Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/​Reactive Robot Architecture”) are among those thinking and writing about these problems. You probably know of several others.

David Plaisted: Probably not as high as it should be.

Peter J. Bentley: The whole idea is blown up out of all proportion. There is no real risk and will not be for a very long time. We are also well aware of the potential risks.

Hector Levesque: Low. Technology in the area is well behind what was predicted in the past, and so concern for risks is correspondingly low.

Q7: Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?

Nils Nilsson: Because human intelligence involves so many different abilities, I think AGI will require many different technologies with many different milestones. I don’t think there is a single one. I do think, though, that the work that Andrew Ng, Geoff Hinton, and (more popularly) Jeff Hawkins and colleagues are doing on modeling learning in the neo-cortex using deep Bayes networks is on the right track.

Thanks for giving me the opportunity to think about your questions, and I hope to stay in touch with your work!

David Plaisted: I think it depends on the interaction of many different capabilities.

Peter J. Bentley: Too many advances are needed to describe here...

Hector Levesque: Reading comprehension at the level of a 10-year-old.

Q8: Are you familiar with formal concepts of optimal AI design which relate to searches over complete spaces of computable hypotheses or computational strategies, such as Solomonoff induction, Levin search, Hutter’s algorithm M, AIXI, or Gödel machines?

David Plaisted: My research does not specifically relate to those kinds of questions.