Q&A with Jürgen Schmidhuber on risks from AI

[Click here to see a list of all interviews]

I am emailing experts in order to raise academic awareness of risks from AI and to gauge how those risks are perceived within academia.

Below you will find some thoughts on the topic by Jürgen Schmidhuber, a computer scientist and AI researcher who wants to build an optimal scientist and then retire.

The Interview:

Q: What probability do you assign to the possibility of us being wiped out by badly done AI?

Jürgen Schmidhuber: Low for the next few months.

Q: What probability do you assign to the possibility of a human-level AI, or even a sub-human-level AI, self-modifying its way up to massive superhuman intelligence within a matter of hours or days?

Jürgen Schmidhuber: High for the next few decades, mostly because some of our own work seems to be almost there.

Q: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Jürgen Schmidhuber: From a paper of mine:

All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in the form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are “good”? The survivors will define this in hindsight, since only survivors promote their values.

Q: What is the current level of awareness of possible risks from AI within the artificial intelligence community, relative to the ideal level?

Jürgen Schmidhuber: Some are interested in this, but most don’t think it’s relevant right now.

Q: How do risks from AI compare to other existential risks, e.g. advanced nanotechnology?

Jürgen Schmidhuber: I guess AI risks are less predictable.

(In his response to my questions he also added the following.)

Jürgen Schmidhuber: Recursive Self-Improvement: The provably optimal way of doing this was published in 2003. From a recent survey paper:

The fully self-referential Goedel machine [1,2] already is a universal AI that is at least theoretically optimal in a certain sense. It may interact with some initially unknown, partially observable environment to maximize future expected utility or reward by solving arbitrary user-defined computational tasks. Its initial algorithm is not hardwired; it can completely rewrite itself without essential limits apart from the limits of computability, provided a proof searcher embedded within the initial algorithm can first prove that the rewrite is useful, according to the formalized utility function taking into account the limited computational resources. Self-rewrites may modify/improve the proof searcher itself, and can be shown to be globally optimal, relative to Goedel’s well-known fundamental restrictions of provability. To make sure the Goedel machine is at least asymptotically optimal even before the first self-rewrite, we may initialize it by Hutter’s non-self-referential but asymptotically fastest algorithm for all well-defined problems HSEARCH [3], which uses a hardwired brute force proof searcher and (justifiably) ignores the costs of proof search. Assuming discrete input/output domains X/Y, a formal problem specification f : X → Y (say, a functional description of how integers are decomposed into their prime factors), and a particular x in X (say, an integer to be factorized), HSEARCH orders all proofs of an appropriate axiomatic system by size to find programs q that for all z in X provably compute f(z) within time bound t_q(z). Simultaneously it spends most of its time on executing the q with the best currently proven time bound t_q(x). Remarkably, HSEARCH is as fast as the fastest algorithm that provably computes f(z) for all z in X, save for a constant factor smaller than 1 + epsilon (arbitrary real-valued epsilon > 0) and an f-specific but x-independent additive constant. Given some problem, the Goedel machine may decide to replace its HSEARCH initialization by a faster method suffering less from large constant overhead, but even if it doesn’t, its performance won’t be less than asymptotically optimal.

All of this implies that there already exists the blueprint of a Universal AI which will solve almost all problems almost as quickly as if it already knew the best (unknown) algorithm for solving them, because almost all imaginable problems are big enough to make the additive constant negligible. The only motivation for not quitting computer science research right now is that many real-world problems are so small and simple that the ominous constant slowdown (potentially relevant at least before the first Goedel machine self-rewrite) is not negligible. Nevertheless, the ongoing efforts at scaling universal AIs down to the rather few small problems are very much informed by the new millennium’s theoretical insights mentioned above, and may soon yield practically feasible yet still general problem solvers for physical systems with highly restricted computational power, say, a few trillion instructions per second, roughly comparable to human brain power.
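To get an intuition for the scheduling trick in the HSEARCH passage above, here is a toy sketch in Python. It is not Hutter's algorithm and proves nothing: the proof search is replaced by a hardcoded candidate list with merely asserted time bounds, and the "problem" is plain integer factorization, so every name, bound and ratio in it is an invented illustration. What it does mimic is the interleaving: keep looking for candidate programs with better bounds while spending most of the step budget executing the best candidate found so far.

```python
# Toy sketch of the scheduling idea behind HSEARCH -- emphatically NOT
# Hutter's actual algorithm. The proof search is faked by a hardcoded list
# of candidate programs with asserted (not proven) time bounds, and the
# example problem is integer factorization. All names, bounds and the
# 9:1 step ratio below are invented purely for illustration.

from typing import Callable, Iterator, List, Optional, Tuple

# A "program" is written as a generator so the driver can run it one step
# at a time: it yields None while still working and the answer when done.
Program = Callable[[int], Iterator[Optional[List[int]]]]


def trial_division(n: int) -> Iterator[Optional[List[int]]]:
    """Naive factorization; stands in for a program with a proven time bound."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
        yield None            # one unit of work done, not finished yet
    if n > 1:
        factors.append(n)
    yield factors             # finished: yield the factor list


def fake_proof_search() -> Iterator[Tuple[int, Program]]:
    """Stand-in for enumerating proofs by size and extracting programs with
    provably correct behaviour and proven time bounds."""
    yield (10**6, trial_division)   # (asserted bound, program)


def hsearch_like(x: int, exec_steps_per_search_step: int = 9) -> List[int]:
    """Spend a little time on 'proof search' and most of it executing the
    candidate with the best bound found so far."""
    searcher = fake_proof_search()
    best_bound: Optional[int] = None
    best_run: Optional[Iterator[Optional[List[int]]]] = None
    while True:
        # One step of "proof search": maybe discover a better candidate.
        found = next(searcher, None)
        if found is not None:
            bound, prog = found
            if best_bound is None or bound < best_bound:
                best_bound, best_run = bound, prog(x)
        # Many steps of executing the currently best candidate.
        if best_run is not None:
            for _ in range(exec_steps_per_search_step):
                result = next(best_run)
                if result is not None:
                    return result


if __name__ == "__main__":
    print(hsearch_like(2 * 3 * 5 * 7 * 11 * 13))   # [2, 3, 5, 7, 11, 13]
```

Loosely speaking, giving execution only a fixed fraction of the steps corresponds to the constant-factor slowdown mentioned in the quote, while the cost of the search itself corresponds to the additive constant; in the real construction the searcher enumerates proofs of an axiomatic system, which is what buys the correctness guarantees and the proven time bounds.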

[1] J. Schmidhuber. Goedel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006.

[2] J. Schmidhuber. Ultimate cognition à la Goedel. Cognitive Computation, 1(2):177-193, 2009.

[3] M. Hutter. The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber’s SNF grant 20-61847).

[4] J. Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2):173-187, 2006.

[5] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.

A dozen earlier papers on (not yet theoretically optimal) recursive self-improvement since 1987 are here: http://www.idsia.ch/~juergen/metalearner.html

Anonymous

At this point I would also like to give a short roundup. Most of the experts I wrote to haven’t responded so far; a few did respond but asked me not to publish their answers. Some of them are well known even outside their field of expertise and are respected even here on LW.

I will paraphrase some of the responses I got below:

Anonymous expert 01: I think the so-called Singularity is unlikely to come about in the foreseeable future. I already know about the SIAI, and I think that the people involved with it are well-meaning, thoughtful and highly intelligent. But I personally think that they are naïve as far as the nature of human intelligence goes. None of them seems to have a realistic picture of the nature of thinking.

Anonymous expert 02: My opinion is that some people hold much stronger opinions on this issue than justified by our current state of knowledge.

Anonymous expert 03: I believe that the biggest risk from AI is that at some point we will become so dependent on it that we lose our cognitive abilities. Today people are losing their ability to navigate with maps, thanks to GPS. But such a loss will be nothing compared to what we might lose by letting AI solve more important problems for us.

Anonymous expert 04: I think these are nontrivial questions and that risks from AI have to be taken seriously. But I also believe that many people have made scary-sounding but mostly unfounded speculations. In principle an AI could take over the world, but currently AI presents no threat. At some point it will become a more pressing issue. In the meantime, we are much more likely to destroy ourselves by other means.