To defuse risks from AI, you would have to argue either that what we currently know AI to be capable of cannot be extrapolated to encompass the full spectrum of human potential, or that those skills cannot be combined into a coherent framework of agency.
Below are a few examples that hint at the possibility that AI (or artificial algorithmic creation) has already transcended human capabilities in several narrow fields of expertise.
Algorithmic intelligence can be creative and inventive:
We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation.
Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.
This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems—a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way—yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery. (Davidson 1997)
When the GA was applied to this problem, the evolved results for three, four and five-satellite constellations were unusual, highly asymmetric orbit configurations, with the satellites spaced by alternating large and small gaps rather than equal-sized gaps as conventional techniques would produce. However, this solution significantly reduced both average and maximum revisit times, in some cases by up to 90 minutes. In a news article about the results, Dr. William Crossley noted that “engineers with years of aerospace experience were surprised by the higher performance offered by the unconventional design”. (Williams, Crossley and Lang 2001)
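The two excerpts above both rely on genetic algorithms. Neither source publishes its code, so the following is only a generic toy sketch of the technique itself — selection, crossover, and mutation on a bitstring "design", maximizing the classic one-max objective rather than a circuit or an orbit configuration:

```python
import random

random.seed(0)

TARGET_LEN = 32  # toy "design" encoded as a bitstring

def fitness(bits):
    # Toy objective: count of 1-bits (the classic "one-max" problem).
    return sum(bits)

def mutate(bits, rate=0.02):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # Single-point crossover of two parent bitstrings.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically reaches or approaches 32
```

The point the excerpts make is that even this simple loop, given a richer encoding (FPGA cell configurations, orbital elements) and a real fitness function (circuit behavior, revisit time), can wander into regions of the design space that human engineers never consider.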
The HR (or Hardy-Ramanujan) program invents and analyses definitions in areas of pure mathematics, including finite algebras, graph theory and number theory. While working in number theory, HR recently invented a new integer sequence, the refactorable numbers, which are defined and developed here.
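The defining property HR found is simple to state and check: a refactorable number is a positive integer that is divisible by its own number of divisors. A minimal sketch (not HR's discovery process, just a verification of the definition):

```python
def divisor_count(n):
    # Count divisors of n by trial division up to sqrt(n).
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d != n else 1
        d += 1
    return count

def is_refactorable(n):
    # A refactorable (tau) number is divisible by its divisor count.
    return n % divisor_count(n) == 0

print([n for n in range(1, 100) if is_refactorable(n)])
# → [1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96]
```

For example, 12 has six divisors (1, 2, 3, 4, 6, 12) and 6 divides 12, so 12 is refactorable; 10 has four divisors and 4 does not divide 10.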
A computer program written by researchers at Argonne National Laboratory in Illinois has come up with a major mathematical proof that would have been called creative if a human had thought of it. In doing so, the computer has, for the first time, got a toehold into pure mathematics, a field described by its practitioners as more of an art form than a science. And the implications, some say, are profound, showing just how powerful computers can be at reasoning itself, at mimicking the flashes of logical insight or even genius that have characterized the best human minds.
Improvements of algorithms can in many cases lead to dramatic performance gains:
Everyone knows Moore’s Law – a prediction made in 1965 by Intel co-founder Gordon Moore that the density of transistors in integrated circuits would continue to double every 1 to 2 years. (…) Even more remarkable – and even less widely understood – is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed.
The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade. It’s difficult to quantify the improvement, though, because it is as much in the realm of quality as of execution time.
In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
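Grötschel's headline figure is easy to verify: 82 years expressed in minutes is roughly 43 million, and the two component factors he cites multiply out to the same total:

```python
# 82 years reduced to ~1 minute: the overall speedup factor in minutes.
minutes_per_year = 365.25 * 24 * 60
total_speedup = 82 * minutes_per_year
print(round(total_speedup / 1e6), "million")  # ≈ 43 million

# Hardware factor times algorithmic factor matches the total.
print(1_000 * 43_000)  # 43,000,000
```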
To defuse risks from AI, you would have to argue either that what we currently know AI to be capable of cannot be extrapolated to encompass the full spectrum of human potential, or that those skills cannot be combined into a coherent framework of agency.
I think you misunderstand: the question isn't what could defuse worries about UFAI by demonstrating the risks to be lower than previously believed (e.g. proving strong AI to be unfeasible); it's about what could reduce the actual existing risk.
No, I think that short of a demonstration that strong AI is unfeasible, there is no way to actually defuse the risk enough that it would matter much. Even a very sophisticated but "autistic" AI (one with a limited set of abilities) that never undergoes recursive self-improvement, but which nonetheless possesses some superhuman capabilities (which any AI has: superior memory, direct data access, etc.), could pose an existential risk.
Take, for example, what is happening in Syria right now. The only reason the regime does not squelch every protest is that nobody can supervise or control that many people. Give them an AGI that can watch a few million security cameras and control thousands of drones, and they will destroy most human values by implementing a worldwide dictatorship or theocracy.
You seem to be implying that if both the authorities and the insurgents have access to equally powerful AGI, then this works to the net benefit of the authorities.
I am skeptical of that premise, especially in the context of open revolt as we’re seeing in Syria. I don’t think lack of eyeballs on cameras is a significant mechanism there; plain old ordinary human secret police would do fine for that, since people are protesting openly. The key dynamic I see is that the regime isn’t confident that the police or army will obey orders if driven to use lethal force on a large scale.
I don’t see how AI would change that dynamic. If both sides have it, the protesters can optimize their actions to stay within the sphere of uncertainty, even as the government tries to act as aggressively as it can without risking the military joining the rebels.
Today, we already have much more sophisticated weapons, communication, information storage, and information retrieval technology than was ever available before. It doesn’t appear to have been a clear net benefit for either freedom or tyranny.
Do you envision AGI strengthening authorities in ways that 20th-century coercive technologies did not?
To recap, we already know that AI is capable of the following:
Beat humans at chess.
Autonomously generate functional genomics hypotheses.
Discover laws of physics on its own.
Create original, modern music.
Identify pedestrians and avoid collisions.
Answer questions posed in natural language.
Transcribe and translate spoken language.
Recognize images of handwritten, typewritten or printed text.
Produce human speech.
Traverse difficult terrain.
Sources for the excerpts above:
— The Automation of Science
— Computer Program Self-Discovers Laws of Physics
— Genetic Algorithms and Evolutionary Computation
— Triumph of the Cyborg Composer
— Refactorable Numbers — A Machine Invention
— Computer Math Proof Shows Reasoning Power
— “Progress in Algorithms Beats Moore’s Law”, page 71 of the Report to the President and Congress, Designing a Digital Future: Federally Funded R&D in Networking and IT