Minds that make optimal use of small amounts of sensory data

In *That Alien Message*, Eliezer made some pretty wild claims:

> My moral—that even Einstein did not come within a million light-years of making efficient use of sensory data.
>
> Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
>
> They never suspected a thing. They weren’t very smart, you see, even before taking into account their slower rate of time. Their primitive equivalents of rationalists went around saying things like, “There’s a bound to how much information you can extract from sensory data.” And they never quite realized what it meant, that we were smarter than them, and thought faster.

In the comments, Will Pearson asked for “some form of proof of concept”. It seems that researchers at Cornell—Schmidt and Lipson—have done exactly that. See their video on Guardian Science:

> ‘Eureka machine’ can discover laws of nature—The machine formulates laws by observing the world and detecting patterns in the vast quantities of data it has collected

Researchers at Cambridge and Aberystwyth have gone one step further and implemented an AI system/robot to perform scientific experiments:

> Researchers at Aberystwyth University in Wales and England’s University of Cambridge report in Science today that they designed Adam—they describe how the bot operates by relating how he carried out one of his tasks, in this case to find out more about the genetic makeup of baker’s yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems. Using artificial intelligence, Adam hypothesized that certain genes in baker’s yeast code for specific enzymes that catalyze biochemical reactions. The robot devised experiments to test these beliefs, ran the experiments, and interpreted the results.
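The interesting thing about Adam is not any single step but the closed cycle: hypothesize, design an experiment, run it, interpret the result, repeat. Here is a minimal sketch of that cycle in Python. To be clear, this is not Adam’s code: the gene names, the growth threshold, and the `run_experiment` stub are all placeholders of my own invention.

```python
import random

# Placeholder hypotheses of the form "gene G codes for enzyme E".
# Adam derives its hypotheses from a logical model of yeast
# metabolism; these names are invented for illustration.
candidates = [
    ("GENE_A", "enzyme catalyzing reaction 1"),
    ("GENE_B", "enzyme catalyzing reaction 2"),
]

def run_experiment(gene):
    """Stand-in for the robotic wet lab: grow a strain with `gene`
    knocked out and return a growth measurement (faked here as a
    noisy reading)."""
    return random.gauss(0.4, 0.2)

def interpret(growth, threshold=0.3):
    """If the knockout grows poorly, the cell presumably needed an
    enzyme that the gene coded for, so the hypothesis is supported."""
    return growth < threshold

for gene, enzyme in candidates:
    growth = run_experiment(gene)
    verdict = "supported" if interpret(growth) else "not supported"
    print(f"{gene} codes for {enzyme}: {verdict} (growth {growth:.2f})")
```

The control flow really is that simple; the sophistication lives in the hypothesis generator and the robotic lab.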

The crucial question is: what can we learn about the likely effectiveness of a “superintelligent” AI from the behavior of these AI programs? First of all, let us be clear: these AIs are *not* “superintelligences”, so we shouldn’t expect them to perform at that level. The problem we face is analogous to extrapolating how fast an Olympic sprinter can run from watching a baby crawl around on the floor. Furthermore, the Cornell machine was given a physical system specifically chosen to be easy to analyze, and a representation (equations) known to be suited to the problem.

We can certainly state that the program analyzed some data much faster than any human could have done. In a running time probably measured in minutes or hours, it took a huge stream of raw position and velocity data and found the underlying conserved quantities. And given likely algorithmic optimizations and another 10 years of Moore’s law, we can safely say that in 10 years’ time that particular program will run in seconds on a $500 machine, or in milliseconds on a supercomputer. (Moore’s law alone, at a doubling every 18–24 months, buys roughly a 30–100× speedup over a decade; the jump from hours down to seconds leans on the algorithmic gains for the rest.) These results actually surprise me: an AI can automatically and almost instantly analyze a physical system (albeit a rigged one).
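To see why this is statistics rather than magic, here is a toy version of the idea. This is my own sketch, not the Cornell code: their system runs a full symbolic-regression search over a huge space of expressions, whereas I hand-pick four candidates and merely test which one stays constant along a simulated pendulum trajectory.

```python
import numpy as np

# Toy stand-in for the motion-capture data: a frictionless pendulum,
# integrated with symplectic Euler so energy stays well-behaved.
g, L, dt = 9.81, 1.0, 0.001
theta, omega = 1.0, 0.0
history = []
for _ in range(20000):
    omega -= (g / L) * np.sin(theta) * dt
    theta += omega * dt
    history.append((theta, omega))
th, om = np.array(history).T

# Candidate invariants over the observed variables. The real system
# searches an enormous space of expressions; we hand-pick four.
candidates = {
    "omega^2": om**2,
    "theta^2 + omega^2": th**2 + om**2,
    "omega^2/2 - (g/L)*cos(theta)": om**2 / 2 - (g / L) * np.cos(th),
    "theta * omega": th * om,
}

# A conserved quantity barely varies along the trajectory, so the
# true invariant (the energy expression) shows the smallest spread.
for name, values in candidates.items():
    spread = np.std(values) / (abs(np.mean(values)) + 1e-9)
    print(f"{name:30s} relative spread = {spread:.2e}")
```

The part I have skipped, inventing the candidate expressions in the first place, is where essentially all of the real system’s compute goes.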

But, of course, one has to ask: how much more narrow-AI work would it take to actually look at video of some bouncing, falling, and whirling objects and deduce a general physical law such as Earth’s gravity or the laws governing air resistance, where the objects are not hand-picked to be easy to analyze? This is unclear. But I can see mechanisms whereby this would work, rather than merely having to submit to the overwhelming power of the word “superintelligence”. My suspicion is that with current state-of-the-art object-identification technology, video footage of a system of bouncing balls, pendulums, and springs would be amenable to this kind of analysis. There may even be a research project in that proposition.
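As a sketch of the easy half of that pipeline, suppose the object-identification step were already solved, so that the video has been reduced to per-frame positions. Recovering a law from the track is then ordinary curve fitting. The tracker output below is faked with synthetic noisy data, and the frame rate and noise level are assumptions of mine:

```python
import numpy as np

# Pretend an object tracker has already reduced video of a falling
# ball to per-frame vertical positions in meters. The 30 fps frame
# rate and the noise level are assumptions; the track is synthetic.
fps, g_true = 30.0, 9.81
t = np.arange(0, 1.0, 1.0 / fps)
rng = np.random.default_rng(0)
y = 5.0 - 0.5 * g_true * t**2 + rng.normal(0.0, 0.005, t.size)

# Fit y = a*t^2 + b*t + c by least squares; gravity falls out of
# the quadratic coefficient, since a = -g/2.
a, b, c = np.polyfit(t, y, 2)
print(f"recovered g = {-2 * a:.2f} m/s^2 (true value {g_true})")
```

The hard, open part is the front end: turning raw pixels of an unrigged scene into clean state variables in the first place.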

As for extrapolating the behavior of a superintelligence from the behavior of the Cornell AI or the Adam robot, we should note that no human can look at a complex physical system for a few seconds and just write down the physical law or equation that it obeys. A simple narrow AI has already outperformed humans at one specific task, though it still cannot do most of what a scientist does. We should therefore update our beliefs to assign more weight to the hypothesis that, on some particular narrow physical modelling task, a “superintelligence” would vastly outperform us. Personally I was surprised at what such a simple system can do, though with hindsight it is obvious: data from a physical system follows patterns, and statistics can identify those patterns. Science is not a magic ritual that only humans can perform; rather, it is a specific kind of algorithm, and we should expect no special injunction preventing silicon minds from performing it.