Building Something Smarter

Previously in series: Efficient Cross-Domain Optimization

Once you demystify “intelligence” far enough to think of it as searching possible chains of causality, across learned domains, in order to find actions leading to a future ranked high in a preference ordering...

...then it no longer sounds quite as strange to think of building something “smarter” than yourself.

There’s a popular conception of AI as a tape-recorder-of-thought, which only plays back knowledge given to it by the programmers—I deconstructed this in Artificial Addition, giving the example of the machine that stores the expert knowledge Plus-Of(Seven, Six) = Thirteen instead of having a CPU that does binary arithmetic.
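To make the tape-recorder contrast concrete, here is a minimal sketch of my own (the function names and the stored facts are illustrative, not taken from Artificial Addition): a lookup table that can only play back the sums its programmer recorded, next to a generator that implements the rule of addition itself and so answers questions no one stored in advance.

```python
# "Tape-recorder" arithmetic: plays back only the facts the programmer stored.
RECORDED_SUMS = {("Seven", "Six"): "Thirteen"}  # frozen expert knowledge

def plus_of_recorded(a, b):
    # Fails (KeyError) on any pair the programmer never thought to enter.
    return RECORDED_SUMS[(a, b)]

# A generator: implements the rule of addition, so it handles
# questions its programmer never anticipated.
def plus_of_generated(a, b):
    return a + b

print(plus_of_recorded("Seven", "Six"))  # "Thirteen" -- and nothing else, ever
print(plus_of_generated(791, 308))       # 1099 -- no one recorded this in advance
```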

There are multiple sources supporting this misconception:

The stereotype “intelligence as book smarts”, where you memorize disconnected “facts” in class and repeat them back.

The idea that “machines do only what they are told to do”, which confuses the idea of a system whose abstract laws you designed, with your exerting moment-by-moment detailed control over the system’s output.

And various reductionist confusions—a computer is “mere transistors” or “only remixes what’s already there” (just as Shakespeare merely regurgitated what his teachers taught him: the alphabet of English letters—all his plays are merely that).

Since the workings of human intelligence are still to some extent unknown, and will seem very mysterious indeed to one who has not studied much cognitive science, it will seem impossible for such a person to imagine that a machine could contain the generators of knowledge.

The knowledge-generators and behavior-generators are black boxes, or even invisible background frameworks. So when the imagination is tasked with visualizing “Artificial Intelligence”, it shows only specific answers, specific beliefs, specific behaviors, impressed into a “machine” as if stamped into clay: the frozen outputs of human intelligence, divorced from their generator and incapable of change or improvement.

You can’t build Deep Blue by programming a good chess move for every possible position. First and foremost, you don’t know exactly which chess positions the AI will encounter. You would have to record a specific move for zillions of positions, more than you could consider in a lifetime with your slow neurons.

But worse, even if you could record and play back “good moves”, the resulting program would not play chess any better than you do. That is the peril of recording and playing back surface phenomena, rather than capturing the underlying generator.

If I want to create an AI that plays better chess than I do, I have to program a search for winning moves. I can’t program in specific moves because then the chess player really won’t be any better than I am. And indeed, this holds true on any level where an answer has to meet a sufficiently high standard. If you want any answer better than you could come up with yourself, you necessarily sacrifice your ability to predict the exact answer in advance—though not necessarily your ability to predict that the answer will be “good” according to a known criterion of goodness. “We never run a computer program unless we know an important fact about the output and we don’t know the output,” said Marcello Herreshoff.
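To illustrate what “program a search” means here, a minimal sketch of my own: plain negamax over a toy take-away game (take 1 to 3 stones; whoever takes the last stone wins), rather than anything resembling Deep Blue’s actual alpha-beta search and evaluation heuristics. The programmer writes down the rules and the criterion of winning, never the moves themselves:

```python
def negamax(stones):
    """Return (value, best_move) for the player to move: +1 = win, -1 = loss."""
    if stones == 0:
        return -1, None  # the previous player took the last stone; we have lost
    best_value, best_move = -2, None  # start below any achievable value
    for move in (1, 2, 3):
        if move <= stones:
            value = -negamax(stones - move)[0]  # the opponent's loss is our win
            if value > best_value:
                best_value, best_move = value, move
    return best_value, best_move

# No move was ever stored for the 9-stone position; the search finds one anyway:
print(negamax(9))  # (1, 1): taking 1 leaves 8, a losing position for the opponent
```

The same shape scales up: a chess program replaces the exhaustive recursion with depth limits, evaluation functions, and pruning, but the move still comes out of the search, not out of a table.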

Deep Blue played chess barely better than the world’s top humans, but a heck of a lot better than its own programmers. Deep Blue’s programmers had to know the rules of chess—since Deep Blue wasn’t enough of a general AI to learn the rules by observation—but the programmers didn’t play chess anywhere near as well as Kasparov, let alone Deep Blue.

Deep Blue’s programmers didn’t just capture their own chess-move generator. If they’d captured their own chess-move generator, they could have avoided the problem of programming moves for zillions of chess positions. But they couldn’t have beaten Kasparov; they couldn’t have built a program that played better chess than any human in the world.

The programmers built a better move generator—one that more powerfully steered the game toward the target of winning game positions. Deep Blue’s programmers surely had some slight ability to find chess moves that aimed at this same target, but their steering ability was much weaker than Deep Blue’s.

It is futile to protest that this is “paradoxical”, since it actually happened.

Equally “paradoxical”, but true, is that Garry Kasparov was not born with a complete library of chess moves programmed into his DNA. Kasparov invented his own moves; he was not explicitly preprogrammed by evolution to make particular moves—though natural selection did build a brain that could learn. And Deep Blue’s programmers invented Deep Blue’s code without evolution explicitly encoding Deep Blue’s code into their genes.

Steam shovels lift more weight than humans can heft, skyscrapers are taller than their human builders, humans play better chess than natural selection, and computer programs play better chess than humans. The creation can exceed the creator. It’s just a fact.

If you can understand steering-the-future, hitting-a-narrow-target as the work performed by intelligence—then, even without knowing exactly how the work gets done, it should become more imaginable that you could build something smarter than yourself.

By building something and then testing it? So that we can see that a design reaches the target faster or more reliably than our own moves, even if we don’t understand how? But that’s not how Deep Blue was actually built. You may recall the principle that just formulating a good hypothesis to test usually requires far more evidence than the final test that ‘verifies’ it—that Einstein, in order to invent General Relativity, must have already had in hand enough evidence to isolate that one hypothesis as worth testing.

Analogously, we can see that nearly all of the optimization power of human engineering must have already been exerted in coming up with good designs to test. The final selection on the basis of good results is only the icing on the cake. If you test four designs that seem like good ideas, and one of them works best, then at most 2 bits of optimization pressure can come from testing—the rest of it must be the abstract thought of the engineer.
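As a check on that ‘2 bits’ figure (my own worked example, assuming the candidate designs start out equally plausible): picking the single best of n tested designs can convey at most log2(n) bits of selection pressure, since that is all the information the test outcome carries about which design to keep.

```python
import math

# Bits of optimization obtainable purely from "test n designs, keep the best",
# assuming the designs were equally plausible before testing.
for n in (2, 4, 1024):
    print(f"{n} designs tested -> at most {math.log2(n):.1f} bits from testing")
# 4 designs -> at most 2.0 bits; everything beyond that was spent
# in choosing which four designs were worth testing at all.
```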

There are those who will see it as almost a religious principle that no one can possibly know that a design will work, no matter how good the argument, until it is actually tested. Just like the belief that no one can possibly accept a scientific theory until it is tested. But this is ultimately more of an injunction against human stupidity and overlooked flaws and optimism and self-deception and the like—so far as theoretical possibility goes, it is clearly possible to get a pretty damn good idea of which designs will work in advance of testing them.

And to say that humans are necessarily at least as good at chess as Deep Blue, since they built Deep Blue? Well, it’s an important fact that we built Deep Blue, but the claim is still a nitwit sophistry. You might as well say that proteins are as smart as humans, that natural selection reacts as fast as humans, or that the laws of physics play good chess.

If you carve up the universe along its joints, you will find that there are certain things, like butterflies and humans, that bear the very identifiable design signature and limitations of evolution; and certain other things, like nuclear power plants and computers, that bear the signature and the empirical design level of human intelligence. To describe the universe well, you will have to distinguish these signatures from each other, and have separate names for “human intelligence”, “evolution”, “proteins”, and “protons”, because even if these things are related they are not at all the same.