Artificial Mysterious Intelligence

Previously in series: Failure By Affective Analogy

I once had a conversation that I still remember for its sheer, purified archetypicality. This was a nontechnical guy, but pieces of this dialog have also appeared in conversations I’ve had with professional AIfolk...

Him: Oh, you’re working on AI! Are you using neural networks?

Me: I think emphatically not.

Him: But neural networks are so wonderful! They solve problems and we don’t have any idea how they do it!

Me: If you are ignorant of a phenomenon, that is a fact about your state of mind, not a fact about the phenomenon itself. Therefore your ignorance of how neural networks solve a specific problem cannot be responsible for making them work better.

Him: Huh?

Me: If you don’t know how your AI works, that is not good. It is bad.

Him: Well, intelligence is much too difficult for us to understand, so we need to find some way to build AI without understanding how it works.

Me: Look, even if you could do that, you wouldn’t be able to predict any kind of positive outcome from it. For all you knew, the AI would go out and slaughter orphans.

Him: Maybe we’ll build Artificial Intelligence by scanning the brain and building a neuron-by-neuron duplicate. Humans are the only systems we know are intelligent.

Me: It’s hard to build a flying machine if the only thing you understand about flight is that somehow birds magically fly. What you need is a concept of aerodynamic lift, so that you can see how something can fly even if it isn’t exactly like a bird.

Him: That’s too hard. We have to copy something that we know works.

Me: (reflectively) What do people find so unbearably awful about the prospect of having to finally break down and solve the bloody problem? Is it really that horrible?

Him: Wait… you’re saying you want to actually understand intelligence?

Me: Yeah.

Him: (aghast) Seriously?

Me: I don’t know everything I need to know about intelligence, but I’ve learned a hell of a lot. Enough to know what happens if I try to build AI while there are still gaps in my understanding.

Him: Understanding the problem is too hard. You’ll never do it.

That’s not just a difference of opinion you’re looking at; it’s a clash of cultures.

For a long time, many different parties and factions in AI, adhering to more than one ideology, have been trying to build AI without understanding intelligence. And their habits of thought have become ingrained in the field, and have even been transmitted to parts of the general public.

You may have heard proposals for building true AI which go something like this:

  1. Calculate how many operations the human brain performs every second. This is “the only amount of computing power that we know is actually sufficient for human-equivalent intelligence”. Raise enough venture capital to buy a supercomputer that performs an equivalent number of floating-point operations in one second. Use it to run the most advanced available neural network algorithms. (A back-of-envelope version of this calculation is sketched after this list.)

  2. The brain is huge and complex. When the Internet becomes sufficiently huge and complex, intelligence is bound to emerge from the Internet. (I get asked about this in 50% of my interviews.)

  3. Computers seem unintelligent because they lack common sense. Program a very large number of “common-sense facts” into a computer. Let it try to reason about the relations among these facts. Put a sufficiently huge quantity of knowledge into the machine, and intelligence will emerge from it.

  4. Neuroscience continues to advance at a steady rate. Eventually, super-MRI or brain sectioning and scanning will give us precise knowledge of the local characteristics of all human brain areas. So we’ll be able to build a duplicate of the human brain by duplicating the parts. “The human brain is the only example we have of intelligence.”

  5. Natural selection produced the human brain. It is “the only method that we know works for producing general intelligence”. So we’ll have to scrape up a really huge amount of computing power, and evolve AI.
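To make the flavor of proposal 1 concrete, here is a minimal back-of-envelope sketch of that calculation in Python. The constants are commonly cited order-of-magnitude assumptions of mine (neuron count, synapses per neuron, signaling rate), not figures given in this essay:

    # Rough sketch of proposal 1's arithmetic. All constants are illustrative,
    # commonly cited order-of-magnitude assumptions, not figures from this essay.
    NEURONS = 1e11              # ~100 billion neurons in a human brain (assumed)
    SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron (assumed)
    SIGNAL_RATE_HZ = 1e2        # ~100 signals per synapse per second (assumed)

    brain_ops_per_second = NEURONS * SYNAPSES_PER_NEURON * SIGNAL_RATE_HZ
    print(f"Estimated 'brain operations' per second: {brain_ops_per_second:.0e}")
    # -> roughly 1e17; the proposal then says: buy that many FLOPS and run
    #    "the most advanced available neural network algorithms" on them.

Notice that nothing in this arithmetic says anything about what the operations are supposed to compute, which is exactly the gap the proposal hopes to paper over.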

What do all these proposals have in common?

They are all ways to make yourself believe that you can build an Artificial Intelligence, even if you don’t understand exactly how intelligence works.

Now, such a belief is not necessarily false! Methods 4 and 5, if pursued long enough and with enough resources, will eventually work. (5 might require a computer the size of the Moon, but give it enough crunch and it will work, even if you have to simulate a quintillion planets and not just one...)

But regardless of whether any given method would work in principle, the unfortunate habits of thought will already begin to arise as soon as you start thinking of ways to create Artificial Intelligence without having to penetrate the mystery of intelligence.

I have already spoken of some of the hope-generating tricks that appear in the examples above. There is invoking similarity to humans, or using words that make you feel good. But really, a lot of the trick here just consists of imagining yourself hitting the AI problem with a really big rock.

I know someone who goes around insisting that AI will cost a quadrillion dollars, and as soon as we’re willing to spend a quadrillion dollars, we’ll have AI, and we couldn’t possibly get AI without spending a quadrillion dollars. “Quadrillion dollars” is his big rock, that he imagines hitting the problem with, even though he doesn’t quite understand it.

It often will not occur to people that the mystery of intelligence could be any more penetrable than it seems: By the power of the Mind Projection Fallacy, being ignorant of how intelligence works will make it seem like intelligence is inherently impenetrable and chaotic. They will think they possess a positive knowledge of intractability, rather than thinking, “I am ignorant.”

And the thing to remember is that, for decades on end, any professional in the field of AI who tried to build “real AI” had some reason for trying to do it without really understanding intelligence (various fake reductions aside).

The New Connectionists accused the Good-Old-Fashioned AI researchers of not being parallel enough, not being fuzzy enough, not being emergent enough. But they did not say, “There is too much you do not understand.”

The New Connectionists catalogued the flaws of GOFAI for years on end, with fiery castigation. But they couldn’t ever actually say: “How exactly are all these logical deductions going to produce ‘intelligence’, anyway? Can you walk me through the cognitive operations, step by step, which lead to that result? Can you explain ‘intelligence’ and how you plan to get it, without pointing to humans as an example?”

For they themselves would be subject to exactly the same criticism.

In the house of glass, somehow, no one ever gets around to talking about throwing stones.

To sustain a lie, you have to lie about all the other facts entangled with it, and also lie about the methods used to arrive at beliefs: The culture of Artificial Mysterious Intelligence has developed its own Dark Side Epistemology, complete with reasons why it’s actually wrong to try to understand intelligence.

Yet when you step back from the bustle of this moment’s history, and think about the long sweep of science—there was a time when stars were mysterious, when chemistry was mysterious, when life was mysterious. And in this era, much was attributed to black-box essences. And there were many hopes based on the similarity of one thing to another. To many, I’m sure, alchemy just seemed very difficult rather than even seeming mysterious; most alchemists probably did not go around thinking, “Look at how much I am disadvantaged by not knowing about the existence of chemistry! I must discover atoms and molecules as soon as possible!” They just memorized libraries of random things you could do with acid, and bemoaned how difficult it was to create the Philosopher’s Stone.

In the end, though, what happened is that scientists achieved insight, and then things got much easier to do. You also had a better idea of what you could or couldn’t do. The problem stopped being scary and confusing.

But you wouldn’t hear a New Connectionist say, “Hey, maybe all the failed promises of ‘logical AI’ were basically due to the fact that, in their epistemic condition, they had no right to expect their AIs to work in the first place, because they couldn’t actually have sketched out the link between their program’s operation and ‘intelligence’ in any more detail than a medieval alchemist trying to explain why a particular formula for the Philosopher’s Stone will yield gold.” It would be like the Pope attacking Islam on the basis that faith is not an adequate justification for asserting the existence of a deity.

Yet in fact, the promises did fail, and so we can conclude that the promisers overreached what they had a right to expect. The Way is not omnipotent, and a bounded rationalist cannot do all things. But even a bounded rationalist can aspire not to overpromise: to say you can do only that which you can do. So if we want to achieve that reliably, history shows that we should not accept certain kinds of hope. In the absence of insight, hopes tend to be unjustified because you lack the knowledge that would be needed to justify them.

We humans have a difficult time working in the absence of insight. It doesn’t reduce us all the way down to being as stupid as evolution. But it makes everything difficult and tedious and annoying.

If the prospect of having to finally break down and solve the bloody problem of intelligence seems scary, you underestimate the interminable hell of not solving it.