My Best and Worst Mistake

Yesterday I covered the young Eliezer’s affective death spiral around something that he called “intelligence”. Eliezer1996, or even Eliezer1999 for that matter, would have refused to try to put a mathematical definition on it—consciously, deliberately refused. Indeed, he would have been loath to put any definition on “intelligence” at all.

Why? Because there’s a standard bait-and-switch problem in AI, wherein you define “intelligence” to mean something like “logical reasoning” or “the ability to withdraw conclusions when they are no longer appropriate”, and then you build a cheap theorem-prover or an ad-hoc nonmonotonic reasoner, and then say, “Lo, I have implemented intelligence!” People came up with poor definitions of intelligence—focusing on correlates rather than cores—and then they chased the surface definition they had written down, forgetting about, you know, actual intelligence. It’s not like Eliezer1996 was out to build a career in Artificial Intelligence. He just wanted a mind that would actually be able to build nanotechnology. So he wasn’t tempted to redefine intelligence for the sake of puffing up a paper.

Looking back, it seems to me that quite a lot of my mistakes can be defined in terms of being pushed too far in the other direction by seeing someone else’s stupidity: Having seen attempts to define “intelligence” abused so often, I refused to define it at all. What if I said that intelligence was X, and it wasn’t really X? I knew in an intuitive sense what I was looking for—something powerful enough to take stars apart for raw material—and I didn’t want to fall into the trap of being distracted from that by definitions.

Similarly, having seen so many AI projects brought down by physics envy—trying to stick with simple and elegant math, and being constrained to toy systems as a result—I generalized that any math simple enough to be formalized in a neat equation was probably not going to work for, you know, real intelligence. “Except for Bayes’s Theorem,” Eliezer2000 added; which, depending on your viewpoint, either mitigates the totality of his offense, or shows that he should have suspected the entire generalization instead of trying to add a single exception.

If you’re wondering why Eliezer2000 thought such a thing—disbelieved in a math of intelligence—well, it’s hard for me to remember this far back. It certainly wasn’t that I ever disliked math. If I had to point out a root cause, it would be reading too few, too popular, and the wrong Artificial Intelligence books.

But then I didn’t think the answers were going to come from Artificial Intelligence; I had mostly written it off as a sick, dead field. So it’s no wonder that I spent too little time investigating it. I believed in the cliché about Artificial Intelligence overpromising. You can fit that into the pattern of “too far in the opposite direction”—the field hadn’t delivered on its promises, so I was ready to write it off. As a result, I didn’t investigate hard enough to find the math that wasn’t fake.

My youthful disbelief in a mathematics of general intelligence was simultaneously one of my all-time worst mistakes, and one of my all-time best mistakes.

Because I disbelieved that there could be any simple answers to intelligence, I went and I read up on cognitive psychology, functional neuroanatomy, computational neuroanatomy, evolutionary psychology, evolutionary biology, and more than one branch of Artificial Intelligence. When I had what seemed like simple bright ideas, I didn’t stop there, or rush off to try and implement them, because I knew that even if they were true, even if they were necessary, they wouldn’t be sufficient: intelligence wasn’t supposed to be simple, it wasn’t supposed to have an answer that fit on a T-shirt. It was supposed to be a big puzzle with lots of pieces; and when you found one piece, you didn’t run off holding it high in triumph; you kept on looking. Try to build a mind with a single missing piece, and it might be that nothing interesting would happen.

I was wrong in thinking that Artificial Intelligence, the academic field, was a desolate wasteland; and even wronger in thinking that there couldn’t be a math of intelligence. But I don’t regret studying e.g. functional neuroanatomy, even though I now think that an Artificial Intelligence should look nothing like a human brain. Studying neuroanatomy meant that I went in with the idea that if you broke up a mind into pieces, the pieces were things like “visual cortex” and “cerebellum”—rather than “stock-market trading module” or “commonsense reasoning module”, which is a standard wrong road in AI.

Studying fields like functional neuroanatomy and cognitive psychology gave me a very different idea of what minds had to look like than you would get from just reading AI books—even good AI books.

When you blank out all the wrong conclusions and wrong justifications, and just ask what that belief led the young Eliezer to actually do...

Then the belief that Artificial Intelligence was sick, and that the real answer would have to come from healthier fields outside, led him to study lots of cognitive sciences;

The belief that AI couldn’t have simple answers led him not to stop prematurely on one brilliant idea, and to accumulate lots of information;

The belief that you didn’t want to define intelligence led to a situation in which he studied the problem for years before he started to propose systematizations.

This is what I refer to when I say that this is one of my all-time best mistakes.

Looking back, years afterward, I drew a very strong moral, to this effect:

What you actually end up doing screens off the clever reason why you’re doing it.

Contrast amazing clever reasoning that leads you to study many sciences with amazing clever reasoning that says you don’t need to read all those books. Afterward, when your amazing clever reasoning turns out to have been stupid, you’ll have ended up in a much better position if your amazing clever reasoning was of the first type.
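(For readers who know “screens off” in its technical, Bayes-net sense, here is a minimal formal gloss, under the assumption that we model the action you take as A, the clever reason behind it as R, and the eventual outcome as O. The claim is that O is conditionally independent of R given A:

    P(O | A, R) = P(O | A)

However clever or stupid R was, once you condition on what A actually turned out to be, R tells you nothing further about how O comes out.)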

When I look back upon my past, I am struck by the number of semi-accidental successes, the number of times I did something right for the wrong reason. From your perspective, you should chalk this up to the anthropic principle: if I’d fallen into a true dead end, you probably wouldn’t be hearing from me on this blog. From my perspective it remains something of an embarrassment. My Traditional Rationalist upbringing provided a lot of directional bias to those “accidental successes”—biased me toward rationalizing reasons to study rather than not study, prevented me from getting completely lost, helped me recover from mistakes. Still, none of that was the right action for the right reason, and that’s a scary thing to see when you look back on your youthful history. One of my primary purposes in writing on Overcoming Bias is to leave a trail to where I ended up by accident—to obviate the role that luck played in my own forging as a rationalist.

So what makes this one of my all-time worst mistakes? Because sometimes “informal” is another way of saying “held to low standards”. I had amazing clever reasons why it was okay for me not to precisely define “intelligence”, and certain of my other terms as well: namely, other people had gone astray by trying to define it. This was a gate through which sloppy reasoning could enter.

So should I have jumped ahead and tried to forge an exact definition right away? No, all the reasons why I knew this was the wrong thing to do were correct; you can’t conjure the right definition out of thin air if your knowledge is not adequate.

You can’t get to the definition of fire if you don’t know about atoms and molecules; you’re better off saying “that orangey-bright thing”. And you do have to be able to talk about that orangey-bright stuff, even if you can’t say exactly what it is, to investigate fire. But these days I would say that all reasoning on that level is something that can’t be trusted—rather it’s something you do on the way to knowing better, but you don’t trust it, you don’t put your weight down on it, you don’t draw firm conclusions from it, no matter how inescapable the informal reasoning seems.

The young Eliezer put his weight down on the wrong floor tile—stepped onto a loaded trap. To be continued.