Say Not “Complexity”

Once upon a time . . .

This is a story from when I first met Marcello, with whom I would later work for a year on AI theory; but at this point I had not yet accepted him as my apprentice. I knew that he competed at the national level in mathematical and computing olympiads, which sufficed to attract my attention for a closer look; but I didn’t know yet if he could learn to think about AI.

I had asked Marcello to say how he thought an AI might discover how to solve a Rubik’s Cube. Not in a preprogrammed way, which is trivial, but rather how the AI itself might figure out the laws of the Rubik universe and reason out how to exploit them. How would an AI invent for itself the concept of an “operator,” or “macro,” which is the key to solving the Rubik’s Cube?

At some point in this discussion, Marcello said: “Well, I think the AI needs complexity to do X, and complexity to do Y—”

And I said, “Don’t say ‘complexity.’ ”

Marcello said, “Why not?”

I said, “Complexity should never be a goal in itself. You may need to use a particular algorithm that adds some amount of complexity, but complexity for the sake of complexity just makes things harder.” (I was thinking of all the people whom I had heard advocating that the Internet would “wake up” and become an AI when it became “sufficiently complex.”)

And Marcello said, “But there’s got to be some amount of complexity that does it.”

I closed my eyes briefly, and tried to think of how to explain it all in words. To me, saying “complexity” simply felt like the wrong move in the AI dance. No one can think fast enough to deliberate, in words, about each sentence of their stream of consciousness; for that would require an infinite recursion. We think in words, but our stream of consciousness is steered below the level of words, by the trained-in remnants of past insights and harsh experience . . .

I said, “Did you read ‘A Technical Explanation of Technical Explanation’?”1

“Yes,” said Marcello.

“Okay,” I said. “Saying ‘complexity’ doesn’t concentrate your probability mass.”

“Oh,” Marcello said, “like ‘emergence.’ Huh. So . . . now I’ve got to think about how X might actually happen . . .”

That was when I thought to myself, “Maybe this one is teachable.”

Complexity is not a useless concept. It has mathematical definitions attached to it, such as Kolmogorov complexity and Vapnik-Chervonenkis complexity. Even on an intuitive level, complexity is often worth thinking about—you have to judge the complexity of a hypothesis and decide if it’s “too complicated” given the supporting evidence, or look at a design and try to make it simpler.
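To make the mathematical sense of the word concrete: Kolmogorov complexity itself is uncomputable, but a rough, computable stand-in is the length of a string after running it through a general-purpose compressor. The sketch below is only an illustrative aside in Python, not part of the original discussion, and `description_length` is an invented helper name; it shows that a highly regular string admits a far shorter description than a random string of the same raw length.

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # Crude, computable upper bound on Kolmogorov complexity:
    # the size of the data after general-purpose compression.
    # (True Kolmogorov complexity is uncomputable; a compressor
    # only ever gives an upper bound.)
    return len(zlib.compress(data))

regular = b"ab" * 500        # 1,000 bytes with an obvious short description
random_ = os.urandom(1000)   # 1,000 bytes with (almost surely) no short description

print(description_length(regular))   # small: the pattern compresses away
print(description_length(random_))   # roughly 1,000 bytes, or slightly more
```

The point of such definitions is that “how complex is this?” cashes out as a question about description length, something you can measure and argue about, rather than a free-floating property you can invoke at will.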

But concepts are not useful or useless of themselves. Only usages are correct or incorrect. In the step Marcello was trying to take in the dance, he was trying to explain something for free, get something for nothing. It is an extremely common misstep, at least in my field. You can join a discussion on artificial general intelligence and watch people doing the same thing, left and right, over and over again—constantly skipping over things they don’t understand, without realizing that’s what they’re doing.

In an eyeblink it happens: putting a non-controlling causal node behind something mysterious, a causal node that feels like an explanation but isn’t. The mistake takes place below the level of words. It requires no special character flaw; it is how human beings think by default, how they have thought since ancient times.

What you must avoid is skipping over the mysterious part; you must linger at the mystery to confront it directly. There are many words that can skip over mysteries, and some of them would be legitimate in other contexts—“complexity,” for example. But the essential mistake is that skip-over, regardless of what causal node goes behind it. The skip-over is not a thought, but a microthought. You have to pay close attention to catch yourself at it. And when you train yourself to avoid skipping, it will become a matter of instinct, not verbal reasoning. You have to feel which parts of your map are still blank, and more importantly, pay attention to that feeling.

I suspect that in academia there is a huge pressure to sweep problems under the rug so that you can present a paper with the appearance of completeness. You’ll get more kudos for a seemingly complete model that includes some “emergent phenomena,” versus an explicitly incomplete map where the label says “I got no clue how this part works” or “then a miracle occurs.” A journal may not even accept the latter paper, since who knows but that the unknown steps are really where everything interesting happens?2

And if you’re working on a revolutionary AI startup, there is an even huger pressure to sweep problems under the rug; or you will have to admit to yourself that you don’t know how to build the right kind of AI yet, and your current life plans will come crashing down in ruins around your ears. But perhaps I am over-explaining, since skip-over happens by default in humans. If you’re looking for examples, just watch people discussing religion or philosophy or spirituality or any science in which they were not professionally trained.

Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic”—as in, “X magically does Y”—to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic” than “complexity” or “emergence”; the latter words create an illusion of understanding. Wiser to say “magic,” and leave yourself a placeholder, a reminder of work you will have to do later.

1 Link: http://lesswrong.com/rationality/a-technical-explanation-of-technical-explanation.

2 And yes, it sometimes happens that all the non-magical parts of your map turn out to also be non-important. That’s the price you sometimes pay, for entering into terra incognita and trying to solve problems incrementally. But that makes it even more important to know when you aren’t finished yet. Mostly, people don’t dare to enter terra incognita at all, for the deadly fear of wasting their time.