Artificial Mysterious Intelligence

Previously in series: Failure By Affective Analogy

I once had a conversation that I still remember for its sheer, purified archetypicality. This was a nontechnical guy, but pieces of this dialog have also appeared in conversations I’ve had with professional AIfolk...

Him: Oh, you’re working on AI! Are you using neural networks?

Me: I think emphatically not.

Him: But neural networks are so wonderful! They solve problems and we don’t have any idea how they do it!

Me: If you are ignorant of a phenomenon, that is a fact about your state of mind, not a fact about the phenomenon itself. Therefore your ignorance of how neural networks are solving a specific problem cannot be responsible for making them work better.

Him: Huh?

Me: If you don’t know how your AI works, that is not good. It is bad.

Him: Well, intelligence is much too difficult for us to understand, so we need to find some way to build AI without understanding how it works.

Me: Look, even if you could do that, you wouldn’t be able to predict any kind of positive outcome from it. For all you knew, the AI would go out and slaughter orphans.

Him: Maybe we’ll build Artificial Intelligence by scanning the brain and building a neuron-by-neuron duplicate. Humans are the only systems we know are intelligent.

Me: It’s hard to build a flying machine if the only thing you understand about flight is that somehow birds magically fly. What you need is a concept of aerodynamic lift, so that you can see how something can fly even if it isn’t exactly like a bird.

Him: That’s too hard. We have to copy something that we know works.

Me: (reflectively) What do people find so unbearably awful about the prospect of having to finally break down and solve the bloody problem? Is it really that horrible?

Him: Wait… you’re saying you want to actually understand intelligence?

Me: Yeah.

Him: (aghast) Seriously?

Me: I don’t know everything I need to know about intelligence, but I’ve learned a hell of a lot. Enough to know what happens if I try to build AI while there are still gaps in my understanding.

Him: Understanding the problem is too hard. You’ll never do it.

That’s not just a difference of opinion you’re looking at, it’s a clash of cultures.

For a long time, many different parties and factions in AI, adherent to more than one ideology, have been trying to build AI without understanding intelligence. And their habits of thought have become ingrained in the field, and even transmitted to parts of the general public.

You may have heard proposals for building true AI which go something like this:

  1. Calculate how many operations the human brain performs every second. This is “the only amount of computing power that we know is actually sufficient for human-equivalent intelligence”. Raise enough venture capital to buy a supercomputer that performs an equivalent number of floating-point operations in one second. Use it to run the most advanced available neural network algorithms.

  2. The brain is huge and complex. When the Internet becomes sufficiently huge and complex, intelligence is bound to emerge from the Internet. (I get asked about this in 50% of my interviews.)

  3. Computers seem unintelligent because they lack common sense. Program a very large number of “common-sense facts” into a computer. Let it try to reason about the relation of these facts. Put a sufficiently huge quantity of knowledge into the machine, and intelligence will emerge from it.

  4. Neuroscience continues to advance at a steady rate. Eventually, super-MRI or brain sectioning and scanning will give us precise knowledge of the local characteristics of all human brain areas. So we’ll be able to build a duplicate of the human brain by duplicating the parts. “The human brain is the only example we have of intelligence.”

  5. Natural selection produced the human brain. It is “the only method that we know works for producing general intelligence”. So we’ll have to scrape up a really huge amount of computing power, and evolve AI.
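For the curious, the arithmetic behind proposal 1 can be sketched in a few lines. The figures below are illustrative round numbers of the kind commonly cited in such estimates, not numbers from this essay; the point of the sketch is that the calculation is trivially easy, which is exactly why it makes such a tempting big rock:

```python
# Back-of-envelope estimate of "brain operations per second",
# using commonly cited round figures (assumptions, not measurements):
neurons = 1e11              # rough count of neurons in a human brain
synapses_per_neuron = 1e4   # rough average number of synapses per neuron
signals_per_second = 100    # rough firing rate, in Hz

# Count each synaptic event as one "operation":
ops_per_second = neurons * synapses_per_neuron * signals_per_second

print(f"{ops_per_second:.0e} operations per second")
# prints "1e+17 operations per second"
```

Note what the number does not tell you: which operations to perform. Buying 10^17 floating-point operations per second buys you the rock, not the knowledge of where to hit.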

What do all these proposals have in common?

They are all ways to make yourself believe that you can build an Artificial Intelligence, even if you don’t understand exactly how intelligence works.

Now, such a belief is not necessarily false! Methods 4 and 5, if pursued long enough and with enough resources, will eventually work. (5 might require a computer the size of the Moon, but give it enough crunch and it will work, even if you have to simulate a quintillion planets and not just one...)

But regardless of whether any given method would work in principle, the unfortunate habits of thought will already begin to arise, as soon as you start thinking of ways to create Artificial Intelligence without having to penetrate the mystery of intelligence.

I have already spoken of some of the hope-generating tricks that appear in the examples above. There is invoking similarity to humans, or using words that make you feel good. But really, a lot of the trick here just consists of imagining yourself hitting the AI problem with a really big rock.

I know someone who goes around insisting that AI will cost a quadrillion dollars, and as soon as we’re willing to spend a quadrillion dollars, we’ll have AI, and we couldn’t possibly get AI without spending a quadrillion dollars. “Quadrillion dollars” is his big rock, the one he imagines hitting the problem with, even though he doesn’t quite understand it.

It often will not occur to people that the mystery of intelligence could be any more penetrable than it seems: By the power of the Mind Projection Fallacy, being ignorant of how intelligence works will make it seem like intelligence is inherently impenetrable and chaotic. They will think they possess a positive knowledge of intractability, rather than thinking, “I am ignorant.”

And the thing to remember is that, for these last decades on end, any professional in the field of AI trying to build “real AI” had some reason for trying to do it without really understanding intelligence (various fake reductions aside).

The New Connectionists accused the Good-Old-Fashioned AI researchers of not being parallel enough, not being fuzzy enough, not being emergent enough. But they did not say, “There is too much you do not understand.”

The New Connectionists catalogued the flaws of GOFAI for years on end, with fiery castigation. But they couldn’t ever actually say: “How exactly are all these logical deductions going to produce ‘intelligence’, anyway? Can you walk me through the cognitive operations, step by step, which lead to that result? Can you explain ‘intelligence’ and how you plan to get it, without pointing to humans as an example?”

For they themselves would be subject to exactly the same criticism.

In the house of glass, somehow, no one ever gets around to talking about throwing stones.

To tell a lie, you have to lie about all the other facts entangled with that fact, and also lie about the methods used to arrive at beliefs: The culture of Artificial Mysterious Intelligence has developed its own Dark Side Epistemology, complete with reasons why it’s actually wrong to try and understand intelligence.

Yet when you step back from the bustle of this moment’s history, and think about the long sweep of science—there was a time when stars were mysterious, when chemistry was mysterious, when life was mysterious. And in this era, much was attributed to black-box essences. And there were many hopes based on the similarity of one thing to another. To many, I’m sure, alchemy just seemed very difficult rather than even seeming mysterious; most alchemists probably did not go around thinking, “Look at how much I am disadvantaged by not knowing about the existence of chemistry! I must discover atoms and molecules as soon as possible!” They just memorized libraries of random things you could do with acid, and bemoaned how difficult it was to create the Philosopher’s Stone.

In the end, though, what happened is that scientists achieved insight, and then things got much easier to do. You also had a better idea of what you could or couldn’t do. The problem stopped being scary and confusing.

But you wouldn’t hear a New Connectionist say, “Hey, maybe all the failed promises of ‘logical AI’ were basically due to the fact that, in their epistemic condition, they had no right to expect their AIs to work in the first place, because they couldn’t actually have sketched out the link in any more detail than a medieval alchemist trying to explain why a particular formula for the Philosopher’s Stone will yield gold.” It would be like the Pope attacking Islam on the basis that faith is not an adequate justification for asserting the existence of their deity.

Yet in fact, the promises did fail, and so we can conclude that the promisers overreached what they had a right to expect. The Way is not omnipotent, and a bounded rationalist cannot do all things. But even a bounded rationalist can aspire not to overpromise—to say you can do only that which you can do. So if we want to achieve that reliably, history shows that we should not accept certain kinds of hope. In the absence of insight, hopes tend to be unjustified because you lack the knowledge that would be needed to justify them.

We humans have a difficult time working in the absence of insight. It doesn’t reduce us all the way down to being as stupid as evolution. But it makes everything difficult and tedious and annoying.

If the prospect of having to finally break down and solve the bloody problem of intelligence seems scary, you underestimate the interminable hell of not solving it.