Dreams of AI Design

After spending a decade or two living inside a mind, you might think you knew a bit about how minds work, right? That’s what quite a few AGI wannabes (people who think they’ve got what it takes to program an Artificial General Intelligence) seem to have concluded. This, unfortunately, is wrong.

Artificial Intelligence is fundamentally about reducing the mental to the non-mental.

You might want to contemplate that sentence for a while. It’s important.

Living inside a human mind doesn’t teach you the art of reductionism, because nearly all of the work is carried out beneath your sight, by the opaque black boxes of the brain. So far beneath your sight that there is no introspective sense that the black box is there—no internal sensory event marking that the work has been delegated.

Did Aristotle realize, when he talked about the telos, the final cause of events, that he was delegating predictive labor to his brain’s complicated planning mechanisms—asking, “What would this object do, if it could make plans?” I rather doubt it. Aristotle thought the brain was an organ for cooling the blood—which he did think was important: humans, thanks to their larger brains, were more calm and contemplative.

So there’s an AI design for you! We just need to cool down the computer a lot, so it will be more calm and contemplative, and won’t rush headlong into doing stupid things like modern computers. That’s an example of fake reductionism. “Humans are more contemplative because their blood is cooler,” I mean. It doesn’t resolve the black box of the word contemplative. You can’t predict what a contemplative thing does using a complicated model with internal moving parts composed of merely material, merely causal elements—positive and negative voltages on a transistor being the canonical example of a merely material and causal element of a model. All you can do is imagine yourself being contemplative, to get an idea of what a contemplative agent does.

Which is to say that you can only reason about “contemplative-ness” by empathic inference—using your own brain as a black box with the contemplativeness lever pulled, to predict the output of another black box.

You can imagine another agent being contemplative, but again that’s an act of empathic inference—the way this imaginative act works is by adjusting your own brain to run in contemplativeness-mode, not by modeling the other brain neuron by neuron. Yes, that may be far more efficient than a neuron-by-neuron model, but it doesn’t let you build a “contemplative” mind from scratch.

You can say that “cold blood causes contemplativeness” and then you just have fake causality: You’ve drawn a little arrow from a box reading “cold blood” to a box reading “contemplativeness,” but you haven’t looked inside the box—you’re still generating your predictions using empathy.

You can say that “lots of little neurons, which are all strictly electrical and chemical with no ontologically basic contemplativeness in them, combine into a complex network that emergently exhibits contemplativeness.” And that is still a fake reduction and you still haven’t looked inside the black box. You still can’t say what a “contemplative” thing will do, using a non-empathic model. You just took a box labeled “lotsa neurons,” and drew an arrow labeled “emergence” to a black box containing your remembered sensation of contemplativeness, which, when you imagine it, tells your brain to empathize with the box by contemplating.

So what do real reductions look like?

Like the relationship between the feeling of evidence-ness, of justification-ness, and E. T. Jaynes’s Probability Theory: The Logic of Science. You can go around in circles all day, saying how the nature of evidence is that it justifies some proposition, by meaning that it’s more likely to be true, but all of these just invoke your brain’s internal feelings of evidence-ness, justifies-ness, likeliness. That part is easy—the going around in circles part. The part where you go from there to Bayes’s Theorem is hard.

And the fundamental mental ability that lets someone learn Artificial Intelligence is the ability to tell the difference. So that you know you aren’t done yet, nor even really started, when you say, “Evidence is when an observation justifies a belief.” But atoms are not evidential, justifying, meaningful, likely, propositional, or true; they are just atoms. Only things like P(H|E) = P(E|H)P(H)/P(E) count as substantial progress. (And that’s only the first step of the reduction: what are these E and H objects, if not mysterious black boxes? Where do your hypotheses come from? From your creativity? And what’s a hypothesis, when no atom is a hypothesis?)
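
To make the contrast concrete, here is a minimal sketch (with made-up numbers; the `posterior` function and the disease-test scenario are illustrative assumptions, not anything from Jaynes) of what the non-empathic version of “E justifies H” looks like: a posterior computed from a prior and two likelihoods, with no appeal to any internal feeling of justifies-ness.

```python
# A sketch of Bayes's Theorem as a non-empathic account of "E justifies H".
# The numbers are made up for illustration.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) = P(E|H) P(H) / P(E), expanding P(E) over H and not-H."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothetical example: H = "patient has the disease", E = "test comes back positive".
print(posterior(prior_h=0.01, p_e_given_h=0.95, p_e_given_not_h=0.05))
# ~0.16: the observation raises P(H) from 1% to about 16%, a numerical update
# rather than a felt sense of "justification".
```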

Another excellent example of genuine reduction can be found in Judea Pearl’s Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.[1] You could go around all day in circles talking about how a cause is something that makes something else happen, and until you understood the nature of conditional independence, you would be helpless to make an AI that reasons about causation. Because you wouldn’t understand what was happening when your brain mysteriously decided that if you learned your burglar alarm went off, but you then learned that a small earthquake took place, you would retract your initial conclusion that your house had been burglarized.
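
For concreteness, here is a small sketch of that burglar-alarm intuition as a three-variable network, with textbook-style numbers that are illustrative assumptions. It answers queries by brute-force enumeration of the joint distribution rather than by Pearl’s message-passing algorithms, but it reproduces the “explaining away” effect: learning about the earthquake lowers the probability of burglary given the alarm.

```python
from itertools import product

# Burglary and Earthquake are independent causes of Alarm (illustrative numbers).
P_B, P_E = 0.001, 0.002                            # P(burglary), P(earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,    # P(alarm | burglary, earthquake)
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """Joint probability of one full assignment to (Burglary, Earthquake, Alarm)."""
    p_a = P_A[(b, e)]
    return ((P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
            * (p_a if a else 1 - p_a))

def prob(query, **evidence):
    """P(query = True | evidence), by summing over the full joint distribution."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"B": b, "E": e, "A": a}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(b, e, a)
        den += p
        if world[query]:
            num += p
    return num / den

print(prob("B", A=True))          # ~0.37: the alarm alone makes burglary plausible
print(prob("B", A=True, E=True))  # ~0.003: the earthquake explains the alarm away
```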

If you want an AI that plays chess, you can go around in circles indefinitely talking about how you want the AI to make good moves, which are moves that can be expected to win the game, which are moves that are prudent strategies for defeating the opponent, et cetera; and while you may then have some idea of which moves you want the AI to make, it’s all for naught until you come up with the notion of a mini-max search tree.
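
Here, for illustration, is a minimal minimax sketch over an explicit toy game tree rather than actual chess; the tree structure and leaf values are made up, and there is no board representation or pruning. The point is only that “good move” gets cashed out as a recursive computation over successor positions.

```python
# Minimax over an explicit toy game tree (not a chess engine): the value of a
# position is defined by the values of its successors, bottoming out at leaves.

def minimax(node, maximizing):
    children = node.get("children")
    if not children:                  # leaf node: return its static evaluation
        return node["value"]
    values = [minimax(child, not maximizing) for child in children]
    return max(values) if maximizing else min(values)

# A tiny made-up tree: the maximizing player prefers the branch whose
# worst-case (opponent-minimized) outcome is best.
tree = {"children": [
    {"children": [{"value": 3}, {"value": 12}]},   # left branch: opponent picks 3
    {"children": [{"value": 2}, {"value": 8}]},    # right branch: opponent picks 2
]}
print(minimax(tree, maximizing=True))  # 3
```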

But until you know about search trees, until you know about conditional independence, until you know about Bayes’s Theorem, then it may still seem to you that you have a perfectly good understanding of where good moves and nonmonotonic reasoning and evaluation of evidence come from. It may seem, for example, that they come from cooling the blood.

And indeed I know many people who believe that intelligence is the product of commonsense knowledge or massive parallelism or creative destruction or intuitive rather than rational reasoning, or whatever. But all these are only dreams, which do not give you any way to say what intelligence is, or what an intelligence will do next, except by pointing at a human. And when the one goes to build their wondrous AI, they only build a system of detached levers, “knowledge” consisting of LISP tokens labeled apple and the like; or perhaps they build a “massively parallel neural net, just like the human brain.” And are shocked—shocked!—when nothing much happens.
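
As a toy illustration of what “detached levers” means in practice, consider the following sketch (the `knowledge_base` dictionary is hypothetical): the token is spelled “apple,” but nothing the program can do depends on that spelling.

```python
# A hypothetical "knowledge base" of suggestively named tokens. The program can
# check whether the string "apple" is present, but nothing here connects the
# token to recognizing, predicting, or doing anything about actual apples.
knowledge_base = {"apple": {"is_a": "fruit", "color": "red"}}

def knows_about(symbol):
    return symbol in knowledge_base

print(knows_about("apple"))  # True, but only in the sense that the string matches;
                             # renaming the key to "g0027" would lose nothing.
```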

AI designs made of human parts are only dreams; they can exist in the imagination, but not translate into transistors. This applies specifically to “AI designs” that look like boxes with arrows between them and meaningful-sounding labels on the boxes. (For a truly epic example thereof, see any Mentifex Diagram.)

Later I will say more upon this subject, but I can go ahead and tell you one of the guiding principles: If you meet someone who says that their AI will do XYZ just like humans, do not give them any venture capital. Say to them rather: “I’m sorry, I’ve never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies.

So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions.


[1] Pearl, Probabilistic Reasoning in Intelligent Systems.