Above-Average AI Scientists

Followup to: The Level Above Mine, Competent Elites

(Those who didn’t like the last two posts should definitely skip this one.)

I recall one fellow, who seemed like a nice person, and who was quite eager to get started on Friendly AI work, to whom I had trouble explaining that he didn’t have a hope. He said to me:

“If someone with a Masters in chemistry isn’t intelligent enough, then you’re not going to have much luck finding someone to help you.”

It’s hard to distinguish the grades above your own. And even if you’re literally the best in the world, there are still electron orbitals above yours—they’re just unoccupied. Someone had to be “the best physicist in the world” during the time of Ancient Greece. Would they have been able to visualize Newton?

At one of the first conferences organized around the tiny little subfield of Artificial General Intelligence, I met someone who was heading up a funded research project specifically declaring AGI as a goal, within a major corporation. I believe he had people under him on his project. He was probably paid at least three times as much as I was paid (at that time). His academic credentials were superior to mine (what a surprise) and he had many more years of experience. He had access to lots and lots of computing power.

And like nearly everyone in the field of AGI, he was rushing forward to write code immediately—not holding off and searching for a sufficiently precise theory to permit stable self-improvement.

In short, he was just the sort of fellow that… Well, many people, when they hear about Friendly AI, say: “Oh, it doesn’t matter what you do, because [someone like this guy] will create AI first.” He’s the sort of person about whom journalists ask me, “You say that this isn’t the time to be talking about regulation, but don’t we need laws to stop people like this from creating AI?”

“I suppose,” you say, your voice heavy with irony, “that you’re about to tell us, that this person doesn’t really have so much of an advantage over you as it might seem. Because your theory—whenever you actually come up with a theory—is going to be so much better than his. Or,” your voice becoming even more ironic, “that he’s too mired in boring mainstream methodology—”

No. I’m about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...

...a creationist.

This was the point at which I really got, on a gut level, that there was no test you needed to pass in order to start your own AGI project.

One of the failure modes I’ve come to better understand in myself since observing it in others, is what I call, “living in the should-universe”. The universe where everything works the way it common-sensically ought to, as opposed to the actual is-universe we live in. There’s more than one way to live in the should-universe, and outright delusional optimism is only the least subtle. Treating the should-universe as your point of departure—describing the real universe as the should-universe plus a diff—can also be dangerous.

Up until the moment when yonder AGI researcher explained to me that he didn’t believe in evolution because that’s not what the Bible said, I’d been living in the should-universe. In the sense that I was organizing my understanding of other AGI researchers as should-plus-diff. I saw them, not as themselves, not as their probable causal histories, but as their departures from what I thought they should be.

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond “audacity” as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.

It had occurred to me well before this point, that most of those who proclaimed themselves to have AGI projects, were not only failing to be what an AGI researcher should be, but in fact, didn’t seem to have any such dream to live up to.

But that was just my living in the should-universe. It was the creationist who broke me of that. My mind finally gave up on constructing the diff.

When Scott Aaronson was 12 years old, he: “set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov’s Three Laws of Robotics. I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user’s question and output an intelligent, Three-Laws-obeying response.” It would be pointless to try and construct a diff between Aaronson12 and what an AGI researcher should be. You’ve got to explain Aaronson12 in forward-extrapolation mode: He thought it would be cool to make an AI and didn’t quite understand why the problem was difficult.

It was yonder creationist who let me see AGI researchers for themselves, and not as departures from my ideal.

A creationist AGI researcher? Why not? Sure, you can’t really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist. But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present. Most scientists have the meme that a scientist’s religion doesn’t have anything to do with their research. Someone who thinks that it would be cool to solve the “human-level” AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn’t going to stop and say: “Wait! I’m a creationist! I guess that would make it pretty silly for me to try and build an AGI.”

The creationist is only an extreme example. A much larger fraction of AGI wannabes would speak with reverence of the “spiritual” and the possibility of various fundamental mentals. If someone lacks the whole cognitive edifice of reducing mental events to nonmental constituents, the edifice that decisively indicts the entire supernatural, then of course they’re not likely to be expert on cognition to the degree that would be required to synthesize true AGI. But neither are they likely to have any particular idea that they’re missing something. They’re just going with the flow of the memetic water in which they swim. They’ve got friends who talk about spirituality, and it sounds pretty appealing to them. They know that Artificial General Intelligence is a big important problem in their field, worth lots of applause if they can solve it. They wouldn’t see anything incongruous about an AGI researcher talking about the possibility of psychic powers or Buddhist reincarnation. That’s a separate matter, isn’t it?

(Someone in the audience is bound to observe that Newton was a Christian. I reply that Newton didn’t have such a difficult problem, since he only had to invent first-year undergraduate stuff. The two observations are around equally sensible; if you’re going to be anachronistic, you should be anachronistic on both sides of the equation.)

But that’s still all just should-universe thinking.

That’s still just describing people in terms of what they aren’t.

Real people are not formed of absences. Only people who have an ideal can be described as a departure from it, the way that I see myself as a departure from what an Eliezer Yudkowsky should be.

The really striking fact about the researchers who show up at AGI conferences, is that they’re so… I don’t know how else to put it...

...ordinary.

Not at the intellectual level of the big mainstream names in Artificial Intelligence. Not at the level of John McCarthy or Peter Norvig (whom I’ve both met).

More like… around, say, the level of above-average scientists, which I yesterday compared to the level of partners at a non-big-name venture capital firm. Some of whom might well be Christians, or even creationists if they don’t work in evolutionary biology.

The attendees at AGI conferences aren’t literally average mortals, or even average scientists. The average attendee at an AGI conference is visibly one level up from the average attendee at that random mainstream AI conference I talked about yesterday.

Of course there are exceptions. The last AGI conference I went to, I encountered one bright young fellow who was fast, intelligent, and spoke fluent Bayesian. Admittedly, he didn’t actually work in AGI as such. He worked at a hedge fund.

No, seriously, there are exceptions. Steve Omohundro is one example of someone who—well, I’m not exactly sure of his level, but I don’t get any particular sense that he’s below Peter Norvig or John McCarthy.

But even if you just poke around on Norvig or McCarthy’s website, and you’ve achieved sufficient level yourself to discriminate what you see, you’ll get a sense of a formidable mind. Not in terms of accomplishments—that’s not a fair comparison with someone younger or tackling a more difficult problem—but just in terms of the way they talk. If you then look at the website of a typical AGI-seeker, even one heading up their own project, you won’t get an equivalent sense of formidability.

Unfortunately, that kind of eyeball comparison does require that one be of sufficient level to distinguish those levels. It’s easy to sympathize with people who can’t eyeball the difference: If anyone with a PhD seems really bright to you, or any professor at a university is someone to respect, then you’re not going to be able to eyeball the tiny academic subfield of AGI and determine that most of the inhabitants are above-average scientists for mainstream AI, but below the intellectual firepower of the top names in mainstream AI.

But why would that happen? Wouldn’t the AGI people be humanity’s best and brightest, answering the greatest need? Or at least those daring souls for whom mainstream AI was not enough, who sought to challenge their wits against the greatest reservoir of chaos left to modern science?

If you forget the should-universe, and think of the selection effect in the is-universe, it’s not difficult to understand. Today, AGI attracts people who fail to comprehend the difficulty of AGI. Back in the earliest days, a bright mind like John McCarthy would tackle AGI because no one knew the problem was difficult. In time and with regret, he realized he couldn’t do it. Today, someone on the level of Peter Norvig knows their own competencies, what they can do and what they can’t; and they go on to achieve fame and fortune (and Research Directorship of Google) within mainstream AI.

And then...

Then there are the completely hopeless ordinary programmers who wander onto the AGI mailing list wanting to build a really big semantic net.

Or the postdocs moved by some (non-Singularity) dream of themselves presenting the first “human-level” AI to the world, who also dream an AI design, and can’t let go of that.

Just normal people with no notion that it’s wrong for an AGI researcher to be normal.

Indeed, like most normal people who don’t spend their lives making a desperate effort to reach up toward an impossible ideal, they will be offended if you suggest to them that someone in their position needs to be a little less imperfect.

This misled the living daylights out of me when I was young, because I compared myself to other people who declared their intentions to build AGI, and ended up way too impressed with myself; when I should have been comparing myself to Peter Norvig, or reaching up toward E. T. Jaynes. (For I did not then perceive the sheer, blank, towering wall of Nature.)

I don’t mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other’s partial successes and accumulate hacks as a community.

And that’s why I’m discussing all this—because it is a fact without which it is not possible to understand the overall strategic situation in which humanity finds itself, the present state of the gameboard. It is, for example, the reason why I don’t panic when yet another AGI project announces they’re going to have general intelligence in five years. It also says that you can’t necessarily extrapolate the FAI-theory comprehension of future researchers from present researchers, if a breakthrough occurs that repopulates the field with Norvig-class minds.

Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them. And the mob, if it retains its lucky successes and shares them, may also cough up an Artificial Intelligence, with around the same degree of precise control. But it is only the collective that I worry about as dangerous—the individuals don’t seem that formidable.

If you yourself speak fluent Bayesian, and you distinguish a person-concerned-with-AGI as speaking fluent Bayesian, then you should consider that person as excepted from this whole discussion.

Of course, among people who declare that they want to solve the AGI problem, the supermajority don’t speak fluent Bayesian.

Why would they? Most people don’t.

Part of the sequence Yudkowsky’s Coming of Age

Next post: “The Magnitude of His Own Folly”

Previous post: “Competent Elites”