Above-Average AI Scientists

Followup to: The Level Above Mine, Competent Elites

(Those who didn’t like the last two posts should definitely skip this one.)

I recall one fellow, who seemed like a nice person, and who was quite eager to get started on Friendly AI work, to whom I had trouble explaining that he didn’t have a hope. He said to me:

“If someone with a Masters in chemistry isn’t intelligent enough, then you’re not going to have much luck finding someone to help you.”

It’s hard to distinguish the grades above your own. And even if you’re literally the best in the world, there are still electron orbitals above yours—they’re just unoccupied. Someone had to be “the best physicist in the world” during the time of Ancient Greece. Would they have been able to visualize Newton?

At one of the first conferences organized around the tiny little subfield of Artificial General Intelligence, I met someone who was heading up a funded research project specifically declaring AGI as a goal, within a major corporation. I believe he had people under him on his project. He was probably paid at least three times as much as I was paid (at that time). His academic credentials were superior to mine (what a surprise) and he had many more years of experience. He had access to lots and lots of computing power.

And like nearly everyone in the field of AGI, he was rushing forward to write code immediately—not holding off and searching for a sufficiently precise theory to permit stable self-improvement.

In short, he was just the sort of fellow that… Well, many people, when they hear about Friendly AI, say: “Oh, it doesn’t matter what you do, because [someone like this guy] will create AI first.” He’s the sort of person about whom journalists ask me, “You say that this isn’t the time to be talking about regulation, but don’t we need laws to stop people like this from creating AI?”

“I suppose,” you say, your voice heavy with irony, “that you’re about to tell us, that this person doesn’t really have so much of an advantage over you as it might seem. Because your theory—whenever you actually come up with a theory—is going to be so much better than his. Or,” your voice becoming even more ironic, “that he’s too mired in boring mainstream methodology—”

No. I’m about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...

...a creationist.

This was the point at which I really got, on a gut level, that there was no test you needed to pass in order to start your own AGI project.

One of the failure modes I’ve come to better understand in myself since observing it in others, is what I call, “living in the should-universe”. The universe where everything works the way it common-sensically ought to, as opposed to the actual is-universe we live in. There’s more than one way to live in the should-universe, and outright delusional optimism is only the least subtle. Treating the should-universe as your point of departure—describing the real universe as the should-universe plus a diff—can also be dangerous.

Up until the moment when yonder AGI researcher explained to me that he didn’t believe in evolution because that’s not what the Bible said, I’d been living in the should-universe. In the sense that I was organizing my understanding of other AGI researchers as should-plus-diff. I saw them, not as themselves, not as their probable causal histories, but as their departures from what I thought they should be.

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond “audacity” as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.

It had occurred to me well before this point, that most of those who proclaimed themselves to have AGI projects, were not only failing to be what an AGI researcher should be, but in fact, didn’t seem to have any such dream to live up to.

But that was just my living in the should-universe. It was the creationist who broke me of that. My mind finally gave up on constructing the diff.

When Scott Aaronson was 12 years old, he: “set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov’s Three Laws of Robotics. I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user’s question and output an intelligent, Three-Laws-obeying response.” It would be pointless to try and construct a diff between Aaronson12 and what an AGI researcher should be. You’ve got to explain Aaronson12 in forward-extrapolation mode: He thought it would be cool to make an AI and didn’t quite understand why the problem was difficult.
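(For concreteness, here is a minimal sketch of the shape of such a program, in Python rather than BASIC and with invented names; it is an illustration of the anecdote, not Aaronson’s actual code. The tokenizer and the chat loop are the easy part to type in; the entire problem hides inside one innocuous-looking stub.)

```python
# Illustrative only: the shape of "a BASIC program that would pass the
# Turing Test", translated to Python with hypothetical names.

def tokenize(text: str) -> list[str]:
    """The 'really nice tokenizer': strip punctuation, lowercase, split."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in text)
    return cleaned.lower().split()

def understand_and_respond(tokens: list[str]) -> str:
    """The subroutine that was supposed to understand the user's question
    and output an intelligent, Three-Laws-obeying response; this is where
    the project got stuck."""
    raise NotImplementedError("general intelligence goes here")

def chat() -> None:
    """The user interface: read a line, tokenize it, try to 'understand' it."""
    while True:
        line = input("> ")
        if line.strip().lower() == "quit":
            break
        print(understand_and_respond(tokenize(line)))

if __name__ == "__main__":
    chat()
```

Everything outside understand_and_respond runs fine; the very first question you type hits the NotImplementedError, which is roughly the experience being described.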

It was yonder creationist who let me see AGI researchers for themselves, and not as departures from my ideal.

A creationist AGI researcher? Why not? Sure, you can’t really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist. But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present. Most scientists have the meme that a scientist’s religion doesn’t have anything to do with their research. Someone who thinks that it would be cool to solve the “human-level” AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn’t going to stop and say: “Wait! I’m a creationist! I guess that would make it pretty silly for me to try and build an AGI.”

The creationist is only an extreme example. A much larger fraction of AGI wannabes would speak with reverence of the “spiritual” and the possibility of various fundamentally mental entities. If someone lacks the whole cognitive edifice of reducing mental events to nonmental constituents, the edifice that decisively indicts the entire supernatural, then of course they’re not likely to be expert on cognition to the degree that would be required to synthesize true AGI. But neither are they likely to have any particular idea that they’re missing something. They’re just going with the flow of the memetic water in which they swim. They’ve got friends who talk about spirituality, and it sounds pretty appealing to them. They know that Artificial General Intelligence is a big important problem in their field, worth lots of applause if they can solve it. They wouldn’t see anything incongruous about an AGI researcher talking about the possibility of psychic powers or Buddhist reincarnation. That’s a separate matter, isn’t it?

(Someone in the audience is bound to observe that Newton was a Christian. I reply that Newton didn’t have such a difficult problem, since he only had to invent first-year undergraduate stuff. The two observations are around equally sensible; if you’re going to be anachronistic, you should be anachronistic on both sides of the equation.)

But that’s still all just should-universe thinking.

That’s still just describing people in terms of what they aren’t.

Real people are not formed of absences. Only people who have an ideal can be described as a departure from it, the way that I see myself as a departure from what an Eliezer Yudkowsky should be.

The really striking fact about the researchers who show up at AGI conferences, is that they’re so… I don’t know how else to put it...

...ordinary.

Not at the intellectual level of the big mainstream names in Artificial Intelligence. Not at the level of John McCarthy or Peter Norvig (both of whom I’ve met).

More like… around, say, the level of above-average scientists, which I yesterday compared to the level of partners at a non-big-name venture capital firm. Some of whom might well be Christians, or even creationists if they don’t work in evolutionary biology.

The attendees at AGI conferences aren’t literally average mortals, or even average scientists. The average attendee at an AGI conference is visibly one level up from the average attendee at that random mainstream AI conference I talked about yesterday.

Of course there are exceptions. The last AGI conference I went to, I encountered one bright young fellow who was fast, intelligent, and spoke fluent Bayesian. Admittedly, he didn’t actually work in AGI as such. He worked at a hedge fund.

No, seriously, there are exceptions. Steve Omohundro is one example of someone who—well, I’m not exactly sure of his level, but I don’t get any particular sense that he’s below Peter Norvig or John McCarthy.

But even if you just poke around on Norvig or McCarthy’s website, and you’ve achieved sufficient level yourself to discriminate what you see, you’ll get a sense of a formidable mind. Not in terms of accomplishments—that’s not a fair comparison with someone younger or tackling a more difficult problem—but just in terms of the way they talk. If you then look at the website of a typical AGI-seeker, even one heading up their own project, you won’t get an equivalent sense of formidability.

Unfortunately, that kind of eyeball comparison does require that one be of sufficient level to distinguish those levels. It’s easy to sympathize with people who can’t eyeball the difference: If anyone with a PhD seems really bright to you, or any professor at a university is someone to respect, then you’re not going to be able to eyeball the tiny academic subfield of AGI and determine that most of the inhabitants are above-average scientists for mainstream AI, but below the intellectual firepower of the top names in mainstream AI.

But why would that happen? Wouldn’t the AGI people be humanity’s best and brightest, answering the greatest need? Or at least those daring souls for whom mainstream AI was not enough, who sought to challenge their wits against the greatest reservoir of chaos left to modern science?

If you forget the should-universe, and think of the selection effect in the is-universe, it’s not difficult to understand. Today, AGI attracts people who fail to comprehend the difficulty of AGI. Back in the earliest days, a bright mind like John McCarthy would tackle AGI because no one knew the problem was difficult. In time and with regret, he realized he couldn’t do it. Today, someone on the level of Peter Norvig knows their own competencies, what they can do and what they can’t; and they go on to achieve fame and fortune (and Research Directorship of Google) within mainstream AI.

And then...

Then there are the completely hopeless ordinary programmers who wander onto the AGI mailing list wanting to build a really big semantic net.

Or the postdocs moved by some (non-Singularity) dream of themselves presenting the first “human-level” AI to the world, who also dream an AI design, and can’t let go of that.

Just normal people with no notion that it’s wrong for an AGI researcher to be normal.

Indeed, like most normal people who don’t spend their lives making a desperate effort to reach up toward an impossible ideal, they will be offended if you suggest to them that someone in their position needs to be a little less imperfect.

This misled the living daylights out of me when I was young, because I compared myself to other people who declared their intentions to build AGI, and ended up way too impressed with myself; when I should have been comparing myself to Peter Norvig, or reaching up toward E. T. Jaynes. (For I did not then perceive the sheer, blank, towering wall of Nature.)

I don’t mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, a mob that can learn from each other’s partial successes and accumulate hacks as a community.

And that’s why I’m discussing all this—because it is a fact without which it is not possible to understand the overall strategic situation in which humanity finds itself, the present state of the gameboard. It is, for example, the reason why I don’t panic when yet another AGI project announces they’re going to have general intelligence in five years. It also says that you can’t necessarily extrapolate the FAI-theory comprehension of future researchers from present researchers, if a breakthrough occurs that repopulates the field with Norvig-class minds.

Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them. And the mob, if it retains its lucky successes and shares them, may also cough up an Artificial Intelligence, with around the same degree of precise control. But it is only the collective that I worry about as dangerous—the individuals don’t seem that formidable.

If you yourself speak fluent Bayesian, and you distinguish a person-concerned-with-AGI as speaking fluent Bayesian, then you should consider that person as excepted from this whole discussion.

Of course, among people who declare that they want to solve the AGI problem, the supermajority don’t speak fluent Bayesian.

Why would they? Most people don’t.

Part of the sequence Yudkowsky’s Coming of Age

Next post: “The Magnitude of His Own Folly”

Previous post: “Competent Elites”