QNR Prospects


Approximately a book review: Eric Drexler’s QNR paper.

[Epistemic status: very much pushing the limits of my understanding. I’ve likely made several times as many mistakes as in my average blog post. I want to devote more time to understanding these topics, but it’s taken me months to produce this much, and if I delayed this in hopes of producing something better, who knows when I’d be ready.]

This nearly-a-book elaborates on his CAIS paper (mainly sections 37 through 39), describing a path by which AI capability research enables the CAIS approach to remain competitive as capabilities exceed human levels.

AI research has been split between symbolic and connectionist camps for as long as I can remember. Drexler says it’s time to combine those approaches to produce systems which are more powerful than either approach can be by itself.

He suggests a general framework for how to usefully combine neural networks and symbolic AI. It’s built around structures that combine natural language words with neural representations of what those words mean.

Drexler wrote this mainly for AI researchers. I will attempt to explain it to a slightly broader audience.

Components

What are the key features that make this more powerful than GPT-3 alone, or natural language alone?

QNR extends natural language by incorporating features of deep learning and mathematical structures used in symbolic AI (a rough sketch of what a single entry might look like follows this list):

  • words are associated with neural representations. Those representations are learned via a process that focuses on learning a single concept at a time with at least GPT-3-level ability to understand the concept. BERT exemplifies how to do this.

  • words can be related to other words via graphs (such as syntax trees).

  • words, or word-like concepts, can be created via compositions of simpler concepts.

  • those compositions can correspond to phrases, sentences, books, and entities that we have not yet conceived.
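
Here’s a minimal sketch (mine, not Drexler’s; the names QNRNode and compose are invented for illustration) of what a single QNR entry and a composition step might look like:

```python
# A toy QNR-ish entry: a natural language label (when one exists), a learned
# embedding, and graph edges to related entries. Composition builds phrase-,
# sentence-, or book-level entries out of simpler ones. All names are my own.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional
import numpy as np

@dataclass
class QNRNode:
    label: Optional[str]                      # natural language word/phrase, if any
    embedding: np.ndarray                     # neural representation of the concept
    relations: Dict[str, List[QNRNode]] = field(default_factory=dict)
    # e.g. relations["syntax_child"], relations["part_of"], ...

def compose(parts: List[QNRNode],
            combine: Callable[[np.ndarray], np.ndarray]) -> QNRNode:
    """Create a higher-level concept from simpler ones."""
    node = QNRNode(label=None,
                   embedding=combine(np.stack([p.embedding for p in parts])))
    node.relations["constituents"] = list(parts)
    return node

# Usage: a two-word phrase built with a trivial averaging combiner.
cat = QNRNode("cat", np.random.randn(8))
black = QNRNode("black", np.random.randn(8))
black_cat = compose([black, cat], combine=lambda xs: xs.mean(axis=0))
```

A real system would presumably use learned combiners (e.g. a small transformer) rather than averaging, and a more durable store than Python objects, but this is roughly the shape I have in mind when I say “words plus neural representations plus graphs”.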

QNR feels much closer than well-known AIs to how I store concepts within my mind, as opposed to the stripped-down version that I’m using here to help you reconstruct some of those concepts in your mind.

Drexler contrasts QNR to foundation models, but I don’t find “foundation models” to be a clear enough concept to be of much value.

Importance?

I’ve noticed several people updating this year toward earlier AGI timelines, based on an increasing number of results demonstrating what looks to me like marginally more general-purpose intelligence. [I wrote this before Gato was announced, and have not yet updated on that.]

I’ve also updated toward somewhat faster timelines, but I’m mainly reacting to Drexler’s vision of how to encode knowledge in a more general-purpose, scalable form.

I expect that simple scaling up of GPT-3 will not generate human-level generality with a plausible amount of compute, possibly just because it relearns the basics from scratch each time a new version is tried. With QNR, new services would build on knowledge that is represented in much more sophisticated forms than raw text.

Effects on AI Risks

The QNR approach focuses on enhancing knowledge corpora, not fancier algorithms. It enables the AI industry to create more value, possibly at an accelerating rate, without making software any more agent-like. So it could in principle eliminate the need for risky approaches to AI.

However, that’s not very reassuring by itself, as the QNR approach is likely to accelerate learning abilities of agenty AIs if those are built. Much depends on whether there are researchers who want to pursue more agenty approaches.

I can imagine QNR enabling more equality among leading AIs if a QNR corpus is widely available.

A QNR-like approach will alter approaches to interpretability.

The widely publicized deep learning results such as GPT-3 create enormous inscrutable floating point matrices, where the representations of the digits of pi are mixed in with representations of philosophical ideas. If I’m trying to find a particular belief in such a matrix, I expect the difficulty of identifying it increases roughly in proportion to how much knowledge is stored.

In contrast, QNR stores lots of knowledge in small to medium-sized matrices:

  • the difficulty of understanding any one component (concept) does not increase much as the corpus grows (I think it’s at worst logarithmic in the number of concepts?).

  • some of the concepts will naturally be tied to the corresponding natural language words, and to higher-level concepts such as “the main claims in Bostrom’s Superintelligence”.

  • the software-generated concepts are guided to be a bit more likely to correspond to something for which humans have labels (see the sketch below).
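
Here’s a toy illustration (my construction, not anything from the paper) of the sort of thing I mean by those last two points: an unlabeled, software-generated concept can be glossed by its nearest labeled neighbors in embedding space.

```python
# Interpretability sketch: if concepts live as separate labeled embeddings,
# an unfamiliar concept vector can be glossed by its nearest labeled
# neighbors. The corpus here is random; real embeddings would be learned.
import numpy as np

def nearest_labels(query, corpus, k=3):
    """Return the k labels whose embeddings have highest cosine similarity to query."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return sorted(corpus, key=lambda label: -cos(query, corpus[label]))[:k]

# Usage: gloss a machine-generated concept with human-labeled neighbors.
rng = np.random.default_rng(0)
corpus = {w: rng.normal(size=16) for w in ["cat", "dog", "justice", "pi"]}
mystery = corpus["dog"] + 0.1 * rng.normal(size=16)   # an unlabeled concept
print(nearest_labels(mystery, corpus))                # "dog" should come first
```

The cost of that kind of lookup grows gently with corpus size (especially with approximate nearest-neighbor indexes), which is roughly the kind of scaling the first bullet is gesturing at.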

Deep learning architectures are becoming fairly general-purpose in the sense that they can be applied to many domains, but any particular system still seems to have a fairly specialized goal, and its knowledge is optimized for that goal. E.g. GPT-3 has knowledge that’s broad in the sense of covering many topics, but narrow in the sense that that knowledge is only designed to be accessed via one type of short-lived conversation.

QNR looks more like an important advance, in the sense that it focuses on turning knowledge into more general-purpose corpora, and on causing deep learning to scale better via modularity.

Analogies

Some comparisons which hint at my guess about QNR’s value:

  • the cultural transmission of knowledge that humans developed several million years ago, as described in Henrich’s The Secret of Our Success

  • the printing press

  • the transition from software that uses many goto’s, to software that heavily uses subroutines

I consider the replacement of goto’s with subroutines to be the closest analogy to QNR. But since not too many readers will have programmed with goto’s, I’m also emphasizing the rise of culture, despite the risk that that will overstate the importance of QNR.

The Goals of a Seed AI

People sometimes react to AI safety warnings by claiming that an AI would be smart enough to understand what humans want, therefore alignment ought to be trivial.

I have been rejecting that on the grounds that when the AGI is just starting to learn, it’s likely that its model(s) of the world will be much too primitive to comprehend “do what humans want”, so it would need to start with a very different goal until it reaches something vaguely resembling human levels of understanding. It should in principle be possible to replace the goal at the right time, but it seemed obviously hard to identify the appropriate time. (I neglected other potentially important concerns, such as whether it’s practical to design systems so that new goals can be swapped in—how would I do that in a human brain?).

QNR provides some hints about how we might swap goals in an existing AGI. I’m less optimistic about that being valuable, and haven’t analyzed it much.

QNR suggests that this whole seed AI scenario reflects an anthropomorphized view of intelligence.

The QNR approach offers some hope that new AGIs will start with a rich enough description of the world for us to give them a goal that resembles “follow humanity’s CEV”. That goal would refer to some fairly abstract concepts in a QNR corpus that was created by a collaboration between humans and a not-too-centralized set of AIs with relatively specialized goals.

That still leaves some tricky questions, e.g. what happens when the new AGI becomes capable of altering the QNR description of its goal?

Wait a minute. If we’ve got that much general purpose knowledge in a QNR corpus, do we need any additional general-purpose system on top of it? I guess part of the point of Drexler’s CAIS paper is that many AI safety researchers overestimate such a need. There’s likely some need for general-purpose systems on top of that corpus (along the lines of section 39 of Drexler’s CAIS paper), but that may end up being simpler and easier than producing the corpus.

I’m not at all confident that AIs can be aligned this way. I’m merely saying that I’ve updated from assuming it’s hopeless, to feeling confused.

Why Now?

Few AI projects have produced knowledge that was worth building upon until quite recently.

GPT-3 is roughly how sophisticated something needs to be in order for other projects to be tempted to build on it.

Or maybe that threshold was reached a few years ago, and work on QNR-like approaches isn’t being publicized.

Maybe companies consider QNR-like approaches valuable enough to keep them secret.

Or maybe QNR implementations are under development, but won’t produce results worth publicizing until they’re scaled up more.

Modularity

Some of the responses to Drexler’s CAIS paper questioned the value of modularity.

Robin Hanson responded with some decent reasons from economics to expect systems to benefit from modularity.

Why do people disagree about modularity in AI?

ML researchers seem to say that software does better than a researcher at choosing the optimal way to divide a task. Probably some people imagine an AGI with no internal division, but the most serious debate seems to be between inscrutable machine-generated modules and human-guided module creation. (The debate might be obscured by the more simplistic debate over whether modules are good; my first draft of this section carelessly focused on that.)

This might be a misleading dichotomy? Drexler implies that much of the knowledge about good module boundaries should come from collective human wisdom embodied in things like choices of what words and phrases to add to natural languages. That would allow for a good deal of automation regarding module creation, while guiding systems toward module interfaces that humans can understand.

We ought to expect that the benefits of modularity start out subtle, and increase as systems become more complex. I see no reason to think current AI systems are close to the complexity that we’ll see when the field of AI becomes mature.

My impression is that useful human-directed modularity in neural nets involves more overhead than is the case for ordinary software. E.g. QNR seems to require some standardization of semantic space. I don’t have enough expertise to evaluate how much of an obstacle this poses.
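
As one illustration of what that standardization might involve (my example, using a standard embedding-alignment trick, not anything Drexler specifies): separately trained embedding spaces can be mapped into a shared space by solving an orthogonal Procrustes problem over shared anchor words.

```python
# Align two separately-trained embedding spaces using shared anchor words:
# find the orthogonal map W minimizing ||A @ W - B||_F (orthogonal Procrustes).
# Synthetic data; real anchors would be embeddings of common vocabulary.
import numpy as np

def procrustes_align(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Orthogonal W minimizing ||a @ W - b||_F, via SVD of a.T @ b."""
    u, _, vt = np.linalg.svd(a.T @ b)
    return u @ vt

rng = np.random.default_rng(0)
a = rng.normal(size=(100, 32))                        # anchor embeddings, space A
true_rotation = np.linalg.qr(rng.normal(size=(32, 32)))[0]
b = a @ true_rotation + 0.01 * rng.normal(size=(100, 32))   # same anchors, space B
w = procrustes_align(a, b)
print(np.linalg.norm(a @ w - b))                      # small residual => aligned
```

That handles only the easy, linear part of the problem; agreeing on which concepts exist at all seems like the larger source of overhead.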

Perhaps the low-hanging fruit for early AI applications involves AIs discovering concepts that humans have failed to articulate? That would suggest there’s plenty of value left to add via learning normal human concepts, as AIs become general-purpose enough to tackle tasks at which humans are relatively good. But GPT-3 seems to be some sort of evidence against this conjecture.

Something about human minds seems more unified than my stereotype of modular software. There’s plenty of evidence that human minds have substantial compartmentalization. Maybe there’s some unifying architecture coordinating those parts, which no attempt at modular deep learning has managed to replicate? Section 39 of Drexler’s CAIS paper outlines what I’d expect such an architecture to look like. Nothing looks especially hard about such an architecture, but lots of mundane difficulties might be delaying it.

I haven’t followed ML research closely enough to have much confidence here, but my intuition says QNR modularity offers important advantages over the more stereotypical ML approach with its inscrutable modularity. I would not be surprised if there’s some third option that’s even more powerful.

Scenarios

The most obvious scenario is that Google uses a QNR-like approach for tasks such as improving ad placement. Word leaks out at a modest pace, and some competitors replicate the basic ideas with delays of several months.

This handful of companies manages to throw enough resources at their internal projects (partly due to accelerating profits) that no other projects become important.

Is a more open scenario feasible?

Wikipedia could become the standard QNR repository by deciding soon to add a QNR entry to all its widely-used pages.

Wikipedia has demonstrated the ability to attract enough manpower to match Google’s ability to throw resources at this kind of task.

When I dreamt up this scenario, I hoped that it would be straightforward to incrementally add new concepts in a distributed fashion. A bit of search suggests that would produce a lower quality result. I hoped each Wikipedia entry could have a stable standardized neural representation of its contents. But the limits to incrementally adding concepts suggest a need for at least modestly more central management than is typical of Wikipedia for developing new ways to represent knowledge.

This scenario would decentralize parts of AI development. How important would this effect be? My tentative guess is that it would be somewhat small. Building a QNR corpus of individual natural language words will be one of the easier parts of AI development (in terms of both compute and human brainpower).

A somewhat harder next step after that might be to build up full models of books, structured to represent much of the content of each book, in ways that allow both abstract high-level references to a book and summaries of the individual ideas that make it up. Copyright law might introduce strange restrictions here.

There would be teams using such a QNR corpus to build sophisticated causal models of the economy, of competition between AI projects, etc. There would be commercial incentives to keep those private. Even if they were public, it would be nontrivial to identify the best ones.

A modest variation on the Wikipedia scenario involves Wikipedia itself doing little, while maybe a dozen small teams experiment with different variations on the basic QNR structures, each getting most of its information from public sources such as Wikipedia, and producing output that’s somewhat Wikipedia-like (but doesn’t rely on decentralized editing). This approach is likely to face difficult trade-offs between limited compute and limited manpower.

I guess I’m disappointed that I can only see weak hopes for QNR driving a decentralized approach.

Scattered Thoughts

Parts of the paper stray from the main focus to speculate on what applications of AI we might want. Section 9 overlaps somewhat with what Arbital was groping toward, leading me to wonder how much Arbital would have done differently if it had spent millions on compute. See also section 11.1.3 on autonomous and semi-autonomous content creation.

* * *

Can I implement a toy version of QNR?

I see no major obstacles to implementing, in a couple of months, something that would be sufficient to notice the gaps in my knowledge.

That would likely be worth the effort only if it led to me generating something more valuable, such as improved stock market analysis. I see plenty of vague potential there, but I could easily spend a year on that without getting clear evidence as to whether it’s worth my time.
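
To make that a bit more concrete, a toy version might start from something like the following, where every component is a placeholder (most obviously the hashed bag-of-words stand-in for a real pretrained encoder):

```python
# Toy QNR-flavored corpus: each entry gets an embedding plus graph links,
# and queries are answered by embedding similarity. Everything here is a
# placeholder, most obviously the hashed bag-of-words "encoder".
import numpy as np

def encode(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a real sentence encoder: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

articles = {
    "cat": "small domesticated carnivorous mammal kept as a pet",
    "dog": "domesticated descendant of the wolf kept as a pet",
    "pi": "ratio of a circle's circumference to its diameter",
}
embeddings = {title: encode(text) for title, text in articles.items()}
links = {"cat": ["dog"], "dog": ["cat"], "pi": []}      # crude graph structure

query = encode("a pet animal related to the wolf")
best = max(embeddings, key=lambda t: float(embeddings[t] @ query))
print(best, "->", links[best])                          # most likely "dog" -> ["cat"]
```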

* * *

I’m unsure to what extent Drexler has come up with new ideas that are likely to speed up progress toward AGI, versus compiling a description of ideas that were inevitable enough that multiple researchers are independently becoming interested in them.

(Not all of the ideas are new. One reference is to a 1668 publication which advocated that distances between representations of concepts ought to reflect their distances in semantic space.)

I have slight concerns that publicizing QNR is bad, due to the risk that it will speed AI development. But most likely multiple AI researchers will independently think of these ideas regardless of what I do.

* * *

I’ll guess that Eliezer’s main complaint about the QNR approach is that it’s not powerful enough. That seems a bit more plausible than my concern about speeding AI development. But it sure looks to me like there are sufficient ideas to get us pretty close to human-level AI services in a decade or so.

I’ll remind you of this quote from Henrich:

we are smart, but not because we stand on the shoulders of giants or are giants ourselves. We stand on the shoulders of a very large pyramid of hobbits.

Closing Thoughts

With ideas such as QNR floating around, it’s hard to see how we could get a full AI winter anytime soon. If we see a decrease in reported progress, it’s more likely due to increased secrecy, or to the difficulty of describing the progress in terms that will impress laymen.

It’s also getting increasingly hard to imagine that we won’t have human-level AGI by 2040.

On the other hand, Drexler has reminded me that there’s a big difference between today’s showy results and human-level generality, so I’ve maybe slightly reduced my probability of human-level AGI within the next 5 years.