New report: Intelligence Explosion Microeconomics

Summary: Intelligence Explosion Microeconomics (pdf) is a 40,000-word report that takes some initial steps toward tackling the key quantitative issue in the intelligence explosion, “reinvestable returns on cognitive investments”: what kind of returns can you get from an investment in cognition, can you reinvest those returns to make yourself even smarter, and does this process die out or blow up? It can be thought of as a compact and hopefully more coherent successor to the AI Foom Debate of a few years back.
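
To make the “dies out or blows up” framing concrete, here is a toy numerical sketch of my own (an illustration, not a model from the paper; the names y, c, and k are assumptions of this sketch): treat cognitive capacity as a single number y, and let each round of reinvestment return c·y^k of new capacity, with the exponent k standing in for returns on cognitive reinvestment.

```python
# Toy sketch (illustrative only; y, c, k are my own names, not the paper's):
# cognitive capacity y reinvests its output each round, gaining c * y**k
# new capacity. The exponent k stands in for "returns on cognitive
# reinvestment":
#   k < 1  -> growth slows to polynomial (the process fizzles out),
#   k = 1  -> steady exponential growth,
#   k > 1  -> super-exponential growth (finite-time blow-up in the
#             continuous limit).

def reinvest(k, c=0.1, y0=1.0, steps=50, cap=1e12):
    """Iterate y <- y + c * y**k; stop early if y exceeds cap."""
    y = y0
    for t in range(1, steps + 1):
        y += c * y**k
        if y > cap:
            return t, y  # effectively "blew up" at step t
    return steps, y

for k in (0.5, 1.0, 1.5):
    step, y = reinvest(k)
    print(f"k={k}: y = {y:.3g} after {step} steps")
```

With these arbitrary parameters, k = 0.5 crawls to roughly y ≈ 12 in fifty steps, k = 1 compounds to about 117, and k = 1.5 shoots past the cap around step thirty; the substantive question is which regime actual cognitive reinvestment resembles.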

(Sample idea you haven’t heard before: The increase in hominid brain size over evolutionary time should be interpreted as evidence about increasing marginal fitness returns on brain size, presumably due to improved brain wiring algorithms; not as direct evidence about an intelligence scaling factor from brain size.)

I hope that the open problems posed therein inspire further work by economists or economically literate modelers interested specifically in the intelligence explosion qua cognitive intelligence, rather than in non-cognitive ‘technological acceleration’. MIRI has an intended-to-be-small-and-technical mailing list for such discussion. In case it’s not clear from context, I (Yudkowsky) am the author of the paper.

Abstract:

I. J. Good’s thesis of the ‘intelligence explosion’ is that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version of itself, and that this process could continue far enough to vastly exceed human intelligence. As Sandberg (2010) correctly notes, there have been several attempts to lay down return-on-investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with I. J. Good’s intelligence explosion thesis as such.

I identify the key issue as returns on cognitive reinvestment—the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs. There are many phenomena in the world which have been argued to be evidentially relevant to this question, from the observed course of hominid evolution, to Moore’s Law, to the competence over time of machine chess-playing systems, and many more. I go into some depth on the sort of debates which then arise on how to interpret such evidence. I propose that the next step forward in analyzing positions on the intelligence explosion would be to formalize return-on-investment curves, so that each stance can say formally which possible microfoundations it holds to be falsified by historical observations already made. More generally, I pose multiple open questions of ‘returns on cognitive reinvestment’ or ‘intelligence explosion microeconomics’. Although such questions have received little attention thus far, they seem highly relevant to policy choices affecting the outcomes for Earth-originating intelligent life.
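
To unpack the abstract’s central proposal a little (my gloss, in my own notation, not the paper’s): suppose aggregate cognitive capacity y(t) grows by reinvesting its own output, so that dy/dt = f(y) for some positive, increasing returns curve f. Then the qualitative outcome reduces to a single property of f:

$$\dot{y} = f(y), \qquad \text{finite-time blow-up} \iff \int^{\infty} \frac{dy}{f(y)} < \infty.$$

For instance, $f(y) = y^k$ blows up for $k > 1$ but yields only exponential or polynomial growth for $k \le 1$. On this compressed picture, different stances on the intelligence explosion amount to different claims about the shape of $f$, which is what would let historical observations falsify some of them.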

The dedicated mailing list will be small and restricted to technical discussants.

This topic was originally intended to be a sequence in Open Problems in Friendly AI, but further work compacted it beyond the point where it could easily be broken up into subposts.

Outline of contents:

1: Introduces the basic questions and the key quantitative issue of sustained reinvestable returns on cognitive investments.

2: Discusses the basic language for talking about the intelligence explosion, and argues that we should pursue this project by looking for underlying microfoundations, not by pursuing analogies to allegedly similar historical events.

3: Goes into detail on what I see as the main arguments for a fast intelligence explosion, constituting the bulk of the paper, with the following subsections:

  • 3.1: What the fossil record actually tells us about returns on brain size, given that most of the difference between Homo sapiens and Australopithecus was probably improved software.

  • 3.2: How to divide credit for the human-chimpanzee performance gap between “humans are individually smarter than chimpanzees” and “the hominid transition involved a one-time qualitative gain from being able to accumulate knowledge”.

  • 3.3: How returns on speed (serial causal depth) contrast with returns from parallelism; how faster thought seems to contrast with more thought. Whether sensing and manipulating technologies are likely to present a bottleneck for faster thinkers, and if so, how large a bottleneck. (One standard way to formalize the serial-versus-parallel contrast is sketched just after this outline.)

  • 3.4: How human populations seem to scale in problem-solving power; some reasons to believe that we scale inefficiently enough for it to be puzzling. Garry Kasparov’s chess match vs. The World, which Kasparov won.

  • 3.5: Some inefficiencies that might cumulate in an estimate of humanity’s net computational efficiency on a cognitive problem.

  • 3.6: What the anthropological record actually tells us about cognitive returns on cumulative selection pressure, given that selection pressures were probably increasing over the course of hominid history. How the observed history would be expected to look different, if there were in fact diminishing returns on cognition.

  • 3.7: How to relate the curves for evolutionary difficulty, human-engineering difficulty, and AI-engineering difficulty, considering that they are almost certainly different.

  • 3.8: Correcting for anthropic bias in trying to estimate the intrinsic ‘difficulty’ of hominid-level intelligence just from observing that intelligence evolved here on Earth.

  • 3.9: The question of whether to expect a ‘local’ (one-project) FOOM or ‘global’ (whole economy) FOOM and how returns on cognitive reinvestment interact with that.

  • 3.10: The great open uncertainty about the minimal conditions for starting a FOOM; why I. J. Good’s postulate of starting from ‘ultraintelligence’ is probably much too strong (sufficient, but very far above what is necessary).

  • 3.11: The enhanced probability of unknown unknowns in the scenario, since a smarter-than-human intelligence will selectively seek out and exploit flaws or gaps in our current knowledge.

4: A tentative methodology for formalizing theories of the intelligence explosion—a project of formalizing possible microfoundations and explicitly stating their alleged relation to historical experience, such that some possibilities can be falsified by observations already made.

5: Which open sub-questions seem both high-value and possibly answerable.

6: Formally poses the Open Problem and mentions what it would take for MIRI itself to directly fund further work in this field.
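
Finally, as flagged under 3.3 above, here is one standard way to formalize the contrast between returns on speed and returns on parallelism, namely Amdahl’s law. This is my choice of illustration, not a formula from the paper, and the serial fraction used below is an arbitrary assumption: if a fraction s of a cognitive task is inherently serial (bounded by causal depth), then N parallel workers can achieve a speedup of at most 1/(s + (1 − s)/N), which saturates at 1/s, whereas a single N-times-faster thinker gets the full factor of N.

```python
# Amdahl's-law sketch (my illustration; the serial fraction s = 0.05 is an
# arbitrary assumption): parallel speedup saturates at 1/s, while a raw
# serial speedup of N does not saturate.

def parallel_speedup(n_workers: int, serial_fraction: float) -> float:
    """Speedup from n_workers when serial_fraction of the task is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

for n in (10, 100, 10_000):
    print(f"N={n:>6}: parallel speedup ~ {parallel_speedup(n, 0.05):5.1f}, "
          f"serial speedup = {n}")
```

With s = 0.05, ten thousand parallel workers buy less than a 20× speedup on this toy accounting, while a single 10,000×-faster thinker gets the full 10,000×; this is one crude way to cash out why returns on speed and returns on parallelism need to be analyzed separately.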