Fast Takeoff in Biological Intelligence

[Speculative and not my area of expertise; probably wrong. Cross-posted from Grand, Unified, Crazy.]

One of the possible risks of artificial intelligence is the idea of “fast” (exponential) takeoff – that once an AI becomes even just a tiny bit smarter than humans, it will be able to recursively self-improve along an exponential curve and we’ll never be able to catch up with it, making it effectively a god in comparison to us poor humans. While human intelligence is improving over time (via natural selection and perhaps whatever causes the Flynn effect), it does so much, much more slowly and in a way that doesn’t seem to be accelerating exponentially.

But maybe gene editing changes that.

Gene editing seems about as close as a biological organism can get to recursively editing its own source code, and with recent advances (CRISPR, etc.) we are plausibly much closer to functional genetic manipulation than we are to human-level AI. If this is true, humans could reach fast takeoff in our own biological intelligence well before we build an AI capable of the same thing. In this world we’re probably safe from existential AI risk; if we’re both on the same curve, all that matters is who gets started first.
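To make the “same curve” intuition concrete, here is a minimal sketch. The doubling time, head start, and starting levels are my own illustrative assumptions, not estimates from anywhere: if both parties improve exponentially at the same rate, whoever starts first stays ahead by a constant multiplicative factor, and the absolute gap only grows.

```python
# Minimal sketch of the "same curve" claim. All numbers are illustrative
# assumptions: capability doubles every `doubling_time` years for both
# parties, and one party gets a `head_start` of several years.

def capability(t, start, doubling_time, initial=1.0):
    """Capability at time t for a party that begins improving at `start`."""
    if t < start:
        return initial
    return initial * 2 ** ((t - start) / doubling_time)

doubling_time = 5.0   # years per doubling (the shared curve)
head_start = 10.0     # years by which biology gets going first

for t in (10, 20, 40, 80):
    bio = capability(t, start=0.0, doubling_time=doubling_time)
    ai = capability(t, start=head_start, doubling_time=doubling_time)
    print(f"t={t:>3} yr  bio={bio:12.1f}  ai={ai:12.1f}  bio/ai={bio/ai:5.1f}")

# The ratio settles at 2 ** (head_start / doubling_time) and never shrinks:
# on identical exponential curves, the party that starts first stays ahead.
```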

There are a bunch of obvious objections to and weaknesses in this analogy that are worth talking through at a high level:

  • The difference between hardware and software seems relevant here. Gene editing seems more like a hardware-level capability, whereas most arguments about fast takeoff in AI talk about recursive improvement of software. It seems easy for a strong AI to recompile itself with a better algorithm, whereas it seems plausibly more difficult for it to design and then manufacture better hardware.

    This seems like a reasonable objection, though I do have two counterpoints. The first is that, in humans at least, intelligence seems pretty closely linked to hardware. Software also seems important, but hardware puts strong upper bounds on what is possible. The second counterpoint is that our inability to effectively edit our software source code is, in some sense, a hardware problem; if we could genetically build a better human, capable of more direct meta-cognitive editing… I don’t even know what that would look like.

  • Another consideration is generation length. Even restricting ourselves to hardware replacement, a recursively improving AI should be able to build a new generation on the order of weeks or months. Humans take a minimum of roughly twelve years, and in practice quite a bit more than that most of the time. Even if we get onto the curve first, the difference in constant factors may dominate, as the sketch after this list illustrates.

  • We don’t really understand how our own brains work. Even if we’re quite close to functional genetic editing, maybe we’re still quite far from being able to use it effectively for intelligence optimization. An AI could still get there first.

  • Moloch. In a world where we do successfully reach an exponential takeoff curve in our own intelligence long before AI does, Moloch could devour us all. There’s no guarantee that the editing required to make us super-intelligent wouldn’t also change or destroy our values in some fashion. We could end up with exactly the same paperclip-maximizing disaster, just executed by a biological agent with human lineage instead of by a silicon-based computer.
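The generation-length objection can be made concrete with an equally rough sketch. The numbers here are illustrative assumptions, not estimates: biology gets a ten-year head start but doubles only once per twelve-year generation, while the AI doubles every couple of months once it starts.

```python
# Rough sketch of the generation-length objection. All numbers are
# illustrative assumptions: biology doubles once per ~12-year generation
# with a 10-year head start; the AI doubles every ~2 months.

bio_doubling = 12.0        # years per doubling (one optimistic human generation)
ai_doubling = 2.0 / 12.0   # years per doubling (~2 months)
head_start = 10.0          # years by which biology starts first

# Both curves start from the same capability level. Setting
#   2 ** (t / bio_doubling) == 2 ** ((t - head_start) / ai_doubling)
# and solving for the crossover time t:
crossover = head_start * bio_doubling / (bio_doubling - ai_doubling)

print(f"AI overtakes at t = {crossover:.2f} years")                            # ~10.14
print(f"...about {(crossover - head_start) * 12:.1f} months after it starts")  # ~1.7

# With these numbers, a decade-long head start buys biology roughly seven
# weeks once the faster curve switches on: the constant factor dominates.
```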

Given all these objections, I think it’s fairly unlikely that we reach a useful biological intelligence takeoff anytime soon. However, if we actually are close, then the most effective spending on AI safety may not be on AI research at all; it could be on genetics and neuroscience.