Examples of AI Increasing AI Progress

Recursive self-improvement is already here.

This point is far from original. It’s been made before, for instance here, in Drexler’s Reframing Superintelligence, and (as I was working on this post) in Jack Clark’s newsletter, and even by Yann LeCun. But sometimes I still hear people talk about preparing for “when recursive self-improvement kicks in,” implying that it hasn’t already.

The kinds of recursive self-improvement described here aren’t exactly the frequently envisioned scenario of a single AI system improving itself unencumbered. Instead, they rely on humans to make them work, and humans are inevitably slow, which for now inhibits a discontinuous foom scenario.

It may be tempting to dismiss the kind of recursive self-improvement happening today as not “real” recursive self-improvement, and to think of it as some future event we still need to prepare for. Yes, we need to prepare for increasing amounts of it, but it’s not in the future; it’s in the present.

Here are some currently existing examples (years given for the particular example linked):

  • (2016) Models play against themselves in order to iteratively improve their performance in games, most notably in AlphaGo and its variants (a minimal sketch of such a self-play loop appears after this list).

  • (2016) Some neural architecture search techniques use one neural network to optimize the architectures of other neural networks (also sketched after this list).

  • (2016) AI is being used to optimize data center cooling, helping reduce the cost of further scaling.

  • (2021) Code generation tools like GitHub Copilot can be helpful to software engineers, including presumably some AI research engineers (anecdotally, I’ve found it helpful when doing engineering). Engineers may thus be faster at designing AI systems, including Copilot-like systems.

  • (2021) Google uses deep reinforcement learning to optimize the chip floorplans of its AI accelerators.

  • (2022) Neural networks, running on NVIDIA GPUs, have been used to design more efficient GPUs, which can in turn run more neural networks.

  • (2022) Neural networks are being used for compiler optimization in LLVM, the popular compiler infrastructure that PyTorch’s just-in-time compiler is built on.
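
To make the first example concrete, here is a minimal sketch of what a self-play loop looks like. It is a toy illustration, not AlphaGo’s actual algorithm: the game (single-pile Nim), the tabular value function, and all the hyperparameters are stand-ins I chose for brevity.

```python
# A minimal sketch of self-play, assuming a toy game (single-pile Nim)
# rather than Go: one agent plays both sides, and every game it plays
# against itself becomes training data that improves its own policy.
# All names and numbers here are illustrative, not from any linked system.
import random
from collections import defaultdict

PILE = 10        # starting number of stones
MAX_TAKE = 3     # a player takes 1-3 stones; taking the last stone wins

Q = defaultdict(float)   # shared value table: (stones_left, take) -> value
EPSILON = 0.2            # exploration rate
ALPHA = 0.1              # learning rate

def legal_moves(stones):
    return range(1, min(MAX_TAKE, stones) + 1)

def choose(stones):
    """Epsilon-greedy move selection from the shared value table."""
    if random.random() < EPSILON:
        return random.choice(list(legal_moves(stones)))
    return max(legal_moves(stones), key=lambda m: Q[(stones, m)])

def self_play_game():
    """Play one game against ourselves; return both players' move histories."""
    stones, player = PILE, 0
    history = [[], []]   # moves made by player 0 and player 1
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player  # the player who just took the last stone wins
    return history, winner

for _ in range(20000):
    history, winner = self_play_game()
    for player in (0, 1):
        reward = 1.0 if player == winner else -1.0
        for state_action in history[player]:
            # Nudge every move the player made toward the game outcome.
            Q[state_action] += ALPHA * (reward - Q[state_action])

# Positions where stones % 4 == 0 are losing for the player to move, so
# the learned greedy move from 10 stones should be 2 (leaving 8).
print(max(legal_moves(PILE), key=lambda m: Q[(PILE, m)]))  # typically 2
```

The essential loop is the same at any scale: the current policy generates games, and those games become the training data for the next policy.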
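And here is a heavily simplified sketch of the controller-based architecture search idea. To keep it self-contained, the “controller network” is reduced to learnable softmax logits, and training a child network is replaced by a synthetic proxy reward; the architecture choices and reward shape are my own illustrative assumptions, not those of the linked work.

```python
# A minimal sketch of controller-based neural architecture search: a tiny
# controller samples child architectures and is updated with REINFORCE.
# To keep this self-contained, child training is replaced by a synthetic
# proxy reward; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

DEPTHS = [1, 2, 3, 4]        # candidate number of layers
WIDTHS = [16, 32, 64, 128]   # candidate layer width

# Controller parameters: one logit vector per architecture decision.
logits = {"depth": np.zeros(len(DEPTHS)), "width": np.zeros(len(WIDTHS))}
LR = 0.05

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    """Sample (depth index, width index) from the controller."""
    d = rng.choice(len(DEPTHS), p=softmax(logits["depth"]))
    w = rng.choice(len(WIDTHS), p=softmax(logits["width"]))
    return d, w

def proxy_reward(d, w):
    """Stand-in for child validation accuracy: pretend depth 3, width 64
    is best, plus noise. A real system would train the child model here."""
    score = -abs(DEPTHS[d] - 3) - abs(np.log2(WIDTHS[w]) - 6)
    return score + rng.normal(0, 0.1)

baseline = 0.0
for step in range(2000):
    d, w = sample_architecture()
    r = proxy_reward(d, w)
    baseline += 0.05 * (r - baseline)   # moving-average reward baseline
    advantage = r - baseline
    # REINFORCE: raise the log-probability of sampled choices by advantage.
    for key, idx in (("depth", d), ("width", w)):
        p = softmax(logits[key])
        grad = -p
        grad[idx] += 1.0
        logits[key] += LR * advantage * grad

best = (DEPTHS[int(np.argmax(logits["depth"]))],
        WIDTHS[int(np.argmax(logits["width"]))])
print("controller's preferred architecture:", best)  # typically (3, 64)
```

In a real system, the proxy_reward call is where a child network gets trained and evaluated, which is also where nearly all of the compute goes.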

Inspired by Victoria Krakovna’s specification gaming spreadsheet, I’ve made a spreadsheet here with these examples. Feel free to submit more here. I think the number of examples will continue to grow, making it useful to keep track of them.

If this feels underwhelming compared with the kinds of recursive self-improvement often written about, you’re right. But consider that the start of an exponential often feels underwhelming. As time goes on, I expect that humans will become less and less involved in the development of AI, with AI automating more and more of the process. This could very well feel sudden, but it won’t be unprecedented: it’s already begun.