[Prediction] We are in an Algorithmic Overhang, Part 2

In [Prediction] We are in an Algorithmic Overhang, I made technical predictions without much explanation. In this post I explain my reasoning. These predictions are contingent on there being no WWIII or equivalent disaster disrupting semiconductor fabrication.


I wouldn’t be surprised if an AI takes over the world in my lifetime. The idea makes me uncomfortable. I question my own sanity. At first I think “no way could the world change that quickly”. Then I remember that technology is advancing exponentially. The world is changing faster than it ever has before, and the pace is accelerating.

Superintelligence is possible. The laws of physics demand it. If superintelligence is possible then it is inevitable. Why haven’t we built one yet? There are four[1] candidate limitations:

  • Data. We lack sufficient training data.

  • Hardware. We lack the ability to push atoms around.

  • Software. The core algorithms are too complicated for human beings to code.

  • Theoretical. We’re missing one or more major technical insights.

We’re not limited by data

There is more data available on the Internet than in the genetic code of a human being plus the life experience of a single human being.
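
To put rough numbers on this, here is a back-of-envelope sketch. Every figure is an order-of-magnitude assumption on my part (genome size, waking sensory bandwidth, total data stored on the Internet), not a measurement:

```python
# Back-of-envelope comparison; every figure is an order-of-magnitude
# assumption, not a measurement.
genome_bits = 3.1e9 * 2        # ~3.1B base pairs at 2 bits per base (~0.8 GB)

# Lifetime experience: assume ~10 Mbit/s of sensory input for 16 waking
# hours a day over 30 years.
lifetime_bits = 10e6 * 3600 * 16 * 365 * 30   # ~6e15 bits (~800 TB)

internet_bits = 60e21 * 8      # assume tens of zettabytes of stored data

human_bits = genome_bits + lifetime_bits
print(f"genome + lifetime experience: ~{human_bits:.0e} bits")
print(f"Internet:                     ~{internet_bits:.0e} bits")
print(f"Internet is ~{internet_bits / human_bits:.0e}x larger")
```

Under those assumptions the Internet holds many orders of magnitude more data than a single human being ever receives.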

We’re not (yet) limited by hardware

This is controversial, but I believe throwing more hardware at existing algorithms won’t bring them to human level. In other words, compute is not the bottleneck: the hardware we already have would suffice if we knew the right algorithm.

I don’t think we’re limited by our ability to write software

I suspect that the core learning algorithm of human beings could be written up in a handful of scientific papers comparable in length and complexity to Einstein’s Annus Mirabilis papers. I can’t prove this. It’s just gut instinct. If I’m wrong and the core learning algorithm(s) of human beings are too complicated to write in a handful of scientific papers, then superintelligence will not be built by 2121.

Porting a mathematical algorithm to a digital computer is straightforward. Individual input circuits, like snake detectors, can be learned by existing machine learning algorithms and fed into the core learning algorithm, as sketched below.
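
As an illustration only (not a claim about how the real thing would work), here is a minimal PyTorch sketch of that division of labor. A pretrained, frozen `resnet18` stands in for a learned input circuit like a snake detector, and a plain linear layer is a placeholder for the unknown core learning algorithm; both choices are arbitrary, and the snippet assumes torchvision ≥ 0.13 for the `weights` argument.

```python
# Minimal sketch: a learned "input circuit" feeding a hypothetical core learner.
import torch
import torch.nn as nn
from torchvision.models import resnet18

detector = resnet18(weights="IMAGENET1K_V1")  # pretrained input circuit
detector.fc = nn.Identity()                   # expose 512-d features, not logits
for p in detector.parameters():
    p.requires_grad = False                   # the input circuit stays fixed

core_learner = nn.Linear(512, 10)             # placeholder for the unknown core

images = torch.randn(4, 3, 224, 224)          # a fake batch of visual input
with torch.no_grad():
    features = detector(images)               # existing ML handles the periphery
predictions = core_learner(features)          # the core consumes learned features
print(predictions.shape)                      # torch.Size([4, 10])
```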

We are definitely limited theoretically

We don’t know how mammalian brains work.

I don’t think there’s a big difference in fundamental architecture between human brains and, say, mouse brains. Humans do have specialized brain regions for language, like Broca’s area, but I expect language comprehension would be easy to solve if we had an artificial mouse brain running on a computer.

Figuring out how mammalian brains work would constitute a disruptive innovation. It would rewrite the rules of machine learning overnight. The instant this algorithm becomes public, it would start a race to build a superintelligent AI.

What happens next depends on the algorithm. If it can be scaled efficiently on CPUs and GPUs then a small group could build the first superintelligence. If enough specialized hardware is required then it might be possible to restrict AGI to nation-states the way private ownership of nuclear weapons is regulated. I think such a future is possible but unlikely. More precisely, I predict with >50% confidence that the algorithm will run efficiently enough on CPUs or GPUs (or whatever we have on the shelf) for a venture-backed startup to build a superintelligence on off-the-shelf hardware, even though specialized hardware would be far more efficient.


  1. ↩︎

    A fifth explanation is that we’re good at pushing atoms around, but our universal computers are too inefficient to run a superintelligence because the algorithms behind superintelligence run badly on the von Neumann architecture. This is a variant on the idea of being hardware-limited. While plausible, I don’t think it’s very likely, because universal computers are universal. ANNs may not (always) run efficiently on them, but ANNs do run on them.