Goal-Completeness is like Turing-Completeness for AGI

Turing-completeness is a useful analogy for grasping why AGI will inevitably converge to “goal-completeness”.

By way of definition: an AI is goal-complete if it takes an arbitrary goal as input and outputs actions that effectively steer the future toward that goal.

A goal-complete AI is analogous to a Universal Turing Machine: its ability to optimize toward any other AI’s goal is analogous to a UTM’s ability to run any other Turing machine’s computation.
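To make the parallel concrete, here is a minimal sketch (the names, the toy world model, and the scoring rule are all illustrative assumptions, not anyone’s real API): a universal machine is one fixed interpreter parameterized by an arbitrary program, while a goal-complete agent is one fixed optimizer parameterized by an arbitrary goal.

```python
from typing import Callable, Iterable

# A universal machine: one fixed interpreter, parameterized by an arbitrary program.
def universal_machine(program: Callable[[str], str], tape: str) -> str:
    return program(tape)  # behaves like whatever machine "program" describes

# A goal-complete agent (sketch): one fixed optimizer, parameterized by an arbitrary goal.
# `simulate` and `score_outcome` stand in for a real world model; they are illustrative only.
def goal_complete_agent(
    goal: str,
    candidate_actions: Iterable[str],
    simulate: Callable[[str], str],              # action -> predicted outcome
    score_outcome: Callable[[str, str], float],  # (goal, outcome) -> match quality
) -> str:
    # Pick whichever available action is predicted to best satisfy the stated goal.
    return max(candidate_actions, key=lambda a: score_outcome(goal, simulate(a)))

# Toy demo: the same agent code optimizes for whichever goal you hand it.
def word_overlap(goal: str, outcome: str) -> float:
    return len(set(goal.split()) & set(outcome.split()))

toy_world = {"water the plant": "the plant stays healthy",
             "unplug the fridge": "the food spoils"}
best = goal_complete_agent("keep the plant healthy",
                           toy_world.keys(), toy_world.get, word_overlap)
print(best)  # -> "water the plant"
```

The point of the analogy: just as the interpreter’s code never changes when you swap programs, the agent’s optimization machinery never changes when you swap goals.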

Let’s put the analogy to work:

Imagine the year is 1970 and you’re explaining to me how all video games have their own logic circuits.

Steve Wozniak hand-designs a circuit for Breakout (Atari 1976), without using a Turing-complete architecture.

“Breakout’s gameplay was simple enough to NOT be Turing-complete. That’s why optimizing its circuit by hand could save money.”

You’re not wrong, but you’re also apparently not aware of the importance of Turing-completeness and why it should lead you to expect architectural convergence across video games.

Flash forward to today. The fact that you can literally emulate Doom inside any sufficiently complex modern video game (through a weird, tedious process with a large constant-factor overhead, but still) is a profoundly important observation: all video games are computations.


More precisely, two things are worth noticing about the Turing-completeness era that followed the specific-circuit era:

  1. The gameplay specification of sufficiently sophisticated video games, like most titles being released today, embeds the functionality of Turing-complete computation.

  2. Computer chips replaced application-specific circuits for the vast majority of applications, even for simple video games like Breakout whose specified behavior isn’t Turing-complete.

Expecting Turing-Completeness

From Gwern’s classic page, Surprisingly Turing-Complete:

[Turing Completeness] is also weirdly common: one might think that such universality as a system being smart enough to be able to run any program might be difficult or hard to achieve, but it turns out to be the opposite—it is difficult to write a useful system which does not immediately tip over into TC.

“Surprising” examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult...

Computation is not something esoteric which can exist only in programming languages or computers carefully set up, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.

The Cascading Style Sheets (CSS) language that web pages use for styling HTML is a pretty representative example of surprising Turing Completeness:

Obligatory whimsical demo

If you look at any electronic device today, like your microwave oven, you won’t see a microwave-oven-specific circuit design. What you’ll see in virtually every device is the same two-level architecture:

  1. A Turing-complete chip that can run any program

  2. An installed program specifying application-specific functionality, like a countdown timer (see the sketch just after this list)
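Here is a minimal sketch of that two-level split: a generic “chip” that just runs whatever program gets installed on it. (Everything here, like GenericChip, is made up for illustration; real microwave firmware obviously looks nothing like this.)

```python
import time
from typing import Callable, Optional

# Level 1: a generic, reprogrammable "chip" that runs whatever it's given.
class GenericChip:
    def __init__(self) -> None:
        self.program: Optional[Callable[[], None]] = None

    def install(self, program: Callable[[], None]) -> None:
        self.program = program

    def run(self) -> None:
        if self.program is not None:
            self.program()

# Level 2: an application-specific program, e.g. a microwave countdown timer.
def countdown_timer(seconds: int) -> None:
    for remaining in range(seconds, 0, -1):
        print(f"{remaining}...")
        time.sleep(1)
    print("Ding!")

# The same chip becomes a microwave, a toothbrush, or a guidance computer
# depending only on which program is installed.
chip = GenericChip()
chip.install(lambda: countdown_timer(3))
chip.run()
```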

It’s a striking observation that your Philips Sonicare™ toothbrush and the guidance computer on the Apollo moonlander are now architecturally similar. But with a good understanding of Turing-completeness, you could’ve predicted it half a century ago. You could’ve correctly anticipated that the whole electronics industry would abandon application-specific circuits and converge on a Turing-complete architecture.

Expecting Goal-Completeness

If you don’t want to get blindsided by what’s coming in AI, you need to apply the thinking skills of someone who can look at a Breakout circuit board in 1976 and understand why it’s not representative of what’s coming.

When people laugh off AI x-risk because “LLMs are just a feed-forward architecture!” or “LLMs can only answer questions that are similar to something in their data!” I hear them as saying “Breakout just computes simple linear motion!” or “You can’t play Doom inside Breakout!”

OK, BECAUSE AI HASN’T CONVERGED TO GOAL-COMPLETENESS YET. We’re not living in the convergent endgame yet.

When I look at GPT-4, I see the furthest step that’s ever been taken to push out the frontier of outcome-optimization power in an unprecedentedly large and general outcome-representation space (the space of natural-language prompts).


And I can predict that algorithms which keep performing better on both of these axes (optimization power, and generality of the outcome-representation space) will, one way or another, converge to the dangerous endgame of goal-complete AI.

By the 1980s, when you saw Pac-Man in arcades, it was knowable to insightful observers that the Turing-complete convergence was happening. Pac-Man itself wasn’t a 100% clear piece of evidence: after all, its game semantics aren’t Turing-complete AFAIK.

But still, anyone with a deep understanding of computation could tell that the Turing-complete convergence was in progress. They could tell that the complexity of the game was high enough that it was probably already running on a Turing-complete stack. Or that, if it wasn’t, then it would be soon enough.


A video game is, at bottom, a set of executable information-processing instructions. That’s why when a game designer specs out the gameplay to their engineering team, they have no choice but to use computational concepts in their description, such as what information the game tracks, how the game state determines what’s rendered on the screen, and how various actions update the game state.
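As a toy illustration (this is not any real game’s logic, just a sketch of those three computational ingredients), a gameplay spec boils down to tracked state, an update rule, and a render rule:

```python
from dataclasses import dataclass

# (1) What information the game tracks (drastically simplified).
@dataclass
class GameState:
    paddle_x: int = 40
    ball_x: int = 40
    ball_y: int = 10
    score: int = 0

# (2) How player actions update the game state.
def update(state: GameState, action: str) -> GameState:
    if action == "left":
        state.paddle_x -= 1
    elif action == "right":
        state.paddle_x += 1
    state.ball_y -= 1  # toy "physics": the ball just drifts
    return state

# (3) How the current state determines what gets rendered on screen.
def render(state: GameState) -> str:
    return f"paddle@{state.paddle_x} ball@({state.ball_x},{state.ball_y}) score={state.score}"

state = GameState()
for action in ["left", "left", "right"]:
    state = update(state, action)
print(render(state))  # paddle@39 ball@(40,7) score=0
```

Any spec a designer can write in those terms is, by definition, a computation that the convergent Turing-complete stack can run.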

It also turned out that word processors and spreadsheets are cases of executing information-processing instructions. So office productivity tools ended up being built on the same convergent architecture as video games.

Oh yeah, and the technologies we use for reading, movies, driving, shopping, cooking… they also turned out to be mostly cases of systems executing information-processing instructions. They too all converged to being lightweight specification cards inserted into Turing-complete hardware.

Recapping the Analogy

Why will AI converge to goal-completeness?

Turing-complete convergence happened because the tools of our pre-computer world were just clumsy ways to process information.

Goal-complete convergence will happen because the tools of our pre-AGI world are still just clumsy ways to steer the future toward desirable outcomes.


Even our blind idiot god, natural selection, recently stumbled onto a goal-complete architecture for animal brains. That’s a sign of how convergent goal-completeness is as a property of sufficiently intelligent agents. As Eliezer puts it: “Having goals is a natural way of solving problems.”

Or as Ilya Sutskever might urge: Feel the goal-complete AGI.