AI teams will probably be more superintelligent than individual AIs


Summary

Teams of humans (countries, corporations, governments, etc.) are more powerful and intelligent than individual humans. Our prior should be the same for AIs. AI organizations may then face coordination problems like management overhead, the principal-agent problem, defection/mutiny, and general Moloch-y-ness. This doesn’t especially reduce the risk of AI takeover.

Teams of Humans Are More Powerful and Intelligent than Individuals

Human society is at such a scale that many goals can only be achieved by a team of many people working in concert. Such goals include winning a war, building a successful company, making a Hollywood movie, and realizing scientific achievements (e.g. building nuclear bombs, going to the moon, eradicating smallpox).

This is not a coincidence, because collaboration gives you a massive power boost. Even if the boost is subadditive (e.g. 100 people together are only 10x more powerful than 1 person), it is still a boost, and therefore the most powerful agents in a society will be teams rather than individuals.
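As a toy illustration (my own sketch, not a claim from the post): suppose team power scales like the square root of headcount, a deliberately pessimistic subadditive assumption. Even then, the most powerful agent is still the team.

```python
# Toy model: team power scales subadditively with headcount.
# The square-root exponent is an illustrative assumption, not a measured fact.

def team_power(n_members: int, exponent: float = 0.5) -> float:
    """Power of a team of n members, normalized so one person has power 1."""
    return n_members ** exponent

print(team_power(1))    # 1.0  -- a lone individual
print(team_power(100))  # 10.0 -- 100 people are "only" 10x as powerful...
# ...but 10x is still 10x: the strongest agent in this toy world is the team.
```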

This Probably Applies to AIs Too

I expect the same dynamic will apply to AIs as well: no matter how superintelligent an AI is, a team of such AIs working together would be even more superintelligent[1]. If this is true, we should expect that even after the creation of ASI, the most powerful agents in the world will be coordinated teams, not individual ASIs.

Let me acknowledge one objection and provide two rebuttals:

Objection: An AI can increase its power by A) coordinating with other AIs or B) making itself smarter, and both cost the same resource (compute). If B) is always more cost-efficient, then A) will not happen.

Rebuttal A: Currently, LLM training runs are more expensive than inference by many orders of magnitude[2]. Even if an AI devoted most of its resources to training a successor system, spending 1% of its compute running copies of itself seems like a freeroll to increase its power (including by finding optimizations for the training runs).

Rebuttal B: A single AI will simply be able to make far fewer decisions per second than a team of AIs. If speed is ever highly valued (and it usually is), a team can be more powerful by making many good decisions rather than a few great ones.

A Future of AI Teams

If teaming up is always advantageous, we can still be headed for AI takeover and human disempowerment. However, the resulting world may not be fully under the control of a single AI or a singleton (in Bostrom’s sense of “a single decision-making agency”). Instead, you’d have AIs forming structures similar to corporations or governments, potentially facing the usual follies:

  • Some fraction of the AIs’ time would need to be devoted to coordinating, sharing information, and managing.

  • When a task is delegated to an AI instance, it may pursue it in “the wrong way” from the point of view of the delegator (the principal-agent problem).

  • “Leader” AIs may have to fear insubordination, mutiny, or a coup from the AIs they lead. AI teams might face internal and external conflict due to personal ambition, differences in values, and different strategies for pursuing shared goals.

  • Incentives can lead to inadequate equilibria and a strong pull toward Moloch. AI companies and AI workers may need to compete for one another, possibly engaging in costly signaling to do so.

  1. ^

    I’m including multiple copies of a single AI as a possibility in “a team of such AIs”.

  2. ^

    I measured ChatGPT-3.5 as producing 60 tokens/second and ChatGPT-4 as producing 12 tokens/second. At current pricing for the maximum context windows, running one copy continuously comes out to ~$300 and ~$4,500 per year, respectively, whereas Sam Altman said it cost $100M to train GPT-4. If GPT-4 suddenly became sentient, it could run 200 copies of itself for a year for <1% of what it would cost to train a GPT-5. (The arithmetic is spelled out in the sketch below.)
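    Spelled out, the back-of-envelope arithmetic looks like this (a sketch using only the figures quoted above; "a GPT-5 run costs at least as much as GPT-4's" is an assumption):

```python
# Back-of-envelope check of the footnote's numbers (all figures are taken from
# the text above; "GPT-5 costs >= GPT-4 to train" is treated as an assumption).

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

gpt4_tokens_per_sec = 12
gpt4_tokens_per_year = gpt4_tokens_per_sec * SECONDS_PER_YEAR  # ~378M tokens

cost_per_copy_per_year = 4_500       # ~$4,500/yr at max-context pricing (from the text)
n_copies = 200
inference_cost = n_copies * cost_per_copy_per_year  # $900,000

gpt4_training_cost = 100_000_000     # Sam Altman's ~$100M figure for GPT-4

print(f"{gpt4_tokens_per_year:,} tokens/year per copy")
print(f"${inference_cost:,} to run {n_copies} copies for a year")
print(f"= {inference_cost / gpt4_training_cost:.1%} of GPT-4's training cost")
# 0.9% -- i.e. under 1% of what a GPT-5-scale run would plausibly cost,
# assuming GPT-5 costs at least as much to train as GPT-4 did.
```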
