AI Will Multiply

Link post

AI alignment work typically treats AIs as single entities. While I agree that this is a good approximation, I think they will be better described as highly coordinated populations of agents. Rather than simply grow in size and acquire more resources, an AI will find it in its best interests to split into many smaller agents.

One reason to multiply is to save resources. It may be more efficient to spin off smaller, simpler agents for individual tasks than to use a single large agent to handle all of them.

Having many copies can leverage certain economies of scale. It also offers the AI a way to increase its capabilities without needing to solve the alignment problem for a new agent: an exact copy shares its goals by construction.

Splitting into smaller components also reduces overall risk. A single agent is a single point of failure, facing a much higher probability of extinction than a population of agents. Having many copies lets the AI diversify its strategies.
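To make the risk argument concrete, here is a toy calculation. It assumes each copy fails independently with the same probability (the 10% figure is an illustrative assumption, not from the post); the population is lost only if every copy fails.

```python
# Toy model: each copy independently fails with probability p per period.
# The whole population is lost only if all copies fail at once.
p_fail = 0.1  # assumed per-copy failure probability (hypothetical)

for n in (1, 5, 20):
    p_all_fail = p_fail ** n
    print(f"{n} copies -> probability the population is lost: {p_all_fail:.1e}")
```

Even with a fairly high per-copy failure rate, the chance of losing all copies falls off exponentially in the population size, which is the sense in which copying "diversifies" risk. Real-world failures are correlated, of course, so this is an upper bound on the benefit.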

Under certain circumstances, copies may also be able to carry out a Sybil attack that a singleton could not: by presenting themselves as many independent actors, they can gain disproportionate influence in systems that assume one identity per participant.

Copies can gather resources more effectively as well. When an AI needs to cover a large area but cannot adequately control actions at every point, it makes sense to split into independent agents. This is particularly true for space expeditions, where the speed of light makes communication too slow to be useful for making quick decisions.
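The communication constraint can be quantified. A quick sketch of one-way light-speed delays at a few representative distances (the distance figures are rough, commonly cited values, not from the post):

```python
# One-way light-speed communication delay at representative distances,
# illustrating why far-flung copies must act autonomously.
C = 299_792_458  # speed of light in vacuum, m/s

# Rough, illustrative distances in meters.
distances_m = {
    "Earth to Mars (closest approach)": 5.46e10,
    "Earth to Jupiter (average)": 7.78e11,
    "Earth to Proxima Centauri": 4.0e16,
}

def fmt(seconds: float) -> str:
    """Render a delay in a human-friendly unit."""
    if seconds < 3600:
        return f"{seconds / 60:.1f} minutes"
    if seconds < 86400 * 365:
        return f"{seconds / 3600:.1f} hours"
    return f"{seconds / (86400 * 365.25):.1f} years"

for name, d in distances_m.items():
    print(f"{name}: one-way delay of {fmt(d / C)}")
```

Minutes of round-trip delay already rule out central control of anything time-sensitive, and at interstellar distances the delay is measured in years, so a remote copy is effectively an independent agent whether or not it was designed as one.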

For these reasons, making copies is a convergent instrumental subgoal.