Robin Hanson on Lumpiness of AI Services

In this post, Robin Hanson argues that AI systems will consist of many separate components, rather than one big uniform lump, by analogy with the many separate firms that make up the economy. Some excerpts:

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)...

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us...

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task...
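To make Drexler's separation between task competence and learning concrete, here is a minimal sketch (my illustration, not from the report; the service names and interfaces are hypothetical):

```python
# A minimal sketch of Drexler's split between "being competent at a task"
# and "learning to become competent at it". All names are hypothetical,
# chosen only to illustrate the structural idea.
from typing import Protocol


class Translator(Protocol):
    """A narrow task service: maps source text to target text, nothing more."""
    def translate(self, text: str) -> str: ...


class TranslatorTrainer(Protocol):
    """A separate service whose own narrow task is producing translators.

    It has no authority over how, or whether, its outputs are deployed;
    that decision sits at the interface between services.
    """
    def train(self, corpus: list[tuple[str, str]]) -> Translator: ...


def deploy(trainer: TranslatorTrainer, corpus: list[tuple[str, str]]) -> Translator:
    # The hand-off between the two services happens here, at a visible,
    # auditable interface, unlike the internals of either service.
    return trainer.train(corpus)
```

The point is only structural: each service exposes a narrow interface with limited scope, and the service that produces a competence is a different object from the service that exercises it.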

All these critics seem to agree with Drexler that it is harder to see and control the insides of services, relative to interfaces between them. Where they disagree is in seeing productive efficiency considerations as perhaps creating large natural service “lumps.” A big lumpy service does a large set of tasks with a wide enough scope, where it would be much less efficient to break that up into many services, and where we should be scared of what this lump might do if driven by the wrong values.

Note the strong parallels with the usual concern about large firms in capitalism. The popular prediction that unregulated capitalism would produce a few huge firms is based on more than productive efficiencies; people also fear market power, collusion, and corruption of governance. But big size induced by productive efficiencies of scale is definitely one of the standard concerns.

Economics and business have large literatures not only on the many factors that induce large versus small firms, but also on the particular driver of productive efficiencies. This often goes under the label "make versus buy"; making something within a firm, rather than buying it from other firms, tends to make a firm larger. It tends to be better to make things that need to be tightly coordinated with core firm choices, and where it is harder to write useful arm's-length contracts. Without such reasons to be big, smaller tends to be more efficient. Because of these effects, most scholars today don't think unregulated firms would become huge, contrary to Marx, Engels, and popular opinion.

Alas, as seen in the above criticisms [links in a different spot in the original post], it seems far too common in the AI risk world to presume that past patterns of software and business are largely irrelevant, as AI will be a glorious new shiny unified thing without much internal structure or relation to previous things. (As predicted by far views.) The history of vastly overestimating the ease of making huge firms in capitalism, and the similar typical newbie error of overestimating the ease of making large unstructured software systems, are seen as largely irrelevant.