Corporations vs. superintelligences

Last edit: 25 Mar 2017 6:41 UTC by Eliezer Yudkowsky

It is sometimes suggested that corporations are relevant analogies for superintelligences. To evaluate this analogy without simply falling prey to the continuum fallacy, we need to consider which specific thresholds from the standard list of advanced agent properties can reasonably be said to apply in full force to corporations. This suggests roughly the following picture:

Sometimes discussion of analogies between corporations and hostile superintelligences focuses on a purported misalignment with human values.

As mentioned above, corporations are composed of consequentialist agents (humans), and can often deploy consequentialist reasoning to that extent. But the humans inside a corporation do not always pull in the same direction, and this can lead to non-consequentialist behavior by the corporation considered as a whole; e.g., an executive may decline to maximize the company's financial gain out of fear of personal legal liability, or simply because of other life concerns.

Some corporations have on many occasions acted psychopathically with respect to the outside world, e.g., tobacco companies. However, even tobacco companies are still composed entirely of humans who might balk at being, e.g., turned into paperclips. It is possible to imagine circumstances under which a Board of Directors might talk itself into pressing a button that turned everything, including its members, into paperclips. However, unified action to pursue a corporate interest that runs contrary to the non-financial personal interests of all of the corporation's executives, directors, employees, and shareholders does not well characterize the behavior of most corporations under most circumstances.

The conditions for the coherence theorems implying consistent expected utility maximization are not met in corporations, just as they are not met in the constituent humans. On the whole, big-picture corporate strategy seems to behave more like Go than like airplane design: corporations are usually strategically dumber than their smartest employee, and often seem to be strategically dumber than their CEOs. Running down the list of convergent instrumental strategies suggests that corporations exhibit some such behaviors sometimes, but not all of them, nor all of the time. Corporations sometimes act like they wish to survive, but sometimes act like their executives are lazy in the face of competition. The directors and employees of the company will not go to literally any lengths to ensure the corporation's survival, or protect the corporation's (nonexistent) representation of its utility function, or converge their decision processes toward optimality (again consider the lack of internal prediction markets to aggregate epistemic capabilities on near-term resolvable events, and the lack of any known method for agglomerating human instrumental strategies into an efficient whole).
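The claim about coherence theorems can be unpacked with the standard "money pump" argument (not from this article; the setup below is an illustrative sketch with invented names): an agent whose preferences are cyclic rather than consistent with an expected utility function will pay a small fee for each swap it prefers, and can therefore be led around the cycle indefinitely, losing resources on every lap.

```python
# Illustrative sketch of the classic money-pump argument behind the
# coherence theorems. An agent with cyclic preferences A > B > C > A
# pays a small fee for each swap to a good it prefers, and so can be
# cycled forever, ending where it started but strictly poorer.
# Agents representable as consistent expected utility maximizers are
# exactly the agents that cannot be exploited this way.

def run_money_pump(laps, fee_cents=1):
    # Given what the agent currently holds, `wants` names the good it
    # prefers and will pay `fee_cents` to swap into (cyclic preference).
    wants = {"A": "C", "C": "B", "B": "A"}
    holding, spent = "A", 0
    for _ in range(3 * laps):  # three swaps complete one full lap
        holding = wants[holding]
        spent += fee_cents
    return holding, spent

holding, spent = run_money_pump(laps=10)
# After 10 laps the agent holds exactly what it started with ("A"),
# having paid 30 cents for the privilege.
```

Since neither corporations nor their constituent humans have preferences immune to this kind of exploitation, the coherence theorems do not license modeling either as a consistent expected utility maximizer.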

Corporations exist in a strongly multipolar world; they operate in a context that includes other corporations of equal size, alliances of corporations of greater size, governments, an opinionated public, and many necessary trade partners, all of whom are composed of humans running at equal speed and of equal or greater intelligence and strategic acumen. Furthermore, many of the resulting compliance pressures are applied directly to the individual personal interests of the directors and managers of the corporation, i.e., the decision-making CEO might face individual legal sanction or public-opinion sanction independently of the corporation’s expected average earnings. Even if the corporation did, e.g., successfully assassinate a rival’s CEO, not all of the resulting benefits to the corporation would accrue to the individuals who had taken the greatest legal risks to run the project.

The considerations above constitute potential strong disanalogies between corporations and a paperclip maximizer.

To the extent one credits the dissimilarities above as relevant to whatever empirical question is at hand, arguing by analogy from corporations to superintelligences—especially under the banner of “corporations are superintelligences!”—would be an instance of the noncentral fallacy or reference class tennis. Using the analogy to argue that “superintelligences are no more dangerous than corporations” would be the “precedented therefore harmless” variation of the harmless supernova fallacy. Using the analogy to argue that “corporations are the real danger,” without having previously argued out that superintelligences are harmless or that superintelligences are sufficiently improbable, would be derailing.