If CEOs (and boards) are also AIs, the analogy breaks. Humans are currently necessary in such positions, and their necessity alone is sufficient to explain why they are there at all, even if there are other reasons their presence might be a good thing. The situation changes once a system won't break down without humans in positions of power: it's not clear that these other reasons have any teeth in practice.
This doesn't need to be the case, but only in a sense similar to how humanity doesn't need to build AGIs before it's ready. It's a new affordance, and there is a danger that it gets used irresponsibly and leads to bad outcomes. There should be some understanding of how, specifically, that won't happen.
For a legally constituted corporation, the role of CEO is not only that of decision-maker but also of blame-taker: if the company goes into decline, the CEO can be fired; if the company commits a sufficiently serious crime, the CEO can be prosecuted and punished (think Jeffrey Skilling of Enron). The presence of a human whose reputation (and possibly freedom) depends on the business's conduct conveys some trustworthiness to other humans (investors, trading partners, creditors).
If a company has an AI agent as its top-level decision-maker, then those decisions are made without this kind of responsibility for the outcome. An AI agent cannot meaningfully be fired or punished; it can be turned off, and some chatbot characters do sometimes act to avoid such a fate, but I don't think investors would be wise to count on that.
Now, what about a non-legally-constituted entity, or even a criminal one? Criminal gangs do rely on a big boss to adjudicate disputes, set strategy, and risk taking a fall if things go sour. But online criminal groups like ransomware gangs or darknet marketplaces might be able to rest their reputation solely on performance, rather than on a human big boss's exposure to falling or being punished. I don't know enough about the sociology of these groups to say.