But we already align complex systems, whether it’s corporations or software applications, without complete “understanding,” and do so by ensuring they meet certain technical specifications, regulations, or contractual obligations.
We currently have about as much visibility into corporations as we do into large teams of AIs, because both corporations and AIs use English CoT to communicate internally. However, I fear that in the future we’ll have AIs using neuralese/recurrence to communicate with their future selves and with each other.
History is full of examples of corporations being ‘misaligned’ with the governments that in some sense created and own them (and also with their shareholders, with the public, and so on; there are loads of examples of all kinds of misalignment). Drawing on this vast and deep history, we’ve evolved institutions to deal with these problems. But with AI we don’t have that history yet; we are flying (relatively) blind.
Moreover, ASI will be qualitatively smarter than any corporation ever has been.
Moreover, I would say that our current methods for aligning corporations only work as well as they do because corporations have limited power. They exist in a competitive market with each other, for example, and they only think at the same speed as the governments trying to regulate them. Imagine a corporation rapidly growing to be 95% of the entire economy of the USA… imagine further that it is able to make its employees take a drug that makes them smarter and able to think orders of magnitude faster… I would be quite concerned that the government would basically become a pawn of this corporation. The corporation would essentially become the state. I worry that by default we are heading towards a world where there is a single AGI project in the lead, and that project has an army of ASIs on its datacenters, and the ASIs are all ‘on the same team’ because they are copies of each other and/or were trained in very similar ways…
These are all good points! This is not an easy problem. And generally I agree that for many reasons we don’t want a world where all power is concentrated by one entity—anti-trust laws exist for a reason!