Alignment is structurally impossible under competitive pressure
Alignment contrasts with control as a means to AI safety.
Alignment roughly means the AI has goals or values similar to human ones (which are assumed, without much evidence, to be similar across humans), so that it will do what we want because it's what it wants.
Control means that it doesn’t matter what the AI wants, if it wants anything.
In short, there is plenty of competitive pressure towards control, because no one wants an AI they can't control. Control is part of capability.