I agree that trying to “jump straight to the end”—the supposedly-aligned AI pops fully formed out of the lab like Athena from the forehead of Zeus—would be bad.
And yet some form of value alignment still seems critical. You might prefer to think of value alignment as the logical continuation of training Claude not to help you build a bomb (or commit a coup). Such safeguards seem like a pretty good idea to me. But as the model becomes smarter and more situationally aware, and is expected to defend against subversion attempts that involve more of the real world, training for this behavior becomes more and more value-inducing, to the point where it eventually becomes unsafe unless you've made progress on getting the model to learn values in a way that humans would endorse.