Given that we’re not especially powerful optimizers relative to what’s possible (we’re only powerful relative to what exists on Earth…for now), this is at best an existence proof for the possibility of alignment in optimizers of fairly limited power. This is to say I don’t think this result is very relevant to the discussion of a sharp left turn in AI because, even if someone buys your argument, AIs are not necessarily like humans in the relevant ways, and so there's little reason to expect them to be aligned with anything in particular.