Thanks for responding. While I don’t expect my somewhat throwaway comment to massively update you on the difficulty of alignment, I think that moving the focus to your overall view of the difficulty of alignment is dodging the question a little. In my mind, we’re talking about one of the reasons alignment is expected to be difficult. I’m certainly not suggesting it’s the only reason, but I feel like we should be able to talk about this issue on its own without bringing other concerns in.
In particular, I’m saying: this process of rationalization you’re raising is not super hard to predict for someone with a reasonable grasp of the AI’s general behavioural tendencies. It’s much more likely, I think, that the AI sorts out its goals using familiar heuristics adapted for this purpose than that it reorients its behaviour around some odd set of rare behavioural tendencies. In fact, I suspect the heuristics for goal reorganisation will be particularly simple with respect to most of the AI’s behavioural tendencies (the AI wants them to be robust precisely in the cases where its usual behavioural guides are failing). Plus, given that we’re discussing tendencies that (according to the story) precede competent, focussed rebellion against creators, it seems like training the right kinds of tendencies is challenging in a normal engineering sense (you want to instil the right tendencies and have them generalise the right way, etc.) but not in an “outsmart a hostile superintelligence” sense.
Actually, one reason I’m doubtful of this story is that maybe it’s just super hard to deliberately preserve any kind of values/principles over generations – for us, for AIs, for anyone. On that view, misalignment happens not because the AI decides on bad values but because it can’t resist the environmental pressure to drift. That possibility does seem pessimistic to me, for “gradual disempowerment” type reasons.
With regard to your analogy: I expect the AI’s heuristics to be much more sensible from the designers’ POV than the child’s are from the parent’s, and that large quantitative difference is enough for me here.
you need to be asking the right questions during that experimentation, which most AI researchers don’t seem to be.
Curious about this. I have takes here too; they’re a bit vague, but I’d like to know if they’re at all aligned with yours.