Namely, that I don’t think we can talk sensibly about an AI having “beneficial goal-directedness” without situational awareness. For instance, it’s of little use to have an AI with the goal of “ensuring human flourishing” if it doesn’t understand the meaning of “flourishing” or “human”. And, without situational awareness, it can’t understand either; at best we could have some proxy or pointer towards these key concepts.
Another way of saying this is that inner alignment is more important than outer alignment.
The key challenge seems to be to get the AI to generalise properly; even initially poor goals can work if generalised well. For instance, a money-maximising trade-bot AI could be perfectly safe if it notices that money, in its initial setting, is just a proxy for humans being able to satisfy their preferences.
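To make the “proxy that generalises well” idea concrete, here is a minimal toy sketch (my own illustration, not something from the original exchange; the names `WorldState`, `proxy_objective`, and `generalised_objective` are hypothetical): the trade-bot’s observable proxy (money) agrees with the latent target (preference satisfaction) in the initial setting, and only the reading that treats the proxy as a pointer to that target stays safe once the two come apart.

```python
# Toy illustration of a proxy goal vs. the target it originally pointed at.
# All names here are invented for this sketch.

from dataclasses import dataclass

@dataclass
class WorldState:
    money: float                    # the observable proxy
    preference_satisfaction: float  # the latent target the proxy stood for

def proxy_objective(state: WorldState) -> float:
    """Literal reading of the goal: maximise money, whatever it now tracks."""
    return state.money

def generalised_objective(state: WorldState) -> float:
    """'Generalise properly': treat money as a pointer to what it was a proxy
    for in the initial setting, and optimise that instead."""
    return state.preference_satisfaction

# In the initial (training) setting the two readings agree...
train = WorldState(money=10.0, preference_satisfaction=10.0)
assert proxy_objective(train) == generalised_objective(train)

# ...but after a shift (say, the bot finds a way to mint money that helps
# no one) they diverge, and only the generalised reading remains benign.
deploy = WorldState(money=100.0, preference_satisfaction=2.0)
print(proxy_objective(deploy), generalised_objective(deploy))  # 100.0 vs 2.0
```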
I’ve also called this “generalise properly” part methodological alignment in this comment. And I conjectured that, given methodological alignment and inner alignment, outer alignment follows automatically, so we shouldn’t even need to care about it separately. Which also seems like what you are saying here.
Another way of saying this is that inner alignment is more important than outer alignment.
Interesting. My intuition is that inner alignment has nothing to do with this problem. It seems that different people view the inner vs outer alignment distinction in different ways.
Agreed.