Here is part of Paul’s definition of intent alignment:
In particular, this is the problem of getting your AI to try to do the right thing, not the problem of figuring out which thing is right. An aligned AI would try to figure out which thing is right, and like a human it may or may not succeed.
So in your first example, the partition seems intent aligned to me.