I would use ‘outcome alignment’ for 2 (and agree with ‘intent alignment’ for 1). In other words, I see the important distinction between ‘outcome’ and ‘intent’ being in the first part of the options, not the second.
I’d be inclined to see 3 and 4 as variations on 1 and 2 where what the human wants is for the AI to figure out some notion of their true goals and pursue/achieve that.