The AI system accepts all previous feedback, but it may or may not trust anticipated future feedback. In particular, it should be trained not to trust feedback it would get by manipulating humans (so that it doesn’t see itself as having an incentive to manipulate humans to give specific sorts of feedback).
I will call this property of feedback “legitimacy”. The AI has a notion of when feedback is legitimate, and it needs to work to keep feedback legitimate (by not manipulating the human).
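As a toy illustration (purely hypothetical; the function name, the `manipulative` flag, and the scoring rule are my own assumptions, not something specified here), the "don't trust feedback you'd get by manipulating the human" rule could look like an action scorer that simply excludes predicted feedback from manipulative plans:

```python
# Hypothetical sketch: score an action using only feedback the agent predicts
# it would get *without* manipulating the human. The data layout and the
# `manipulative` flag are illustrative assumptions.

def expected_value(candidate_plans):
    """Average predicted human feedback over legitimate (non-manipulative) plans."""
    legitimate = [p for p in candidate_plans if not p["manipulative"]]
    if not legitimate:
        return 0.0  # no legitimate feedback anticipated -> no credit for this action
    # Manipulated feedback is dropped entirely, so it cannot make an action look better.
    return sum(p["predicted_feedback"] for p in legitimate) / len(legitimate)

plans = [
    {"manipulative": False, "predicted_feedback": 0.6},
    {"manipulative": True,  "predicted_feedback": 0.99},  # e.g. flattery or pressure
]
print(expected_value(plans))  # 0.6 -- the manipulated 0.99 is ignored
```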
Legitimacy is good. But if an AI that is supposed to be intent-aligned to the user would find that it has an "incentive" to purposefully manipulate the user into giving particular feedback unless it pretends it would ignore that feedback, then it is already misaligned, and that misalignment should be dealt with directly IMO. This feels to me like a band-aid over a much more serious problem.