This does not seem to be the case when the AIs are unable to read each other’s minds. Your AI can be expected to lie to others with more tactical effectiveness than you can lie indirectly via deceiving it. Even in that case it would be better to let the AI rewrite itself for you.
On a similar note, being able to directly extract a sincere and accurate utility function from the participants’ brains leaves the system vulnerable to exploitation. Individuals can rewrite their own preferences strategically in much the same way that an AI can. Future-me may not be happy, but present-me got what he wants, and I don’t (necessarily) have to care about future-me.