What if it follows human norms with dangerously superhuman skill?
Suppose humans had a really strong norm that you were allowed to say whatever you like, and encouraged to say things others will find interesting.
Among humans, the most we can exert is a mild optimization pressure toward the not-totally-dull.
The AI produces a sequence that effectively hacks the human brain and sets interest to maximum.
I agree this is a potential problem; I would classify it under the problem category “Powerful AI likely leads to rapidly evolving technologies, which might require rapidly changing norms” above.
We already have a problem of this sort, with media acting as an amplifier for controversy rather than truth.