Keep in mind also that humans often seem to simply want to hurt each other, despite what they claim, and have more motivations and rationalizations for this than you can count: religious dogma, notions of “justice”, spite, envy, hatred of any number of human traits, deterrence, revenge, sadism, curiosity, reinforcement of hierarchy, preservation of tradition, ritual, “suffering adds meaning to life”, sexual desire, and plenty more I haven’t even mentioned. Sometimes it seems half of human philosophy is devoted to finding ever more rationalizations to cause suffering, or to avoid caring about the suffering of others.
An AI would likely not have all this endless baggage pushing it toward cruelty. Causing human suffering is not an instrumentally convergent goal: it isn’t something that helps achieve almost any final goal, the way acquiring resources does. So most AIs will not have it as a persistent instrumental or terminal goal, unless some humans manage to “align” them into it. Most humans, by contrast, DO have causing or upholding some manner of suffering as a persistent instrumental or terminal goal.
Maybe hypocrisy in the sense that someone acts as if they agree with the social consensus in order to avoid persecution, when in fact they don’t, and are privately doing things which don’t conform to it. Legalized blackmail would encourage people not to mind their own business, turning them into morality police or witch hunters even over things which don’t actually hurt them or anyone else.
For a particularly brutal and relatively recent example, consider the effect legalized blackmail would have had on the gay community before widespread acceptance.