Moreover, it seems like there’s an incentive gradient at work here. The only way to gauge how costly it is for someone to act decently is to ask them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.
I agree that the incentives you describe exist, but the analysis cuts both ways: the more someone claims to have been harmed by allegedly-nasty speech, the more the balance of discussion will reward them by letting them restrict speech while reaping the rewards of getting to achieve their political and interpersonal goals with those speech restrictions.
Interpersonal utility aggregation might not be the right way to think of these kinds of situations. If Alice says a thing even though Bob has told her that the thing is nasty and that Alice is causing immense harm by saying it, Alice’s true rejection of Bob’s complaint probably isn’t, “Yes, I’m inflicting _c_ units of objective emotional harm on others, but modifying my speech at all would entail _c_+1 units of objective emotional harm to me, therefore the global utilitarian calculus favors my speech.” It’s probably: “I’m not a utilitarian and I reject your standard of decency.”