The offender, for eir part, should stop offending as soon as ey realizes that the amount of pain eir actions cause is greater than the amount of annoyance it would take to avoid the offending action, even if ey can’t understand why it would cause any pain at all.
In a world where people make decisions according to this principle, one has the incentive to self-modify into a utility monster who feels enormous suffering at any actions of other people one dislikes for whatever reason. And indeed, we can see this happening to some extent: when people take unreasonable offense and create drama to gain concessions, their feelings are usually quite sincere.
You say, “pretending to be offended for personal gain is… less common in reality than it is in people’s imaginations.” That is indeed true, but only because people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing’s to be gained.
Beautifully put. So according to your objection, if I want to increase net utility, I have two considerations to make:
reducing the offense I cause directly increases net utility (Yvain)
reducing the offense I cause creates a world with stronger incentives for offense-taking, which is likely to substantially decrease net utility in the long-term (Vladimir_M)
This seems like a very hard calculation. My intuition is that item 2 is more important since it’s a higher level of action, and I’m that kind of guy. But how do I rationally make this computation without my own biases coming in? My own opinions on “draw Mohammed day” have always been quite fuzzy and flip-floppy, for example.
I have a bad head for history. Do you know of anyone who has done this for me, à la Jared Diamond, for the case of free speech? It seems like it may still be hard to find someone who is plausibly unbiased on such a topic.
Perhaps “freedom of speech” (or whatever variable to call it) is so tightly bundled with other variables—most of all affluence—that it’s impossible to assess properly.
OTOH, if this bundling is evident across nations, cultures and time, it probably means that it truly is an important part of a net desirable society?
I’m not sure people can voluntarily self-modify in this way. Even if it’s possible, I don’t think most real people getting offended by real issues are primarily doing this.
Voluntary self-modification also requires a pre-existing desire to self-modify. I wouldn’t take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don’t really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him. The only point at which I would take such a pill is if I already cared enough about the honor of Mohammed that I was willing to die for him. Since people have risked their lives and earned lots of prison time protesting the Mohammed cartoons, even before they started any self-modification they must have had strong feelings about the issue.
If X doesn’t offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn’t offend you? I think you might be thinking of attempts to create in-group cohesion and signal loyalty by uniting against a common “offensive” enemy, something that I agree is common. But these attempts cannot be phrased in the consequentialist manner I suggested earlier and still work—they depend on a “we are all good, the other guy is all evil” mentality.
Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.
One difference between this post and the original essay I wrote, which more people liked, was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop. Even if you don’t like the latter part, I think the advice for the former might still be useful.
If X doesn’t offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn’t offend you?
It’s a Schellingian idea: in conflict situations, it is often a rational strategy to pre-commit to act irrationally (i.e. without regards to cost and benefit) unless the opponent yields. The idea in this case is that I’ll self-modify to care about X far more than I initially do, and thus pre-commit to lash out if anyone does it.
If we have a dispute and I credibly signal that I’m going to flip out and create drama out of all proportion to the issue at stake, you’re faced with a choice between conceding to my demands or getting into an unpleasant situation that will cost more than the matter of dispute is worth. I’m sure you can think of many examples where people successfully get the upper hand in disputes using this strategy. The only way to disincentivize such behavior is to pre-commit credibly to be defiant in face of threats of drama. In contrast, if you act like a (naive) utilitarian, you are exceptionally vulnerable to this strategy, since I don’t even need drama to get what I want, if I can self-modify to care tremendously about every single thing I want. (Which I won’t do if I’m a good naive utilitarian myself, but the whole point is that it’s not a stable strategy.)
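The exploitation argument can be put in toy-model form. This is only a sketch with invented payoff numbers (none of them come from the thread): a claimant decides whether to self-modify into caring enormously about some issue, and a responder either concedes or stands firm.

```python
# Toy model of the drama/pre-commitment game. All numbers are
# illustrative assumptions, chosen only to show the incentive structure.

STAKE = 1.0        # value of the disputed issue
DRAMA_COST = 5.0   # cost of an escalated conflict to the claimant

def naive_utilitarian(reported_suffering):
    # Concedes whenever the claimant's suffering exceeds the stake.
    return "concede" if reported_suffering > STAKE else "stand firm"

def precommitted_defier(reported_suffering):
    # Ignores reported suffering entirely: never rewards threats of drama.
    return "stand firm"

def claimant_payoff(responder, self_modify):
    # Self-modification inflates the (sincere) suffering the claimant reports.
    suffering = 100.0 if self_modify else 0.5
    if responder(suffering) == "concede":
        return STAKE          # gets their way at no cost
    # Responder stands firm: a self-modified claimant escalates and pays for it.
    return -DRAMA_COST if self_modify else 0.0

for responder in (naive_utilitarian, precommitted_defier):
    gain = claimant_payoff(responder, True) - claimant_payoff(responder, False)
    print(responder.__name__, "-> gain from self-modifying:", gain)
```

Against the naive utilitarian, self-modification yields a strictly positive gain, so it gets incentivized; against the pre-committed defier it is strictly costly, which is the disincentive described above.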
Now, the key point is that such behavior is usually not consciously manipulative and calculated. On the contrary—someone flipping out and creating drama for a seemingly trivial reason is likely to be under God’s-honest severe distress, feeling genuine pain of offense and injustice. This is a common pattern in human social behavior: humans are extremely good at detecting faked emotions and conscious manipulation, and as a result, we have evolved so that our brains lash out with honest strong emotion that is nevertheless directed by some module that performs game-theoretic assessment of the situation. This of course prompts strategic responses from others, leading to a strategic arms race without end.
The further crucial point is that these game-theoretic calculators in our brains are usually smart enough to assess whether the flipping out strategy is likely to be successful, given what might be expected in response. Basically, it is a part of the human brain that responds to rational incentives even though it’s not under the control of the conscious mind. With this in mind, you can resolve the seeming contradiction between the sincerity of the pain of offense and the fact that it responds to rational incentives.
All this is somewhat complicated when we consider issues of group conflict rather than individual conflict, but the same basic principles apply.
The question is better phrased by asking what will be the practical consequences of treating an offense as legitimate and ceasing the offending action (and perhaps also apologizing) versus treating it as illegitimate and standing your ground (and perhaps even escalating). Clearly, this is a difficult question of great practical value in life, and like every such question, it’s impossible to give a simple and universally applicable answer. (And of course, even if you know the answer in some concrete situation, you’ll need extraordinary composure and self-control to apply it if it’s contrary to your instinctive reaction.)
Tentatively—game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.
However, there’s another sort of breakdown of negotiations that just occurred to me. If A asks for less than they want because they think that’s all they can get, and/or because they’re trying to do a utilitarian calculation, they aren’t going to be happy even if they get it. That means they’re likely to keep pushing for more, and then they start looking like a utility monster.
Tentatively—game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.
What do you mean by “satiated”?
From a utilitarian/consequentialist point of view, a desire being “satiated” simply means that the marginal utility gains from pursuing it further are less than opportunity cost of however much effort it takes.
Note that by this definition when a desire is satiated depends on how easy it is to pursue.
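That definition can be sketched in a few lines. This is only an illustration with a made-up diminishing-returns curve: a desire counts as satiated at the first unit whose marginal utility falls below the effort of obtaining it.

```python
# Minimal sketch of "satiation" as defined above: the marginal utility
# curve is an invented assumption, purely for illustration.

def marginal_utility(n):
    # Diminishing returns: each successive unit is worth half the last.
    return 10.0 / (2 ** n)

def satiation_point(effort_per_unit):
    # Index of the first unit that is no longer worth pursuing.
    n = 0
    while marginal_utility(n) >= effort_per_unit:
        n += 1
    return n

# As noted above, how easy pursuit is determines where satiation lands:
print(satiation_point(effort_per_unit=4.0))   # → 2 (costly: stops sooner)
print(satiation_point(effort_per_unit=0.5))   # → 5 (cheap: stops later)
```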
If you’re hungry you might feel as though you could just keep eating and eating. However, if enough food is available, you’ll stop and hit a point where more food would make you feel worse instead of better. You’ll get hungry again, but part of the cycle includes satiation. For purposes of discussion, I’m talking about most people here, not those with eating disorders or unusual metabolisms that affect their ability to feel satiety.
I think most people have a limit on their desire for status, though that might be more like the situation you describe. Few would turn down a chance to be the world’s Dictator for Life, but they’ve hit a point where trying for more status than they’ve got seems like too much trouble.
Voluntary self-modification also requires a pre-existing desire to self-modify.
People have motives to increase their status, so we can check this box. Of course, this depends on phenotype, and some people do this much more than others.
I wouldn’t take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don’t really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him.
You can’t self-modify to an arbitrary belief, but you can self-modify towards other beliefs that are close to yours in belief space. See my comment about political writers. You can seek out political leaders, political groups, or even just friends, with beliefs slightly more radical than yours along a certain dimension (and you might be inspired to do so with just small exposure to them). Over time, your beliefs may shift.
If X doesn’t offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn’t offend you?
To protect/raise the status of you yourself, or of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group. When you get mad about stuff and complain about it, you feel like you are accomplishing something.
Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.
The problem is that other people only care if you are with them or against them; they don’t care about your calculation.
The second problem is that it can be hard to distinguish these two things. People who have a sufficiently valid beef might be justified in making blame-based demands to stop offending, and your demand that they sound “respectful” and “reasonable” is itself unreasonable. Of course, people without a valid beef will use this exact same reasoning about why you can’t make a “tone argument” against them asking for them to sound more respectful and reasonable.
There might be a correlation between offense and the “validity” of the underlying issue, but this correlation is low enough that it can be hard to predict the validity of the underlying issue from how the offense reaction is expressed, which weakens the utility of the strategy you propose for identifying beefs.
However, your strategy might be useful as a Schelling Point for what sort of demands you’ll accept from others.
One difference between this post and the original essay I wrote, which more people liked, was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop.
It may have been hard to get that message across, because the British salmon example is hypothetical. A real-world example of some group succeeding with claims of offense might be useful.
Okay. I formally admit I’m wrong about the “should usually stop offensive behavior” thing (or, rather, I don’t know if I’m wrong but I formally admit my previous arguments for thinking I was right no longer move me and I now recognize I am confused.)
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
“Request to change” is low status, while “demand to change” is high status. The whole point of taking offense is that some part of your brain detects a threat to your status or an opportunity to increase status, so how can it be “better” to act low status when you feel offended? Well, it may be better if you think you should dis-identify with that part of your brain, and believe that even if some part of your brain cares a lot about status, the real you doesn’t. But you have to make that case, or state that as an assumption, which you haven’t, as far as I can tell (although I haven’t carefully read this whole discussion).
Here’s an example in case the above isn’t clear. Suppose I’m the king of some medieval country, and one of my subjects publicly addresses me without kneeling or calling me “your majesty”. Is it better for me to make a request in the language of harm-minimization (“I’m hurt that you don’t consider me majestic”?), or a demand phrased in the language of offense?
It would be much better for you to make a request in the language of harm-minimization. If you do that sort of thing often, then it may so damage the aura of divine right (or whatever superstition your monarchy rests on) in that country that your descendants will never again be able to perpetrate the sort of crimes that your ancestors committed with impunity.
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
I see at least two huge problems with the harm-minimization approach.
First, it requires interpersonal comparison of harm, which can make sense in very drastic cases (e.g. one person getting killed versus another getting slightly inconvenienced), but it usually makes no sense in controversial disputes such as these.
Second, even if we can agree on the way to compare harm interpersonally, the game-theoretic concerns discussed in this thread clearly show that naive case-by-case harm minimization is unsound, since any case-by-case consequences of decisions can be overshadowed by the implications of the wider incentives and signals they provide. This can lead to incredibly complicated and non-obvious issues, where the law of unintended consequences lurks behind every corner. I have yet to see any consequentialists even begin to grapple with this problem convincingly, on this issue or any other.
We may be talking at cross-purposes. Are you arguing that if someone says something I find offensive, it is more productive for me to respond in the form of “You are a bad person for saying that and I demand an apology” than “I’m sorry, but I was really hurt by your statement and I request you not make it again”?
It depends; there is no universal rule. Either response could be more appropriate in different cases. There are situations where if someone’s statements overstep certain lines, the rational response is to deem this a hostile act and demand an apology with the threat of escalation. There are also situations where it makes sense to ask people to refrain from hurtful statements, since the hurt is non-strategic.
Also, what exactly do you mean by “productive”? People’s interests may be fundamentally opposed, and it may be that the response that better serves the strategic interest of one party can do this only at the other’s expense, with neither of them being in the right in any objective sense.
Maybe the most productive variant is just to ignore the offender/offence?
On a slightly unrelated note, one psychologist I know demonstrated to me that sometimes it’s more useful to agree with the offence on the spot, whatever it is, and just continue with the conversation. So I think in some situations this too may be a viable option.
To protect/raise the status of you yourself, or of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group.
So I can raise the status of my group by becoming a frequent complainer and encouraging my fellows to do likewise?
I won’t say that it never happens. I will say that the success prospects of that sort of strategy have been exaggerated of late.
If X doesn’t offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn’t offend you?
Surely there are a great many reasons other than offense why, for various different things X, it might be (or seem) useful to me to stop you from doing thing X. For example, if thing X is “mocking my beliefs”: if my beliefs are widely respected, I and people like me will have a larger share of influence than if my beliefs are widely mocked.
If X doesn’t offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn’t offend you?
Status games. There’s a satirical blog which addresses this, at least in the context of Western sophisticates:
....the threshold for being offended is a very important tool for judging and ranking white people. Missing an opportunity to be outraged is like missing a reference to Derrida; it’s social death.
ETA: In the context of Islamic reaction to the Mohammed cartoons as well as the burning of a Koran, there may be some value for a demagogue to conjure up atrocities by some demonized enemy in order to unite his (and in this case, it will be “his”) followers. Westerners have done the same sorts of things as well, most obviously in wartime propaganda.
I’m not sure people can voluntarily self-modify in this way. Even if it’s possible, I don’t think most real people getting offended by real issues are primarily doing this.
I think such modification mostly happens on the level of evolution, especially cultural and memetic evolution. Individual humans are adaptation executers who can’t deliberately self-modify in this way, but those who are more pre-modified are more evolutionarily successful.
That is indeed true, but only because people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing’s to be gained.
I’m reminded of how small children might start crying when they trip and fall and scuff their knee, but will only keep crying (and/or escalate) if someone is nearby to pay attention…
people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing’s to be gained.
I agree with what you’re saying and it sounds logical, and I’m just wondering if you (or anyone, actually) would have some experimental evidence from psychology (or any related field) that people do that.
This view does seem to be somewhat intuitive to lesswrongers, but if you try to present it to outsiders, it would be nice if it’s backed by evidence from experimental research.
My real-world working theory on utility monsters of the type you describe is basically to keep in mind that some people are more sensitive than others, but if anyone reaches utility monster levels (roughly indicated by whether I think “this is completely absurd”), I flip the sign on their utility function.
Excuse me, but I think you should recheck your moral philosophy before you get the chance to act on that. Are you sure that shouldn’t be “become indifferent with respect to optimizing their utility function”, or perhaps “rescale their utility function to a more reasonable range”? Because according to my moral philosophy, explicitly flipping the sign of another agent’s utility function and then optimizing is an evil act.
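The difference between the three responses can be made concrete with a toy aggregator. All options and numbers here are invented for illustration (the humming example borrowed from later in the thread); the point is only what each weighting makes the optimizer choose.

```python
# Toy aggregator: total utility = utility to others + weight * utility
# to the suspected utility monster. Options and payoffs are assumptions.

def aggregate(option_utils, monster_weight):
    others, monster = option_utils
    return others + monster_weight * monster

# (utility_to_others, utility_to_monster) for each available action:
options = {
    "hum quietly": (1.0, -50.0),
    "stay silent": (0.0, 0.0),
    "hum loudly":  (1.5, -300.0),
    "mock them":   (-0.5, -500.0),   # bad for everyone, worst for the monster
}

# Three responses: flip the sign, become indifferent, or rescale.
for policy, w in [("flip sign", -1.0), ("ignore", 0.0), ("rescale", 0.01)]:
    best = max(options, key=lambda o: aggregate(options[o], w))
    print(policy, "->", best)
# flip sign -> mock them   (actively maximizes the monster's suffering)
# ignore    -> hum loudly  (monster's utility simply drops out)
# rescale   -> hum quietly (monster still counts, just less)
```

Sign-flipping makes the optimizer sacrifice everyone else’s utility just to hurt the monster, which is the “evil act” objection; zeroing or rescaling the weight avoids that failure mode.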
My own real-world working theory is that if someone I respect in general expresses a sensitivity that I consider completely absurd, I reduce my level of commitment to my process for evaluating the absurdity of sensitivities.
In a world where people make decisions according to this principle, one has the incentive to self-modify into a utility monster who feels enormous suffering at any actions of other people one dislikes for whatever reason.
The incentive is weaker than you seem to suggest. Surely, I gain nothing tangible by inducing people to tiptoe carefully around my minefield. Only a feeling of power, or perhaps some satisfaction at having caused inconvenience to my enemies. So, what is the more fruitful maxim to follow so as to discourage this kind of thing?
Don’t feed the utility monster.
or
Poke the utility monster with a stick until it desensitizes.
Somehow I have to think that poking is a form of capitulation to the manipulation—it is voluntary participation in a manufactured drama.
The incentive is weaker than you seem to suggest. Surely, I gain nothing tangible by inducing people to tiptoe carefully around my minefield.
Yes, you do. If everything unpleasant to you causes you a huge amount of suffering instead of, say, mild annoyance, other people (utilitarians) will abstain from doing things that are unpleasant to you as the negative utility to you outweighs the positive utility to them.
What you say is certainly true if the utility monster is simply exaggerating. But I understood VM to be discussing someone who claims offense where no offense (or negligible offense) actually exists. Or, someone who self-modifies to sincerely feel offended, though originally there was no such sensitivity.
But in any case, the real source of the problem in VM’s scenario is adhering to an ethical system which permits one to be exploited by utility monsters—real or feigned. My own ethical system avoids being exploited because I accept personal disutility so as to produce utility for others only to the extent that they reciprocate. So someone who exaggerates the disutility they derive from, say, my humming may succeed in keeping me silent in their presence, but this success may come at a cost regarding how much attention I pay to their other desires. So the would-be utility monster is only hurting itself by feeding me false information about its utility function.
But I understood VM to be discussing someone who claims offense where no offense (or negligible offense) actually exists.
The crucial point is that the level of offense at a certain action—and I mean real, sincerely felt painful offense, not fake indignation—is not something fixed and independent of the incentives people face. This may seem counterintuitive and paradoxical, but human brains do have functions that are not under direct control of the conscious mind, and are nevertheless guided by rational calculations and thus respond to incentives. People creating drama and throwing tantrums are a prime example: their emotions and distress are completely sincere, and their state of mind couldn’t be further from calculated pretense, and yet whatever it is in their brains that pushes them into drama and tantrums is very much guided by rational strategic considerations.
So, if I understand you, under certain strategic situations (particularly when they enjoy inconveniencing other folk), people will self-modify so as to feel more pain from certain common annoyances. And you, yourself are able to detect when this is happening. And you feel that you can create disincentives against their performing this self-modification by making the annoyances even more common. And you are yourself so rational that you are not subject to the temptation to self-modify yourself (by, say, convincing yourself that someone asking you to take their preferences into account is doing so ultimately because they enjoy inconveniencing you.)
You are now sneering instead of making an honest attempt to understand what I’m writing. (Although, just to be clear, it wasn’t me who downvoted your comment.)
My point is not some arcane insight open only to a superior intellect. On the contrary, examples of it can be seen everywhere in regular life. Kids will throw more tantrums if it always gets them what they want—and a kid throwing a tantrum is not acting, but under genuine distress. Similarly, when you have to deal with people who create drama over petty things, do you think a better strategy is to appease their every whim or to ignore their drama (and thus disincentivize it)? Again, people of this sort are typically not consciously calculated manipulators who fake their distress when they create drama.
So perhaps in these situations a good way to reduce hostility is to emphasize that while you’re opposed to what the other party’s subconscious status calculations are trying to do, you have no beef with their conscious selves. (Though often their conscious selves aren’t completely innocent either.)
I think this is probably a great way to increase hostility if you say it like that, equivalent to “I know it’s your time of the month but you should try to look at this reasonably”
And you feel that you can create disincentives against their performing this self-modification by making the annoyances even more common.
Even as a snide caricature, this is wrong. A lot of commenters here don’t seem to acknowledge three possible responses to claims of offence: to capitulate to them, to ignore them, and to flout them. The last two should not be conflated; the difference between them is the difference between illustrating an article on Muhammad with pictures (scroll down, since this example leans a little bit in the direction of capitulation) and participating in Everybody Draw Muhammad Day.
Respectfully, I do not conflate ignoring and flouting. In a g-g-grandparent, I call these responses ‘not feeding the utility monster’ and ‘poking it with a stick’. Capitulation would correspond to ‘feeding the monster’. I implicitly advocated not feeding the monster; i.e. ignoring the claims of offense.
What I may have done, though, is to conflate VM with one of the many people here who advocate ‘poking them’. If so, I plead guilty with extenuating circumstances; I was seduced by the formal beauty of a side-by-side comparison of two diagnoses of mental malfunction:
They dislike us; we dislike them.
They therefore gain utility by annoying us; we gain utility by annoying them.
They annoy us by inducing us not to draw Mohammed; we annoy them by drawing Mohammed.
But that only annoys if we want to draw Mohammed; and the other only annoys if they despise having people draw Mohammed.
So we self-modify to want to draw Mohammed; and they self-modify to feel real pain when people draw Mohammed.
We accomplish this self-modification by convincing ourselves using arguments involving slippery slopes, lines in the sand, and the defense of freedom, together with an intuitive understanding of their devious psychology and a grasp of game theory. They accomplish this self modification using arguments involving slippery slopes, lines in the sand, and defense of the faith, together with an intuitive understanding of our Satanic psychology and a grasp of game theory.
Only in the sense that a country with secure borders is hurting itself by forfeiting potential gains from trade. If what they want is to avoid being contaminated by your ideas, to avoid being criticized, that minefield is doing its job just fine.
This seems like a very hard calculation. My intuition is that item 2 is more important since it’s a higher level of action, and I’m that kind of guy. But how do I rationally make this computation without my own biases coming in? My own opinions on “draw Mohammed day” have always been quite fuzzy and flip-floppy, for example.
One way is to try and compare similar countries where such offensiveness bans are enforced or not, and see which direction net migration flows.
This may be difficult, since countries without such bans will in all likelihood become more prosperous than those with them.
Another alternative might be comparing the same country before and after such laws, e.g. Pakistan.
“Look at the world”. Always a good answer!
There are many other factors affecting migration. Is it possible to evaluate a single factor’s direct influence?
I don’t know.
Perhaps “freedom of speech” (or whatever variable to call it) is so tightly bundled with other variables—most of all affluence—that it’s impossible to asses properly.
OTOH, if this bundling is evident across nations, cultures and time, it probably means that it truly is an important part of a net desirable society?
I’m not sure people can voluntarily self-modify in this way. Even if it’s possible, I don’t think most real people getting offended by real issues are primarily doing this.
Voluntary self-modification also requires a pre-existing desire to self-modify. I wouldn’t take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don’t really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him. The only point at which I would take such a pill is if I already cared enough about the honor of Mohammed that I was willing to die for him. Since people have risked their lives and earned lots of prison time protesting the Mohammed cartoons, even before they started any self-modification they must have had strong feelings about the issue.
If X doesn’t offend you, why would self-modify to make X offend you to stop people from doing X, since X doesn’t offend you? I think you might be thinking of attempts to create in-group cohesion and signal loyalty by uniting against a common “offensive” enemy, something that I agree is common. But these attempts cannot be phrased in the consequentialist manner I suggested earlier and still work—they depend on a “we are all good, the other guy is all evil” mentality.
Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.
One difference between this post and the original essay I wrote which more people liked was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop. Even if you don’t like the latter part, I think the advice for the former might still be useful.
It’s a Schellingian idea: in conflict situations, it is often a rational strategy to pre-commit to act irrationally (i.e. without regards to cost and benefit) unless the opponent yields. The idea in this case is that I’ll self-modify to care about X far more than I initially do, and thus pre-commit to lash out if anyone does it.
If we have a dispute and I credibly signal that I’m going to flip out and create drama out of all proportion to the issue at stake, you’re faced with a choice between conceding to my demands or getting into an unpleasant situation that will cost more than the matter of dispute is worth. I’m sure you can think of many examples where people successfully get the upper hand in disputes using this strategy. The only way to disincentivize such behavior is to pre-commit credibly to be defiant in face of threats of drama. In contrast, if you act like a (naive) utilitarian, you are exceptionally vulnerable to this strategy, since I don’t even need drama to get what I want, if I can self-modify to care tremendously about every single thing I want. (Which I won’t do if I’m a good naive utilitarian myself, but the whole point is that it’s not a stable strategy.)
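The payoff logic in the paragraph above can be sketched as a toy model. Everything here is an illustrative assumption—the payoff numbers, the function names, and the decision rules are invented for the example, not taken from the discussion:

```python
# Toy model of the drama/pre-commitment game described above.
# All payoff numbers and rules are illustrative assumptions.

DISPUTE_VALUE = 1.0   # what the contested action is worth to each side
DRAMA_COST = 3.0      # cost that a drama blow-up inflicts on the claimant

def naive_utilitarian_concedes(claimed_pain, own_benefit=DISPUTE_VALUE):
    """A naive utilitarian yields whenever the other's claimed pain
    exceeds their own benefit -- regardless of how the claim arose."""
    return claimed_pain > own_benefit

def payoff_to_claimant(claimed_pain, true_pain, opponent_precommitted):
    """Claimant's payoff from escalating a pain claim to `claimed_pain`.
    If the opponent has credibly pre-committed to defiance, an inflated
    claim triggers costly drama instead of a concession."""
    if not opponent_precommitted and naive_utilitarian_concedes(claimed_pain):
        return DISPUTE_VALUE      # concession won: claimant gets their way
    if claimed_pain > true_pain:  # inflated claim meets defiance -> drama
        return -DRAMA_COST
    return 0.0                    # honest modest claim: no concession, no drama

# Against a naive utilitarian, inflating pain claims pays off...
assert payoff_to_claimant(claimed_pain=5.0, true_pain=0.5,
                          opponent_precommitted=False) > 0
# ...but against a credible pre-commitment to defiance, it backfires,
# so the incentive to self-modify into caring enormously disappears.
assert payoff_to_claimant(claimed_pain=5.0, true_pain=0.5,
                          opponent_precommitted=True) < 0
```

The point the sketch makes is the instability claim from the comment: the naive-utilitarian policy is exactly the branch that rewards inflated claims, and removing that branch (pre-commitment) removes the incentive.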
Now, the key point is that such behavior is usually not consciously manipulative and calculated. On the contrary—someone flipping out and creating drama for a seemingly trivial reason is likely to be under God-honest severe distress, feeling genuine pain of offense and injustice. This is a common pattern in human social behavior: humans are extremely good at detecting faked emotions and conscious manipulation, and as a result, we have evolved so that our brains lash out with honest strong emotion that is nevertheless directed by some module that performs game-theoretic assessment of the situation. This of course prompts strategic responses from others, leading to a strategic arms race without end.
The further crucial point is that these game-theoretic calculators in our brains are usually smart enough to assess whether the flipping out strategy is likely to be successful, given what might be expected in response. Basically, it is a part of the human brain that responds to rational incentives even though it’s not under the control of the conscious mind. With this in mind, you can resolve the seeming contradiction between the sincerity of the pain of offense and the fact that it responds to rational incentives.
All this is somewhat complicated when we consider issues of group conflict rather than individual conflict, but the same basic principles apply.
Do you have strategies for distinguishing between game theoretic exaggeration of offense vs. natural offense?
The question is better phrased by asking what will be the practical consequences of treating an offense as legitimate and ceasing the offending action (and perhaps also apologizing) versus treating it as illegitimate and standing your ground (and perhaps even escalating). Clearly, this is a difficult question of great practical value in life, and like every such question, it’s impossible to give a simple and universally applicable answer. (And of course, even if you know the answer in some concrete situation, you’ll need extraordinary composure and self-control to apply it if it’s contrary to your instinctive reaction.)
I don’t see the distinction you’re trying to make.
Tentatively—game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.
However, there’s another sort of breakdown of negotiations that just occurred to me. If A asks for less than they want because they think that’s all they can get and/or they’re trying to do a utilitarian calculation, they aren’t going to be happy even if they get it. This means they’re likely to push for more, and then they start looking like a utility monster.
What do you mean by “satiated”?
From a utilitarian/consequentialist point of view, a desire being “satiated” simply means that the marginal utility gains from pursuing it further are less than opportunity cost of however much effort it takes.
Note that by this definition when a desire is satiated depends on how easy it is to pursue.
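This definition can be made concrete with a minimal sketch, assuming a diminishing-returns utility curve (the curve shape and cost numbers are invented for illustration):

```python
# Toy illustration of the definition above: a desire is "satiated" once
# the marginal utility of pursuing it further drops below the effort cost.
# The utility curve and cost values are illustrative assumptions.

def marginal_utility(units_consumed):
    """Diminishing returns: each successive unit is worth half as much."""
    return 1.0 / (2 ** units_consumed)

def satiation_point(effort_cost_per_unit):
    """Consume while the next unit is still worth more than it costs."""
    units = 0
    while marginal_utility(units) > effort_cost_per_unit:
        units += 1
    return units

# As noted above, satiation depends on how easy the pursuit is:
# the same desire satiates sooner when each unit takes more effort.
assert satiation_point(effort_cost_per_unit=0.3) > satiation_point(effort_cost_per_unit=0.6)
```

Nothing hinges on the particular halving curve; any diminishing-returns curve gives the same qualitative result, which is the point of the comment it follows.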
If you’re hungry you might feel as though you could just keep eating and eating. However, if enough food is available, you’ll stop and hit a point where more food would make you feel worse instead of better. You’ll get hungry again, but part of the cycle includes satiation. For purposes of discussion, I’m talking about most people here, not those with eating disorders or unusual metabolisms that affect their ability to feel satiety.
I think most people have a limit on their desire for status, though that might be more like the situation you describe. Few would turn down a chance to be the world’s Dictator for Life, but they’ve hit a point where trying for more status than they’ve got seems like too much trouble.
People have motives to increase their status, so we can check this box. Of course, this depends on phenotype, and some people do this much more than others.
You can’t self-modify to an arbitrary belief, but you can self-modify towards other beliefs that are close to yours in belief space. See my comment about political writers. You can seek out political leaders, political groups, or even just friends, with beliefs slightly more radical than yours along a certain dimension (and you might be inspired to do so with just small exposure to them). Over time, your beliefs may shift.
To protect/raise the status of you yourself, or of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group. When you get mad about stuff and complain about it, you feel like you are accomplishing something.
The problem is that other people only care if you are with them or against them; they don’t care about your calculation.
The second problem is that it can be hard to distinguish these two things. People who have a sufficiently valid beef might be justified in making blame-based demands to stop offending, and your demand that they sound “respectful” and “reasonable” is itself unreasonable. Of course, people without a valid beef will use this exact same reasoning about why you can’t make a “tone argument” against them asking for them to sound more respectful and reasonable.
There might be a correlation between offense and the “validity” of the underlying issue, but this correlation is low enough that it can be hard to predict the validity of the underlying issue from how the offense reaction is expressed, which weakens the utility of the strategy you propose for identifying beefs.
However, your strategy might be useful as a Schelling Point for what sort of demands you’ll accept from others.
It may have been tough to get the message, because the British salmon example is hypothetical. A real-world example of some group succeeding in claims of offense might be useful.
Okay. I formally admit I’m wrong about the “should usually stop offensive behavior” thing (or, rather, I don’t know if I’m wrong but I formally admit my previous arguments for thinking I was right no longer move me and I now recognize I am confused.)
I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don’t know if anyone is challenging that.
“Request to change” is low status, while “demand to change” is high status. The whole point of taking offense is that some part of your brain detects a threat to your status or an opportunity to increase status, so how can it be “better” to act low status when you feel offended? Well, it may be better if you think you should dis-identify with that part of your brain, and believe that even if some part of your brain cares a lot about status, the real you doesn’t. But you have to make that case, or state it as an assumption, which you haven’t, as far as I can tell (although I haven’t carefully read this whole discussion).
Here’s an example in case the above isn’t clear. Suppose I’m the king of some medieval country, and one of my subjects publicly addresses me without kneeling or calling me “your majesty”. Is it better for me to request him to do so in the language of harm-minimization (“I’m hurt that you don’t consider me majestic”?), or to make a demand phrased in the language of offense?
It would be much better for you to make a request in the language of harm-minimization. If you do that sort of thing often, then it may so damage the aura of divine right (or whatever superstition your monarchy rests on) in that country that your descendants will never again be able to perpetrate the sort of crimes that your ancestors committed with impunity.
I see at least two huge problems with the harm-minimization approach.
First, it requires interpersonal comparison of harm, which can make sense in very drastic cases (e.g. one person getting killed versus another getting slightly inconvenienced), but it usually makes no sense in controversial disputes such as these.
Second, even if we can agree on the way to compare harm interpersonally, the game-theoretic concerns discussed in this thread clearly show that naive case-by-case harm minimization is unsound, since any case-by-case consequences of decisions can be overshadowed by the implications of the wider incentives and signals they provide. This can lead to incredibly complicated and non-obvious issues, where the law of unintended consequences lurks behind every corner. I have yet to see any consequentialists even begin to grapple with this problem convincingly, on this issue or any other.
We may be talking at cross-purposes. Are you arguing that if someone says something I find offensive, it is more productive for me to respond in the form of “You are a bad person for saying that and I demand an apology” than “I’m sorry, but I was really hurt by your statement and I request you not make it again”?
It depends; there is no universal rule. Either response could be more appropriate in different cases. There are situations where if someone’s statements overstep certain lines, the rational response is to deem this a hostile act and demand an apology with the threat of escalation. There are also situations where it makes sense to ask people to refrain from hurtful statements, since the hurt is non-strategic.
Also, what exactly do you mean by “productive”? People’s interests may be fundamentally opposed, and it may be that the response that better serves the strategic interest of one party can do this only at the other’s expense, with neither of them being in the right in any objective sense.
Maybe the most productive variant is just to ignore the offender/offence?
On a slightly unrelated note, one psychologist I know has demonstrated to me that sometimes it’s more useful to agree with the offence on the spot, whatever it is, and just continue with the conversation. So I think in some situations this too may be a viable option.
So I can raise the status of my group by becoming a frequent complainer and encouraging my fellows to do likewise?
I won’t say that it never happens. I will say that the success prospects of that sort of strategy have been exaggerated of late.
Sure. See, for example, the rise in prominence of the Gnu Atheists (of which I am one).
Surely there are a great many reasons other than offense why, for various different things X, it might be (or seem) useful to me to stop you from doing thing X. For example, if thing X is “mocking my beliefs”: if my beliefs are widely respected, I and people like me will have a larger share of influence than if my beliefs are widely mocked.
Status games. There’s a satirical blog which addresses this, at least in the context of Western sophisticates:
ETA: In the context of Islamic reaction to the Mohammed cartoons as well as the burning of a Koran, there may be some value for a demagogue to conjure up atrocities by some demonized enemy in order to unite his (and in this case, it will be “his”) followers. Westerners have done the same sorts of things as well, most obviously in wartime propaganda.
I think such modification mostly happens on the level of evolution, especially cultural and memetic evolution. Individual humans are adaptation executers who can’t deliberately self-modify in this way, but those who are more pre-modified are more evolutionarily successful.
I’m reminded of how small children might start crying when they trip and fall and scuff their knee, but will only keep on (and/or escalate) crying if someone is nearby to pay attention…
I agree with what you’re saying and it sounds logical, and I’m just wondering if you (or anyone, actually) would have some experimental evidence from psychology (or any related field) that people do that.
This view does seem to be somewhat intuitive to lesswrongers, but if you try to present it to outsiders, it would be nice if it’s backed by evidence from experimental research.
So anyone?
My real-world working theory on utility monsters of the type you describe is basically to keep in mind that some people are more sensitive than others, but if anyone reaches utility monster levels (roughly indicated by whether I think “this is completely absurd”), I flip the sign on their utility function.
Excuse me, but I think you should recheck your moral philosophy before you get the chance to act on that. Are you sure that shouldn’t be “become indifferent with respect to optimizing their utility function”, or perhaps “rescale their utility function to a more reasonable range”? Because according to my moral philosophy, explicitly flipping the sign of another agent’s utility function and then optimizing is an evil act.
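The distinction between the three options can be made precise with a minimal sketch. The function names, the weighting rules, and the numeric claim are all illustrative assumptions invented for the example:

```python
# Three possible responses to a utility monster's reported utilities,
# expressed as weighting rules. All names and numbers are illustrative.

def flip_sign(reported_utility):
    """Treat the monster's claimed suffering as a reason to act."""
    return -reported_utility

def become_indifferent(reported_utility):
    """Drop the monster's claims from the calculation entirely."""
    return 0.0

def rescale(reported_utility, cap=1.0):
    """Clip reported utilities into a 'reasonable' range,
    keeping their direction but bounding their weight."""
    return max(-cap, min(cap, reported_utility))

monster_claim = -1000.0  # enormous claimed suffering from a trivial action

# Sign-flipping converts their claimed suffering into positive utility
# for you -- i.e. it actively rewards antagonizing them:
assert flip_sign(monster_claim) > 0
# Indifference merely ignores the claim:
assert become_indifferent(monster_claim) == 0.0
# Rescaling still counts it against the action, but only a little:
assert rescale(monster_claim) == -1.0
```

This is why the two proposed corrections differ in kind: indifference and rescaling stop the exploit, while sign-flipping turns the optimizer into an active antagonist.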
My own real-world working theory is that if someone I respect in general expresses a sensitivity that I consider completely absurd, I reduce my level of commitment to my process for evaluating the absurdity of sensitivities.
So you consider it to be a major source of positive utility to antagonize them?
Tongue-in-cheek, yes.
The incentive is weaker than you seem to suggest. Surely, I gain nothing tangible by inducing people to tiptoe carefully around my minefield. Only a feeling of power, or perhaps some satisfaction at having caused inconvenience to my enemies. So, what is the more fruitful maxim to follow so as to discourage this kind of thing?
Don’t feed the utility monster.
or
Poke the utility monster with a stick until it desensitizes.
Somehow I have to think that poking is a form of capitulation to the manipulation—it is voluntary participation in a manufactured drama.
Yes, you do. If everything unpleasant to you causes you a huge amount of suffering instead of, say, mild annoyance, other people (utilitarians) will abstain from doing things that are unpleasant to you as the negative utility to you outweighs the positive utility to them.
What you say is certainly true if the utility monster is simply exaggerating. But I understood VM to be discussing someone who claims offense where no offense (or negligible offense) actually exists. Or, someone who self-modifies to sincerely feel offended, though originally there was no such sensitivity.
But in any case, the real source of the problem in VM’s scenario is adhering to an ethical system which permits one to be exploited by utility monsters—real or feigned. My own ethical system avoids being exploited because I accept personal disutility so as to produce utility for others only to the extent that they reciprocate. So someone who exaggerates the disutility they derive from, say, my humming may succeed in keeping me silent in their presence, but this success may come at a cost regarding how much attention I pay to their other desires. So the would-be utility monster is only hurting itself by feeding me false information about its utility function.
The crucial point is that the level of offense at a certain action—and I mean real, sincerely felt painful offense, not fake indignation—is not something fixed and independent of the incentives people face. This may seem counterintuitive and paradoxical, but human brains do have functions that are not under direct control of the conscious mind, and are nevertheless guided by rational calculations and thus respond to incentives. People creating drama and throwing tantrums are a prime example: their emotions and distress are completely sincere, and their state of mind couldn’t be further from calculated pretense, and yet whatever it is in their brains that pushes them into drama and tantrums is very much guided by rational strategic considerations.
So, if I understand you, under certain strategic situations (particularly when they enjoy inconveniencing other folk), people will self-modify so as to feel more pain from certain common annoyances. And you, yourself are able to detect when this is happening. And you feel that you can create disincentives against their performing this self-modification by making the annoyances even more common. And you are yourself so rational that you are not subject to the temptation to self-modify yourself (by, say, convincing yourself that someone asking you to take their preferences into account is doing so ultimately because they enjoy inconveniencing you.)
I guess I understand your point now.
You are now sneering instead of making an honest attempt to understand what I’m writing. (Although, just to be clear, it wasn’t me who downvoted your comment.)
My point is not some arcane insight open only to a superior intellect. On the contrary, examples of it can be seen everywhere in regular life. Kids will throw more tantrums if it always gets them what they want—and a kid throwing a tantrum is not acting, but under genuine distress. Similarly, when you have to deal with people who create drama over petty things, do you think a better strategy is to appease their every whim or to ignore their drama (and thus disincentivize it)? Again, people of this sort are typically not consciously calculated manipulators who fake their distress when they create drama.
So perhaps in these situations a good way to reduce hostility is to emphasize that while you’re opposed to what the other party’s subconscious status calculations are trying to do, you have no beef with their conscious selves. (Though often their conscious selves aren’t completely innocent either.)
I think this is probably a great way to increase hostility if you say it like that, equivalent to “I know it’s your time of the month but you should try to look at this reasonably”
Even as a snide caricature, this is wrong. A lot of commenters here don’t seem to acknowledge that there are three possible responses to claims of offence: to capitulate to them, to ignore them, and to flout them. The last two should not be conflated; the difference between them is the difference between illustrating an article on Muhammad with pictures (scroll down, since this example leans a little bit in the direction of capitulation) and participating in Everybody Draw Muhammad Day.
Respectfully, I do not conflate ignoring and flouting. In a g-g-grandparent, I call these responses ‘not feeding the utility monster’ and ‘poking it with a stick’. Capitulation would correspond to ‘feeding the monster’. I implicitly advocated not feeding the monster; i.e. ignoring the claims of offense.
What I may have done, though, is to conflate VM with one of the many people here who advocate ‘poking them’. If so, I plead guilty with extenuating circumstances; I was seduced by the formal beauty of a side-by-side comparison of two diagnoses of mental malfunction:
They dislike us; we dislike them.
They therefore gain utility by annoying us; we gain utility by annoying them.
They annoy us by inducing us not to draw Mohammed; we annoy them by drawing Mohammed.
But that only annoys if we want to draw Mohammed; and the other only annoys if they despise having people draw Mohammed.
So we self-modify to want to draw Mohammed; and they self-modify to feel real pain when people draw Mohammed.
We accomplish this self-modification by convincing ourselves using arguments involving slippery slopes, lines in the sand, and the defense of freedom, together with an intuitive understanding of their devious psychology and a grasp of game theory. They accomplish this self modification using arguments involving slippery slopes, lines in the sand, and defense of the faith, together with an intuitive understanding of our Satanic psychology and a grasp of game theory.
And a jolly time is had by all.
I think that this is what happened. There are people here who have advocated poking them, and I agree with you about that. But VM is not one of them.
I like your comparison.
Only in the sense that a country with secure borders is hurting itself by forfeiting potential gains from trade. If what they want is to avoid being contaminated by your ideas, to avoid being criticized, that minefield is doing its job just fine.