Signaling Strategies and Morality

I am far from convinced that people in general wish to be seen as caring more about morality than they actually do. If this were the case, why would the persistent claim that people are, and logically must be, egoists have survived strong counter-arguments for so long? That claim appears to me to be a way of signaling freedom from excessive, low-status moral scruples.

It seems to me that the desire to signal as much morality as possible is held by a minority of women and by a small minority of men. Those people are also the main people who talk about morality. This is a common problem in the development of thought: people with an interest in verbally discussing a subject may have systematically atypical attitudes towards it. Of course, the issue is further complicated by the fact that people don't agree on what broad type of thing morality is.

The conflict within philosophy between Utilitarians and Kantians is among the most famous examples of this disagreement. [Haidt's contrast between conservative and liberal morality](http://people.virginia.edu/~jdh6n/moraljudgment.html) is another. Major, usually implicit, disagreements concern whether morality is supposed to serve as a decision system, as a set of constraints on a decision system, or as one set of reasons that should influence a person alongside prudential, honor, spontaneity, authenticity, and other such types of reasons.

It seems to me that people usually want to signal whatever gives others the most reason to respect their interests. Roughly, this amounts to wanting to signal what Haidt calls conservative morality. Basically, people would like to signal "I am slightly more committed to the group's welfare, particularly to that of its weakest members (caring), than most of its members are. If you suffer a serious loss of status/well-being I will still help you in order to display affiliation to the group, even though you will no longer be in a position to help me. I am substantially more kind and helpful to the people I like (loyal) and substantially more vindictive and aggressive towards those I dislike (honorable, ignored by Haidt). I am generally stable in who I like (loyalty and identity, implying low cognitive cost for allies and a low-variance long-term investment). I am much more capable and popular than most members of the group, demand appropriate consideration, and grant appropriate consideration to those more capable than myself (status/hierarchy). I adhere to simple taboos (not disgusting) so that my reputation and health are secure and so that I am unlikely to contaminate the reputations or health of my friends. I currently like you and dislike your enemies, but I am somewhat inclined towards ambivalence about whether I like you right now, so the pay-off would be very great for you if you were to expend resources pleasing me and get me into the stable 'liking you' region of my possible attitudinal space. Once there, I am likely to make a strong commitment to a friendly attitude towards you rather than wasting cognitive resources checking a predictable parameter among my set of derivative preferences."

An interesting point here is that this suggests a trade-off in the level of intelligence a person wishes to signal. In this model, intelligence is necessary to distinguish between costly/genuine and cheap/fake signals of affiliation and to be effective as a friend or as an enemy. For these reasons, people want to be seen as somewhat more intelligent than the average member of the group. People also want to appear slightly less intelligent than whoever they are addressing, in order to avoid appearing unpredictable.

This is plausibly a multiple-equilibrium model. You can appear slightly more or less intelligent than you are with effort, confidence, and affectation. Trying to appear much less intelligent than you are is difficult, as you must essentially simulate one system with another system, which implies an overhead cost. If you can't appear to be only a little more intelligent than the higher-status members of the group, who typically have modestly above-average intelligence, you can't easily be a trusted ally of the people you most need to ally with. If you can't effectively show yourself to be a predictable ally of individuals, you may want to show yourself to be a predictable ally of the group, by predictably following rules (justice) and by predictably serving its collective interests (caring). That allows less intelligent members of the group to outsource the task of scrutinizing your loyalty. People can more easily communicate indicators of group disloyalty by asserting that you have broken a rule, so people who can't be conservatively moral will attend more closely to rules. On this model, Haidt's liberalism (which I believe includes libertarianism) is a consequence of difficulty in credibly signaling personal loyalties, and thus of having to overemphasize caring and what he calls justice, by which he means rule-following.

In America, the explicit rules people are given are descended from a frontier setting where independence was of great practical importance and where a morality with very strong acts/omissions distinctions was sufficient to satisfy collective needs with low administrative costs and easy cheater detection. Leaving others alone (and, implicitly, tolerance) rather than enforcing purity works well when large distances make good neighbors. As a result, the explicit rules people are taught de-emphasize status/hierarchy and disgust, to a lesser degree loyalty and identity, and to a still lesser extent caring. When difficulty in behaving predictably forces emphasis onto justice, i.e. rules, liberal morality, or ultimately libertarian morality, is the result.