A punishment is when one agent (the punisher) imposes costs on another (the punished) in order to affect the punished’s behavior. In a Society where thieves are predictably imprisoned and lashed, people will predictably steal less than they otherwise would, for fear of being imprisoned and lashed.
Punishment is often imposed by formal institutions like police and judicial systems, but need not be. A controversial orator who finds a rock thrown through her window can be said to have been punished in the same sense: in a Society where controversial orators predictably get rocks thrown through their windows, people will predictably engage in less controversial speech, for fear of getting rocks thrown through their windows.
In the most basic forms of punishment, which we might term “physical”, the nature of the cost imposed on the punished is straightforward. No one likes being stuck in prison, or being lashed, or having a rock thrown through her window.
But subtler forms of punishment are possible. Humans are an intensely social species: we depend on friendship and trade with each other in order to survive and thrive. Withholding friendship or trade can be its own form of punishment, no less devastating than a whip or a rock. This is called “social punishment”.
Effective social punishment usually faces more complexities of implementation than physical punishment, because of the greater number of participants needed in order to have the desired deterrent effect. Throwing a rock only requires one person to have a rock; effectively depriving a punishment-target of friendship may require many potential friends to withhold their beneficence.
How is the collective effort of social punishment to be coordinated? If human Societies were hive-minds featuring an Authority that could broadcast commands to be reliably obeyed by the hive’s members, then there would be no problem. If the hive-queen wanted to socially punish Mallory, she could just issue a command, “We’re giving Mallory the silent treatment now”, and her majesty’s will would be done.
No such Authority exists. But while human Societies lack a collective will, they often have something much closer to collective beliefs: shared maps that (hopefully) reflect the territory. No one can observe enough or think quickly enough to form her own independent beliefs about everything. Most of what we think we know comes from others, who in turn learned it from others. Furthermore, one of our most decision-relevant classes of belief concerns the character and capabilities of other people with whom we might engage in friendship or trade relations.
As a consequence, social punishment is typically implemented by means of reputation: spreading beliefs about the punishment-target that merely imply that benefits should be withheld from the target, rather than by directly coordinating explicit sanctions. Social punishers don’t say, “We’re giving Mallory the silent treatment now.” (Because, who’s we?) They simply say that Mallory is stupid, dishonest, cruel, ugly, &c. These are beliefs that, if true, imply that people will do worse for themselves by helping Mallory. (If Mallory is stupid, she won’t be as capable of repaying favors. If she’s dishonest, she might lie to you. If she’s cruel … &c.) Negative-valence beliefs about Mallory double as “social punishments”, because if those beliefs appear on shared maps, the predictable consequence will be that Mallory will be deprived of friendship and trade opportunities.
We notice a critical difference between social punishments and physical punishments. Beliefs can be true or false. A rock or a jail cell is not a belief. You can’t say that the rock is false, but you can say it’s false that Mallory is stupid.
The linkage between collective beliefs and social punishment creates distortions that are important to track. People have an incentive to lie to prevent negative-valence beliefs about themselves from appearing on shared maps (even if the beliefs are true). People who have enemies whom they hate have an incentive to lie to insert negative-valence beliefs about their enemies onto the shared map (even if the beliefs are false). The stakes are high: an erroneously thrown rock only affects its target, but an erroneous map affects everyone using that map to make decisions about the world (including decisions about throwing rocks).
Intimidated by the stakes, some actors in Society who understand the similarity between social and physical punishment, but don’t understand the relationship between social punishment and shared maps, might try to take steps to limit social punishment. It would be bad, they reason, if people were trapped in a spiral of retaliatory physical punishments. Nobody wins if I throw a rock through your window to retaliate for you throwing a rock through my window, &c. Better to foresee that and make sure no one throws any rocks at all, or at least not big ones. They imagine that they can apply the same reasoning to social punishments without paying any costs to the accuracy of shared maps—that we can account for social standing and status in our communication without sacrificing any truthseeking.
It’s mostly an illusion. If Alice possesses evidence that Mallory is stupid, dishonest, cruel, ugly, &c., she might want to publish that evidence in order to improve the accuracy of shared maps of Mallory’s character and capabilities. If the evidence is real and its recipients understand the filters through which it reached them, publishing the evidence is prosocial, because it helps people make higher-quality decisions regarding friendship and trade opportunities with Mallory.
But it also functions as social punishment. If Alice tries to disclaim, “Look, I’m not trying to ‘socially punish’ Mallory; I’m just providing evidence to update the part of the shared map which happens to be about Mallory’s character and capabilities”, then Bob, Carol, and Dave probably won’t find the disclaimer very convincing.
And yet—might not Alice be telling the truth? There are facts of the matter that are relevant to whether Mallory is stupid, dishonest, cruel, ugly, &c.! (Even if we’re not sure where to draw the boundary of dishonest, if Mallory said something false, and we can check that, and she knew it was false, and we can check that from her statements elsewhere, that should make people more likely to affirm the dishonest characterization.) Those words mean things! They’re not rocks—or not only rocks. Is there any way to update the shared map without the update itself being construed as “punishment”?
It’s questionable. One might imagine that by applying sufficient scrutiny to nuances of tone and word choice, Alice might succeed at “neutrally” conveying the evidence in her possession without any associated scorn or judgment.
But judgments supervene on facts and values. If lying is bad, and Mallory lied, it logically follows that Mallory did a bad thing. There’s no way to avoid that implication without denying one of the premises. Nuances of tone and wording that seem to convey an absence of judgment might only succeed at doing so by means of obfuscation: strained abuses of language whose only function is to make it less clear to the inattentive reader that the thing Mallory did was lying.
At best, Alice might hope to craft the publication of the evidence in a way that omits her own policy response. There is a real difference between merely communicating that Mallory is stupid, dishonest, cruel, ugly, &c. (with the understanding that other people will use this information to inform their policies about trade opportunities), and furthermore adding that “therefore I, Alice, am going to withhold trade opportunities from Mallory, and withhold trade opportunities from those who don’t withhold trade opportunities from her.” The additional information about Alice’s own policy response might be exposed by fiery rhetoric choices and concealed by more clinical descriptions.
Is that enough to make the clinical description not a “social punishment”? Personally, I buy it, but I don’t think Bob, Carol, or Dave do.
This feels like it’s missing the most common form of “social punishment”, which is just a threat to take resources from you at some distant point in the future, in a way that ultimately relies on physical force but does so through many intermediaries. I agree the map-distorting kind of social punishment is real, but also, lots of social punishment is of the form “I think X is bad, and I will use my ability to steer our collective efforts in the direction of harming X”.
A single step removed, this might simply be someone saying “X is bad, and if I see you associating with X I will come and throw stones through your window”. Another step removed it becomes “X is bad, and I will vote to remove X from our professional association which is necessary for them to do business”. Another step removed it becomes “X is bad, and I am spending my social capital, which is a shared ledger we vaguely keep track of, to reduce the degree to which X gets access to shared resources—and the basis of that social capital is some complicated system of hard power and threats that in some distant past had something to do with physical violence but has long since become its own game”.
I don’t think most social punishment is best modeled as map distortion. Indeed, I notice in your list above you suspiciously do not list the most common kind of attribute ascribed to someone facing social punishment: “X is bad” or “X sucks” or “X is evil”. Those are indeed different statements, and those statements should usually more accurately be interpreted as a threat in a game grounded in social capital—a game more grounded in physical violence and property rights than in map distortion.
I’m inclined to still count this under “judgments supervene on facts and values.” Why is X bad, sucky, evil? These things can’t be ontologically basic. Perhaps less articulate members of a mass punishment coalition might not have an answer (“He just is; what do you mean ‘why’? You’re not an X supporter, are you?”), but somewhere along the chain of command, I expect their masters to offer some sort of justification with some sort of relationship to checkable facts in the real world: “stupid, dishonest, cruel, ugly, &c.” being the examples I used in the post; we could keep adding to the list with “fascist, crazy, cowardly, disloyal, &c.” but I think you get the idea.
The justification might not be true; as I said in the post, people have an incentive to lie. But the idea that “bad, sucks, evil” are just threats within a social capital system without any even pretextual meaning outside the system flies in the face of experience that people demand pretexts.
I agree that in common parlance there is still some ontological confusion going on here, but I think it’s largely a sideshow to what is happening.
If there were a culture in the world that had an expression that more straightforwardly meant “I curse you”—and so wasn’t making claims about checkable attributes of the other person—and that expression was commonly used where we use statements like “You suck”, I don’t think that culture would be very different from ours. Indeed, “I curse you”, or the more common “fuck you”, is a thing people say (or in the former case used to say), and it works, and usually has very similar effects to saying “you suck”, despite the latter being ontologically a very different kind of statement if taken literally.
I agree that there is often also a claim smuggled in about some third-party checkable attribute. This is IMO not that crazy. Indeed, a curse/direct-insult is often associated with some checkable facts, and so calling attention to both makes it efficient to combine them.
It is indeed common that if you were wronged by someone by your own lights, this is evidence that other people will be wronged by their lights as well, and so that there will be some third-party-checkable attribute of the person that generalizes. So it’s not that surprising that these two kinds of actions end up with shared language (and my guess is there are also benefits in terms of plausible deniability about how much social capital you end up spending that encourage people to conflate here, but this doesn’t change the fact that the pure curse kind of expression exists and is a crucial thing to model to make accurate predictions here).
I admit I am a bit confused about the thesis here… I get that accurate behavioral accounting is sometimes tightly related to social punishment such that the attempt to give or defend oneself from punishment provides incentive to lie about the behavior (and attempts to describe the behavior have direct implications for punishment).
But are you further claiming that all social punishment is identical[1] to truth-claims about other things (i.e. “reasons for the punishment”)? This seems like an ideal that I aspire to, but not how most people relate to social punishment, where social ostracism can sometimes simply be a matter of fashion or personal preference.
Personally I use phrases like “X is lame” or “X isn’t cool” to intentionally and explicitly set the status of things. I endeavor to always have good reasons for why and to provide them (or at least to have them ready if requested), but the move itself does not require justification in order to successfully communicate that something is having its status lowered or is something that I oppose. People would often happily just accept the status-claims without reasons, similar to learning what is currently ‘in fashion’.
On reflection I don’t quite mean identical to, but something more like “Is a deterministic function of truth-claims about good/bad behavior, taking that-and-only-that as input”.
The distinction between “positive punishment” and “negative punishment” is useful here, and I think a lot of the confusion around this topic comes from conflating the two—both intentionally and otherwise.
If you hit me for no reason, “positive punishment” would be hitting you back in hopes that you stop hitting me. I have to actually want you to hurt, and it can easily spiral out of control if you hit me for hitting you for hitting me.
“Negative punishment” would be just not hanging out with people who hit me, because I don’t like hanging out with people who hit me. I don’t have to want you to hurt at all in order to do this, in the same way that I love my puppy and don’t hold anything against her, but when she’s jumping on me so much that I can’t work I might have to lock her out of my room. Even if you get offended and decide to respond in kind with some negative punishment of your own, that just means you decide to stop hanging out with me too. Which obviously isn’t a problem. And heck, by your (IMO appropriate) definition of “punishment” this isn’t even punishment because it’s not done in order to affect anyone’s behavior. It’s just choosing to abstain from negative value interactions.
We can’t restrict “negative punishment” without restricting freedom of association and freedom of expression, and we also don’t have to because sharing truth and making good choices are good, and there’s no threat of spiraling out of control. It may hurt a lot to be locked out of all the fun spaces, and it may feel like a punishment in the operant conditioning sense, but that doesn’t mean there’s any intent to punish or that it is punishment in the sense that’s relevant for this post.
What we have to be careful about is when people try to claim to be doing freedom of association/expression (“negative punishment”) while actually intending to do positive punishment. This comes up a lot in the debates between “You’re trying to stifle free speech!” and “Free speech doesn’t mean freedom from consequences!”/“I’m just using my free speech to criticize yours!”. If you’re responding to obnoxious speech with speech like “I’m gonna stone you if you don’t shut up”, then you’re obviously trying to conflate threats of violent positive punishment with “merely freedom of expression”, but it gets much more subtle when you say “Ugh, I don’t see how any decent person could listen to that guy”. Because is that an expression of curiosity from someone who would love to fill in their ignorance with empathy and understanding? Someone who harbors no ill will, just doesn’t find that guy interesting? Or is it someone who actively dislikes the person speaking, and would like to see them change their behavior, and even hurt in order to do so?
This attempt to hurt people in order to change their behavior is positive punishment masquerading as negative punishment, and as such has all the same problems with positive punishment. If I try to give you the silent treatment because you didn’t say you liked my new shirt, and you give me the silent treatment back, then it can easily escalate into losing a friendship that if we’re honest we both wanted. Because it was never actually “I don’t find any value here, so I’m pulling back”, it was “I’m gonna pull back anyway, in hopes of hurting him enough to change his behavior”.
People like Bob, Carol, and Dave are indeed at risk of confusing genuinely prosocial freedom of association and expression with positive punishment, because people like Alice are at risk of doing the latter while pleading the former.
However, they’re also likely to recognize it as sincere if Alice looks more like she’s doing the former than the latter. If they don’t find out what Mallory did until they ask Alice why she doesn’t hang out with Mallory anymore, they’re unlikely to see her answer as punishment, for example. Similarly, if she comes off more like “Careful with the puppy, she’s friendly but sometimes too friendly!”, that’s technically communicating a bad thing, but it comes off very differently than if she were to get visibly upset and say “That dog is not well disciplined, it’s not a good dog and you should know that”.
It’s not always clear whether a person is genuinely “just sharing information” or secretly trying to positively punish, but they are indeed distinct things, and having the distinction clear makes it easier to judge.
One of my favourite LW post types is “something I had a vague sense of, identified and fleshed out”.
This is a great example of that.
There’s privacy: boundaries on the kinds of facts that should be publicly drawn on the shared maps. Not breaking into homes, but also not publishing transcripts of private conversations, and not reading others’ minds. It would be a coherent policy to strive to withhold evidence relevant to some forms of social punishment, generally not letting it get on the shared maps. Which in particular means that the opposing evidence (with positive valence) must also be withheld.
In my experience, people thinking badly about me isn’t upstream of the social punishment of losing resources, people thinking badly of me IS the social punishment.
Evolutionarily speaking, I would guess that worrying about what people think of me is important in order to allow me to get resources, but my emotions are implemented on the social level, not the resource level.
(E.g., I don’t know any LWers IRL and will get very few resources from them, but it would still feel bad if people had a bad opinion of me.)
>A punishment is when one agent (the punisher) imposes costs on another (the punished) in order to affect the punished’s behavior.
If a person punishes another by subtracting the other’s life, this is not done to affect the other’s behavior.
Isn’t it, though?
It could be incapacitation. Incapacitation and deterrence are both “affecting the other’s behavior” in a sense, but the examples in the OP suggest you mean deterrence. (Meanwhile, PeteG’s sibling comment seems to only be considering ‘affecting behavior’ to mean incapacitation.)
(… maybe you’re reserving “punishment” to mean only deterrence and so saying, if A punishes B by killing them that’s by definition done to affect B’s behavior? I don’t understand what’s going on in this thread.)
Like I said, some people would punish by killing not to affect the behavior of the punished (neither to deter nor to incapacitate), but because they would see it as the morally right thing to do, given the crime.
Zack, you are mistaken about highlighting Nick’s sentence as “hitting the mark”.
Not everyone wants to kill with the intent of affecting the behavior of the punished, which in this case would be canceling all future behaviors. Some might want to punish by killing because they feel that is the proper response to the crime of the punished. Even if the punisher somehow knows that the one they are erasing will never behave that way again. Such people see certain behaviors as a permanent stain on a person’s life record and they believe the only correct punishment is to end them.
And by the same token, subsequent punishment would be prosocial too. Why, then, would Alice want to disclaim it? Because, of course, in reality the facts of the matter about whether somebody deserves punishment are rarely unambiguous, so it makes sense for people to hedge. But that’s basically wanting to have her cake and eat it too.
The honorable thing for Alice to do would be to weigh the reliability of the evidence that she possesses, and disclose it only if she thinks that it’s sufficient to justify the likely punishment that would follow it. No amount of nuances of wording and tone could replace this essential consideration.