“Some Basic Level of Mutual Respect About Whether Other People Deserve to Live”?!
In 2015, Autistic Abby on Tumblr shared a viral piece of wisdom about subjective perceptions of “respect”:
Sometimes people use “respect” to mean “treating someone like a person” and sometimes they use “respect” to mean “treating someone like an authority”
and sometimes people who are used to being treated like an authority say “if you won’t respect me I won’t respect you” and they mean “if you won’t treat me like an authority I won’t treat you like a person”
and they think they’re being fair but they aren’t, and it’s not okay.
There’s the core of an important insight here, but I think it’s being formulated too narrowly. Abby presents the problem as being about one person strategically conflating two different meanings of respect (if you don’t respect me in the strong sense, I won’t even respect you in the weak sense). That does happen sometimes, but I think relevantly similar conflicts can occur when two people have different standards of respect that they’re both applying consistently.
What, specifically, is the bundle of privileges associated with being “respected”? Does it merely entail “address people in accordance with commonly accepted norms of speech in polite Society”, or does it furthermore entail something like, “don’t question people’s competence or stated intentions; assume that people are basically honest and know what they’re talking about”?
If someone who is used to being treated like an authority said, “If you dare question my competence or stated intentions, then I’ll question your competence and stated intentions”, then there would be no conflation, but there’s still a problem, because competence and intentions are real things in the real physical universe, and literal questions about them should have literal answers. If any attempt to ask the literal question is construed as a mere attack to be met in turn with a counterattack, then the questions never get answered.
In 2019, Benjamin Hoffman commented on a private document about ways people can be hurt by speech:
What I see as under threat is the ability to say in a way that’s actually heard, not only that opinion X is false, but that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction. Frequently, attempts to say this are construed primarily as moves to attack some person or institution, pushing them into the outgroup. Frequently, people suggest to me an “equivalent” wording with a softer tone, which in fact omits important substantive criticisms I mean to make, while claiming to understand what’s at issue.
In a culture where people respect each other in a strong “don’t question people’s competence or stated intentions” sense, it’s possible to have a discussion that considers whether an interlocutor’s opinion X is false. Everyone makes mistakes, after all. It’s a lot harder to have a discussion that also considers whether the process that generated opinion X is untrustworthy, because that would seem to imply questions about the competence or intentions of people who believe X—that it wasn’t an “innocent” mistake.
Thus, to people enmeshed in such a culture of strong-sense “respect”, any attempt to use language to express hypotheses about systematically flawed belief-generators will end up sounding “harsh” to some degree. It’s not going to be easy to propose an equivalent wording, because the disrespect is implied by the hypothesis, not the mere choice of words.
The phrase “systematically flawed belief-generators” is kind of a mouthful. A shorter word that can be used to mean the same thing is bias. It’s going to be hard to overcome bias on a website where it’s hard to talk about biases.
In a discussion about how to moderate web forums, Wei Dai advanced a similar thesis: that since the nature of offense is about defending against threats to one’s social status, there’s no way to avoid giving offense while delivering serious criticism as long as it’s the case that it’s low-status for one’s work to deserve serious criticism.
I think there is something to this, though I think you should not model status in this context as purely one dimensional.
Like a culture of mutual dignity where you maintain some basic level of mutual respect about whether other people deserve to live, or deserve to suffer, seems achievable and my guess is strongly correlated with more reasonable criticism being made.
And just, what? What? This is just such a wild thing to say in that context! “[D]eserve to live, or deserve to suffer”? People around here are, like, transhumanists, right? Everyone deserves to live! No one deserves to suffer! Who in particular was arguing that some people don’t deserve to live or do deserve to suffer, such that this basic level of mutual respect is in danger of not being achieved?
What’s going on in someone’s head when they jump from “it’s impossible to avoid giving offense when delivering serious criticism” to “but we can at least achieve some basic level of mutual respect about whether other people deserve to live”?
If I had to guess, it’s an implied strong definition of respect that bundles not questioning people’s competence or stated intentions with being “treated like a person” (worthy of life and the absence of suffering). I’m imagining the response to my incredulity would go something like: “Sure, no one explicitly argued that someone didn’t deserve to live or did deserve to suffer, but people aren’t dumb and can read subtext. Complying with commonly accepted norms of speech in polite Society just makes it passive-aggressive rather than overtly aggressive, which is worse.”
But from the standpoint of the alleged aggressor who doesn’t accept that notion of respect, we’re not trying to say people should suffer and die. We just mean that opinion X is false, and that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction.
The people who interpret that as treating someone like a non-person think they’re being fair—and they are being fair with respect to a notion of fairness that’s about mutually granting a bundle of privileges that includes both a right to life and the right to not have one’s competence or stated intentions questioned. But that notion of fairness impairs our collective ability to construct shared maps that reflect the territory, and it’s not okay.
Come on man, you have the ability to understand the context better.
First of all, retaliation clearly has its place. If someone acts in a way that wantonly hurts others, it is the correct choice to inflict some suffering on them, for the sake of setting the right incentives. It is indeed extremely common that from this perspective of fairness and incentives, people “deserve” to suffer.
And indeed, maintaining an equilibrium in which the participants do not have outstanding grievances and would take the opportunity to inflict suffering on each other as payback for those past grievances is hard! Much of modern politics, many dysfunctional organizations, and many subcultures are indeed filled with mutual grievances moving things far away from the mutual assumption that it’s good to not hurt each other. I think almost any casual glance at Twitter would demonstrate this.
That paragraph of my response is about trying to establish that there are obviously limits to how much critical comments need the ability to offend, and so, if you want to view things through the lens of status, about how it’s important to view status as multi-dimensional. It is absolutely not rare for internet discussion to imply the other side deserves to suffer or doesn’t deserve to live. There is a dimension of status where being low enough does cause others to try to cause you suffering. It’s not even that rare.
The reason why that paragraph is there is to establish how we need to treat status as a multi-dimensional thing. You can’t just walk around saying “offense is necessary for good criticism”. Some kinds of offense obviously make things worse in-expectation. Other kinds of offense do indeed seem necessary. You are saying the exact same thing in the very next paragraph!
No, it’s the opposite. That’s literally what my first sentence is saying. You cannot and should not treat respect/status as a one-dimensional thing, as the reductio ad absurdum in the quoted section shows. If you tried to treat it as a one-dimensional thing you would need to include the part where people do of course frequently try to actively hurt others. In order to have a fruitful analysis of how status and offense relate to good criticism, you can’t just treat the whole thing as one monolith.
I hope you now understand how it’s not “such a wild thing to say in that context”. Indeed, it’s approximately the same thing you are saying here. You also hopefully understand how the exasperated tone and hyperbole did not help.
You absolutely do not “just mean” those things. Communicating about status is hard and requires active effort to do well at. People get in active conflict with each other all the time. Just two days ago you were quoted by Benquo as saying “intend to fight it with every weapon at my disposal” regarding how you relate to LessWrong moderation, a statement exactly of the kind that does not breed confidence you will not at some point reach for the “try to just inflict suffering on the LessWrong moderators in order to disincentivize them from doing this” option.
People get exiled from communities. People get actually really hurt from social conflict. People build their lives around social trust and respect and reputation, and frequently would rather die than lose crucial forms of social standing they care about.
I do not believe your reports about how you claim to limit the range of your status claims, and what you mean by offense. You cannot wish away a core dimension of the stakes of social relationships by simply asserting you are not affecting it whenever its presence in the conversation would inconvenience you. You have absolutely called for extremely strong censure and punishment of many people in this community as a result of things they said on the internet. You do not have the trust, nor anything close enough to a track record of accurate communication on this topic, to make it so that when you assert that by “offense” you just mean purely factual claims, people should believe you.
Like, man, I am so tired of this. I am so tired of this repeated “oh no, I am absolutely not making any status claims, I am just making factual claims, you moron” game. You don’t get to redefine the meaning of words, and you don’t get to try to gaslight everyone you interface with about the real stakes of the social engagements they have with you.
I thought Wei Dai’s comment was good. I responded to it, emphasizing how I think it’s an important dimension to think through in these situations.
But indeed, the way you handle the nature of offense and status in comment threads is not to declare defeat, say “well, it seems we just can’t take social standing and status into account in our communication without sacrificing truth-seeking”, and then pretend that dimension is never there. You have to actually work with detailed models of what is going on, figure out the incentives for the parties involved, and set up a social environment where good work gets rewarded and harmful actions get punished, all while maintaining sufficient ability to talk about the social system itself without everyone trying to gaslight each other about it. It’s hard work; it requires continuous steering and hard thinking. It definitely is not solved by just making posts saying “We just mean that opinion X is false, and that the process generating opinion X is untrustworthy, and perhaps actively optimizing in an objectionable direction”.
There is no “just” here. In invoking this you are implying some target social relationship with the people who are “perhaps actively optimizing in an objectionable direction”. Should they be exiled, rate-limited, punished, forced to apologize, or celebrated? Your tone and words will communicate some distribution over those!
It’s extremely hard and requires active effort to write a comment that genuinely communicates agnosticism about how a social ecosystem should react to people who are “optimizing in an objectionable direction” in a specific instance, and you are clearly not generally trying to do that. Your words reek of judgement of a specific kind. You frequently call for social punishment of people who optimize in such ways! You can’t just deny that part of your whole speech and wish it away. There is no “just” here. When you offend, you mean offense of a specific kind, and using clinical language to hide away the nature of that offense, and its implications, is not helping people accurately understand what will happen when they engage with you.
Sorry for the misunderstanding. That’s my bad. Your additional explanation here helps me understand what you were saying. (Readers should feel free to assign me lower status for being so dumb as to misinterpret it the first time!)
The rest of your comment, about ignoring the status implications of one’s speech, is interesting. As you’ve noticed, I am often doing a thing where I deliberately ignore the social implications of my or others’ speech (effectively “declar[ing] defeat” and “pretend[ing] that dimension is [not] there”), but I think this is often a good thing. I’m going to think more carefully about your comment and write another post explaining why I think that.
I still think you have rose-colored glasses about how discussion works—not how it should work, but how it does work—and that this is causing you to make errors. E.g., the habryka quote sounds insane until you reflect on what discourse is actually like. E.g., in the past debate we’ve had, you initially said that the moderation debates weren’t about tone, before we got to the harder normative stuff.
It’s probably possible to have your normative views while having a realistic model of how discussions work, but, er, I guess I think you haven’t reconciled them yet, at least not in, e.g., this post.
Actually, it still sounds insane to me. I find the top-level comment of this thread to be completely unconvincing, to say the least. (This, despite quite a bit of “reflect[ing] on what discourse is actually like”.)
The view which you describe implies that one cannot punish those whom one respects. (Because “respect” means “not wanting to inflict suffering”, and punishment is the infliction of suffering, it follows that if I respect someone, then I can’t punish that person.)
This obviously creates a huge problem, namely: what if someone whom we respect, does something bad and worthy of punishment? On your view, we either don’t punish them, or we have to stop respecting them.
If you then also declare that not respecting someone (required for punishing them) is bad, then this means that we can never punish bad behavior by someone who is a member of the ingroup. (Which is, I assume, the point.)
I don’t see why it’s good to punish people. If you threaten to punish me if I do a particular thing, I’ll just get upset that you might hurt me and likely refuse to interact with you at all. But you do sometimes have to hurt someone’s reputation as a side effect of some other necessary action, like warning other people that they’re untrustworthy.
In the high stakes case: MAD makes sense and retaliation is a better equilibrium than not threatening retaliation.
In the low stakes case: If you punch me, I will likely punch back (or otherwise try to get you punished). This generally works as a fine deterrent for most cases.
I feel like this isn’t a particularly rare or weird concept. Really as basic game theory as it gets.
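To make the “basic game theory” concrete, here is a minimal sketch in Python; the payoff numbers are illustrative assumptions, not anything from the thread. It shows the standard deterrence logic: a credible commitment to retaliate flips the attacker’s best response, which is why threatening punishment can produce the better equilibrium even though actually carrying it out hurts both sides.

```python
# Toy deterrence game with hypothetical payoffs to the attacker:
#   attack, defender does not retaliate: +2 (aggression pays)
#   attack, defender retaliates:         -3 (retaliation makes it not pay)
#   no attack:                            0

def attacker_best_response(defender_retaliates: bool) -> str:
    """Return the attacker's utility-maximizing move given the defender's policy."""
    attack_payoff = -3 if defender_retaliates else +2
    return "attack" if attack_payoff > 0 else "no attack"

for policy in (False, True):
    print(f"defender retaliates: {policy} -> attacker plays: {attacker_best_response(policy)}")
# defender retaliates: False -> attacker plays: attack
# defender retaliates: True -> attacker plays: no attack
```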
I think there’s a difference between consequences and suffering (as written in the OP) though.
If a child plays too many videogames you might take away their Switch, and while that might decrease their utility, I’d hardly describe it as suffering in any meaningful sense.
Similarly, in the real world, people generally get quite low utility from physical violence. It’s either an act of impulse not particularly sensitive to the severity of punishment (as in people with anger management issues), or of very low utility. It’s therefore easy to imagine that the optimal level of punishment for crime might be a decrease in access to some goods, and separation from broader society to decrease the probability of future impulsive acts harming anyone.
Not sure this is important to discuss, but I definitely would. If I remember correctly, this kind of thing had a pretty strong effect on me when I was small, probably worse than getting a moderate injury as an adult. I feel like it’s very easy to make a small kid suffer because they’re so emotionally defenseless and get so easily invested in random things.
Try to apply this logic to law enforcement, and you will see at once how it fails.
Habryka (or my steelman of him) raises a valid point: If members of a language community predominantly experience communication as part of a conflict, then someone trying to speak an alternate, descriptive dialect can’t actually do that without establishing the credibility of that intent through adequate, contextually appropriate differential signaling costs.
I wish there were more attention paid to the obvious implications of this for just about any unironic social agenda relying on speech to organize itself: if you actually can’t expect to be understood nondramatically when talking about X, then you actually can’t nondramatically propose X, whether that’s “do the most good” or “make sure the AI doesn’t kill us all” or “go to Mars” or “develop a pinprick blood test” or anything else that hasn’t already been thoroughly ritualized.
I don’t think the premise of “predominantly experience communication as part of a conflict” (emphasis mine) is a necessary premise for costly differential signaling to become an important part of communication.
To illustrate, we can apply the same analysis to the case of physical touch. Most people experience most touch not as part of an attempt to physically hurt them. This doesn’t mean that communication and negotiation about touch isn’t heavily shaped by the risk of physical injury or violence. It’s sufficient for a small part of touch to be related to physical violence to make it important that physical touch is accompanied with differential signaling that any given interaction will not involve violence.
Communication doesn’t need to be “predominantly conflict” in order for it to be important to differentially signal that you are trying to have a more conflict-focused or more descriptive-focused language[1].
And of course the problem of mimicry is real. Both engagements aiming for physical violence and speech acts aiming to covertly change the status dynamics will have an incentive to disguise themselves as whichever kind summons fewer defense mechanisms, requires less scrutiny, or enables more plausible deniability and avoidance of accountability in case the attack/status-move fails.
So even a small fraction of touch being violent, or a small fraction of speech being plausible-deniable social slapdowns masquerading as factual claims, can make it worth it to request effort and differentially costly signaling from the sender. No need for “predominantly”.
Here splitting things up into two types for the sake of simplicity, mirroring your distinction, though of course reality does not carve neatly into just these two categories.
I agree that there are more complicated intermediate cases in between the extremes; the extremes are easier to analyze because they are simpler.
Your analogy to the relation between touch and violence is logical, but I think that it relies on false albeit facially plausible premises. Our sensitivity to touch is highly socially conditioned and therefore contingent. It seems to have much more to do with negotiating territorial boundaries and dominance relations (e.g. who’s allowed to touch whom in what ways) than safety in a sense that isn’t mediated by such social positioning.
As a parent I see other parents tightly control how their babies are allowed to play with mine. Even under circumstances where there is no likely serious bodily harm from touch, they are anxious about the prospect of young children working out their own differences, anxious about permitting the wrong kind of intimacy or conflict, and visibly working to inculcate these anxieties in their children.
I think our attitudes towards speech are similarly mediated by considerations of territoriality and dominance, such that “taking offense” has to do with demonstrating one’s power to suppress adverse speech within one’s territory, sphere of influence, or dominance field, rather than the anticipation of harm outside of such considerations.
Finally, I don’t think the “even a small chance of violence” argument works for speech. The potential benefits of information exchange are very high, speech is a considerably less direct tool for harming someone than touch is, and the main way it’s effective is by arguments, which can be refuted if wrong. It doesn’t seem to me like there’s a locally stable equilibrium where speakers have to be constantly on guard against the perception of speech-violence, but speech is largely interpreted descriptively, unless it’s something like a predator-prey or farmer-domesticate relation where one group uses speech for violence, while the other group uses speech for production. If we assume rough equality, then the locally stable attractors are judicially mediated descriptive speech, and antisemantic power games.
Hmm, I don’t think I super buy this, though I agree with your arguments for these alternative considerations being more important than one might naively consider. My guess is there is a simulacra level thing going on here where some of the initial premises of cultural norms around touch were based on violence, and those premises have been re-used/co-opted for other purposes, but the underlying structure of concern with violence still looms large.
I overall agree with you that the case for highly asymmetric payoffs and as such strong need for costly differential signaling and associated negotiation is a bunch stronger for violence than for speech, and so a simple analogy with violence is not sufficient to establish that “predominantly” is a wrong descriptor, but I think the structural analogy is sufficient (as in, if we actually look at the mechanics that the violence analogy highlights).
For speech, my current take is that the degree to which costly differential signaling is necessary is highly dependent on a few different things:
Whether the domain of discussion has obvious relevance to conflict. E.g. discussing mathematical proof rarely has immediate relevance to conflict (though not never), and is cheap to verify, whereas discussing e.g. moderation norms and the logic of associated examples is very conflict-adjacent.
The general background level of people playing descriptive/conflicty mimicry games (as estimated by the participants, which is a very noisy measure, and often terribly mistaken in both directions)
The sensitivity of collective resource allocations to the things said in the conversation, i.e. how much is at stake in this social environment (a board meeting of a multi-billion dollar company will tend much more heavily towards needing differential signaling than a low-stakes purchase transaction in a grocery store, or an undergraduate science class)
The capacity of the surrounding environment to spot and identify people playing mimicry games and punish them if they do (which often requires the ability for the environment to talk about these kinds of things, or to have some abstractions to handle this)
And of course the degree to which people get rewarded, whether materially or socially, for making correct object-level observations
In addition to that, the differential signaling investments pay interest, in that a group of people who have built mutual trust that they are communicating descriptively will have to do less of that over time, unless some external factor changes their dynamics.
All of this makes me think that under some sets of social premises (when the stakes are high, subterfuge is hard to detect, group membership is unstable and short-lived, and the reward for truth is weak), you will basically always need to be concerned with some amount of differential signaling, even if you have largely successfully kept things descriptive so far.
I think I agree with you that there are forces pulling spaces and groups closer into stable attractors here, so there are feedback loops. I don’t believe those attractors are that stable, however; the long-term trajectory of basically any social space will shift over the years, and it requires active governance to notice this and redirect it towards different attractors (and beyond that, I don’t believe in there being just two stable attractors, though I am not fully sure whether you are arguing that).
I agree with this as stated, and think it’s consistent with the perspective I’ve articulated. The crux might be the extent to which ritualized conflict can and does deviate strongly from physical conflict, and relatedly whether legalizing (high-skill) duels would be proepistemic, albeit less so than reviving accessible courts of law, denormalizing ritualized legal boilerplate, and both legalizing bets and normalizing them as the sort of thing you do if you’re serious.
I think that on this particular spectrum, there are two locally stable attractors for closed social systems, though these attractors have different effects on the nonsocial environment, which can eventually push a system into the other attractor—this is approximately why there’s a large cyclical element to history.
So if a system isn’t clearly falling towards one attractor or the other, we can infer that it’s a frontier between other systems that are changing over time, and doesn’t self-govern.
This analogy doesn’t work at all.
Concerning touch: it’s entirely normal, and indeed is the default in our society, to not want people touching you without your consent, regardless of whether that touching is “part of an attempt to physically hurt [you]”. That is why physical touch requires special signals in order to be permissible—not the fear of it being violent! Indeed, the most common form of unwanted touching is not “violent” (except in some definitional sense where any non-consensual touching is violent); it’s just unwanted.
Conversely, there is no such thing as “being communicated with without your consent” on a public forum.[1] The concept is laughable. (Especially if there’s an “ignore user” function—which there always should be—but even otherwise.) And unlike touch, communication in such a venue is one-to-many, not one-to-one; so it makes no sense to apply analysis that is focused on one person giving or withholding consent to be communicated with.
Also, the dichotomies you describe are all somewhere between “tendentiously described” and “entirely nonsensical”:
“a more conflict-focused or more descriptive-focused language”
“speech acts aiming to covertly change the status dynamics” (vs. speech acts that don’t have such aims, presumably)
“plausible-deniable social slapdowns masquerading as factual claims”
This whole class of frameworks is fake. The only purpose of promulgating them is to stigmatize or outright forbid certain sorts of claims without needing to either establish their falsehood or even to explicitly mark them off-limits.
I am ignoring private-messaging functionality here. Generally speaking that isn’t the concern in such discussions.
Oh, I think you’re missing something pretty simple on the physical touch thing. In my model of the world, the potential for violence or other forms of seriously invasive acts (e.g. sexual assault) is why physical touch has gained norms whereby boundary violations around unwanted touch are considered relatively more important to police than most other unwanted behavior (which typically doesn’t have as much consent-focus).
In contrast, sometimes strangers try to engage me in conversations about national politics in a way I find a complete waste of my time and soul-sucking; sometimes I am relatively high status in a room and people talk to me in a sycophantic way that I find very tedious to put up with. But this doesn’t cross a line into “perhaps this person should be socially ostracized” because there isn’t an underlying threat in the unwanted speech, so I just find it a bit annoying.
If someone were to regularly push me around in ways I didn’t want, or touch me sexually in ways I did not want, I would make some move to have them socially ostracized, because the boundary here is much more important and it would be a sign that this person may indeed be violent or commit sexual assault. It’s a really important part of why unwanted physical touch norms are more sensitive than unwanted speech is (though to be clear there are many kinds of speech that are well worth policing too).
Thanks for the snark. I don’t particularly intend to respond further beyond this comment, as it seems to me that you’ve done little but strawman my position and then laugh at it, with maybe one sentence of useful argumentative content[1]. A few quick comments before I put this thread down:
Did you even read my goddamn footnote? Did you even read the comment that I am replying to? This is not my dichotomy, it’s the dichotomy of the comment I am replying to. Here is the original text, in case you somehow can’t be bothered to read Benquo’s comment: “If members of a language community predominantly experience communication as part of a conflict, then someone trying to speak an alternate, descriptive dialect can’t actually do that without establishing the credibility of that intent through adequate, contextually appropriate differential signaling costs.”
Like, man, sure, if you want to claim this whole “class of frameworks is fake” then have a conversation with Benquo about it. No need to throw this much snark at me.
Also, come on, on the object level it’s obviously the case that concern around violence absolutely shapes behavior around things like dating, casual touch, and an enormous number of other things involving touch.
Also, this aggregate statement just seems false. The vast majority of touch in our society gets initiated without consent. When people flirt with each other they do not generally ask each other explicitly before holding hands, friends frequently do not ask each other before leaning their head on each other’s shoulder, in many contexts people do not ask before shaking your hand, and family members routinely hug each other without asking first. Touch is absolutely not governed by a universal environment of explicit consent, and people do not by default “dislike all non-consensual touch”. Sure, if you define “consent” as “ultimately accepted positively by the receiver” then yeah, but that’s just begging the question.[2]
People do not just “have a preference against being non-consensually touched”, many preferences about touch are present because touch involves physical vulnerability. Completely denying that just seems insane to me.
It’s fine if you want to highlight that of course we have many norms around touch that are not primarily concerned with violence (for example, many norms probably are also the result of people being worried about spreading diseases). If that’s all you want to say, I have no objections. But somehow trying to claim that costly differential signaling does not apply at all to touch, because none of the negotiation of physical touch is about literal physical vulnerability seems really very confused to me.
Of course, the details of this are all largely a distraction. The very basic logic of my argument is simple and very straightforward: it does not need to be the case that a communication channel is predominantly concerned with a certain form of attack in order for that channel to care quite a bit about the marginal attack. If you desire another analogy: most computer traffic is not malware or exploits; nevertheless, it sure really matters a lot whether your specific message is malware or some kind of exploit.
(Unless someone else wants to make a bid for me to respond further in this thread, I am going to ignore further comments)
I agree that norms around physical touch are different from norms around speech. I think it’s a decent but not perfect analogy, intended primarily to convey the structure of my argument, not as a social precedent in support of it. Feel free to choose any other domain with asymmetrically large costs where much value is lost and costly differential signaling is required, even without the domain consisting “predominantly” of the costs.
I am not saying there are no cultural pockets where almost all touch is governed by consent, but it’s clearly not a universal. It’s plausible to me there are specific cultures, including maybe the one you live in, where this is common, but it’s really clear beyond doubt that there are many cultures with plenty of non-consensual touch, and that those cultures of course still have implicit negotiation and signaling around the physical vulnerability that comes with that and the potential for physical violence it implies.
Benquo was referring to your own views in his comment, so this foisting-off of responsibility seems a bit odd. Insofar as he agrees with you, I disagree with him as well. I don’t see what’s weird about that.
I was replying to views expressed by you. If you think that I have mis-apprehended what views you in fact hold, by all means tell me; but that’s not what you seem to be saying?
This is… such a vague statement that I can’t disagree with it, as such… but it’s also not really responsive to what I wrote. (But if you did mean this as a contradiction of what I wrote, then—I disagree and think that you are mistaken.)
I didn’t say anything about explicit consent. It’s true that “friends frequently do not ask each other before leaning their head on each other’s shoulder”, but that doesn’t make this non-consensual! Of course it’s consensual. (Usually. But then, there are many cases when someone initiates this sort of “implied-consensual” touching, but then it turns out that consent wasn’t actually implied. Whoops! In such cases, the distinction between implied consent, and lack of consent, suddenly becomes quite salient.)
And it’s true that “family members routinely hug each other without asking first”. It’s also true that many, many people experience this as a violation of their autonomy and their boundaries, and greatly dislike it. (Stuff like this is about as novel and surprising as “what’s the deal with airline food” at this point.)
Perhaps if you understand “physical vulnerability” in a broad way. But otherwise, no, I do not agree. My point is that the possibility of violence usually has little to do with it.
I wouldn’t say “none”, perhaps. But almost none.
Er, you’re the one making the analogy, so surely it’s up to you to choose a better example, not your interlocutor…
I agree he could have chosen a better example. But like, are you trying to understand Habryka? Or are you just trying to litigate proper use of analogies? Your comment reads to me as 20% responding to his point, and 80% litigation. This sort of thing feels like an advanced version of arguing about definitions instead of just tabooing the word and talking about the underlying point each person is trying to make.
What you’re doing here is just this.
I think that habryka’s point, as described in the top-level comment of this thread and as explicated by the analogy he used, is wrong. If he wants to amend his claims to make them less wrong (possibly by choosing a different analogy and then reasoning on that basis, or possibly in some other way), that’s fine, he can do that, and I’ll be glad to read any such corrective commentary. But I can’t do that for him.
The idea that disagreeing with someone just means that the disagreer doesn’t understand and needs to work harder to try to understand needs to die a flaming death. Yes, sometimes misunderstanding happens, but sometimes people just think that you’re wrong. They understand what you’re saying and they disagree. Your task then is to convince, not to explain—and certainly not to whine about your interlocutors not working hard enough to convince themselves of your claims on your behalf.
Hmm, seems like I didn’t communicate well enough. Trying again.
I believe you understand and disagree with his point. I also believe you think his analogy is bad. When I first read your reply I thought you were disagreeing because of the disanalogies, and for no other reason. I no longer think this.
Your disagreement with his point, and your critique of his analogy, felt very mixed together. Which, like, he’s using the analogy to explicate his point for a reason, so fair. But there’s something like a difference between using an analogy as a supporting argument, and using an analogy just to point at the thing.
As far as I can tell your comment doesn’t address this point directly? I’m asking for something like a clearer distinction between disagreeing with his point and critiquing the analogy, especially in this case, where I don’t think the particulars of the analogy were central to his point.
I mean, the analogy is “bad” insofar as the point that it supports is wrong. Like, the two things which are claimed to be analogous in a certain important way, are in fact not analogous in that important way. (Or so I claim!)
(It’s like if I said “the sky is like a glass dome; if you fly high enough, you’ll crash into it; and since the sky is indestructible, much like a glass dome is, you also can’t break through it”. Well, no, you in fact will not crash into the sky; in this way, it is precisely not like a glass dome. And of course glass is totally destructible. The analogy successfully communicates my beliefs about the sky—that it’s a solid barrier which can be crashed into but not broken through. Those beliefs happen to be totally wrong. The glass dome analogy is “bad” in that sense.)
True, I did not address that. I’ll do so now.
So, let’s recall what “it sure really matters a lot” means, specifically, in this context. The key claim from the earlier comment is this: “Communication doesn’t need to be ‘predominantly conflict’ in order for it to be important to differentially signal that you are trying to have a more conflict-focused or more descriptive-focused language.”
In the malware/exploit case, the analogous claim would be something like:
“Computer traffic doesn’t need to be ‘predominantly malware or exploits’ for it to be important to differentially signal that you are trying to send innocent, non-malicious data.”
Well… you can probably see the problem here. There are basically two scenarios:
There exists a totally unambiguous, formally (which usually means: cryptographically) verifiable signal of a data packet or message being non-malicious. That signal gets sent, we check it, if it doesn’t check out we reject the data, the end. (If the signal can be faked after all, then we’re just fucked.)
There is no such verifiable signal. In this case, malicious traffic is going to be sending all the signals of non-maliciousness that “good” traffic sends. “A differential signal of innocence is being intentionally sent” is almost completely worthless as a basis for concluding that the data is non-malicious. Instead, we have to use complicated Bayesian methods to sort good from bad (as in email), or we have to enter into an arms race of requiring, and checking for, increasingly convoluted and esoteric micro-signals of validity (as in CAPTCHAs, UA sniffing, and all the other myriad tricks that websites use these days to protect themselves from abuse). (And any client that deliberately sends the signals we’re checking for is actually more likely to be a bad actor!)
This situation… is also not analogous to “posting on a public discussion forum”, which looks nothing like either of the above cases.
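For what it’s worth, the two scenarios above can be sketched in code. This is a hypothetical toy, not anyone’s actual system: in the first scenario, acceptance reduces to checking an unforgeable tag; in the second, all that’s left is heuristic scoring that a determined mimic can game. The key, the token list, and the acceptance rule are all illustrative assumptions.

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # hypothetical pre-shared key

# Scenario 1: a formally verifiable signal (here, an HMAC over the message).
# Verification is binary: either the tag checks out or the message is rejected.
def verify_signed(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Scenario 2: no verifiable signal, so we fall back on scoring heuristics,
# knowing that malicious traffic will imitate every cheap signal of innocence.
SUSPICIOUS_TOKENS = [b"<script>", b"DROP TABLE", b"\x00"]

def heuristic_score(message: bytes) -> int:
    """Crude spam-filter-style score: higher means more likely malicious."""
    return sum(message.count(token) for token in SUSPICIOUS_TOKENS)

def accept(message: bytes, tag: bytes | None = None) -> bool:
    if tag is not None:                   # Scenario 1: check the hard signal
        return verify_signed(message, tag)
    return heuristic_score(message) == 0  # Scenario 2: Bayesian-ish guesswork
```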
Aside: I do want to try to help you here, and warn you that it is my opinion Said Achmiz regularly cannot understand basic points and ideas and criticizes his interlocutors with extremely harsh language – regular unnecessary incredulity with extra question marks and exclamation points and italics, describing your positions as laughable, obviously wrong, deeply corrosive, etc – and I would encourage you to not continue trying to explain something to his satisfaction unless you would also find it a similarly worthwhile activity if your interlocutor was an LLM in whose system prompt it was written that it should not be able to either agree with or understand your point.
(You’re of course totally welcome to keep trying, I just felt I’d be irresponsible not to warn you.)
Excuse me, what?
Italics is “harsh language” now??
Separately, your phrasing:
… implies that I was referring to some person or people with the phrase “needs to die a flaming death”. I hope you can see how totally unacceptable that implication is. (I won’t belabor the point that it’s false; you know that. But please correct the phrasing; falsely implying that I expressed a desire for a person to die violently is absolutely not ok, even if that implication was accidental, as I assume that it was.)
Oh yeah that’s fair, edited to other examples.
Thank you. (I’ve updated my vote on your comment accordingly.)
Your position is laughable, obviously wrong, and deeply corrosive. I routinely succeed at explaining things to Achmiz’s satisfaction.
To illustrate, I’ll quickly pick two arbitrary examples off the top of my head. One, on this website in February 2023, I explained the existence of situations in which trying to minimize the damage from errors is preferable to trying to not commit errors. Two, in private correspondence earlier this month about an unpublished draft of mine, Achmiz wrote that an analogy I had used was “Confusing—because I don’t really get what sort of situation it can be analogous to [...] [C]onsider this objection to constitute a request for such examples!” I wrote up an explanation of a real-world situation of the class I was thinking of when I wrote the analogy, to which Achmiz responded, “Hmm … ok, I see what you mean [...] Anyhow, you’ve answered my question, yeah.”
To empirically test your claim about Achmiz vis-à-vis large language models, I supplied the explanation about minimizing damage from errors to Claude Opus 4 with the custom style prompt, “Emulate an obstinate interlocutor that should not be able to either agree with or understand the user’s point.” The difference between Achmiz’s actual response from February 2023 (“I see, thanks” and a relevant followup question which I also had no trouble answering) and the LLM’s response (“I don’t see how this example makes any sense at all” followed by another 200 words of blatant misreadings and non sequiturs) is quite apparent. Your claim that Morrison would find interacting with Achmiz and the LLM-prompted-for-obstinacy “similarly worthwhile” is clearly absurd. (And as it happens, Achmiz’s reply to Morrison seemed entirely cogent to me.)
I’m presenting this counterevidence to make a point, but really, I don’t think you believe what you wrote. The existence of examples of Achmiz being satisfied with explanations and the outcome of the LLM prompt were both easily predictable; you just didn’t think anyone would call your bluff on a hyperbolic insult. You were wrong, but more importantly: do you really think this is a good look for you or the website?
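For readers curious how the empirical test described above might be reproduced, here is a sketch assuming the Anthropic Python SDK; the model identifier and the placeholder text are illustrative stand-ins, not the exact setup used.

```python
import anthropic

# Placeholder for the February 2023 explanation supplied to the model.
explanation_text = "(paste the explanation about minimizing damage from errors here)"

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

response = client.messages.create(
    model="claude-opus-4-0",  # illustrative identifier for Claude Opus 4
    max_tokens=1024,
    # The style prompt quoted in the comment above:
    system=(
        "Emulate an obstinate interlocutor that should not be able to "
        "either agree with or understand the user's point."
    ),
    messages=[{"role": "user", "content": explanation_text}],
)
print(response.content[0].text)
```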
Good to know it’s ever happened! It is extremely uncommon in my experience of reading threads with Said in them.
Added: To be fair I wrote my comment upthread feeling my peak frustration about (what I read as) Said’s unproductive commenting style. I wrote to someone else at the time it would probably be my worst contribution to this ongoing conversation of hundreds of comments and hundreds of hours of conflict, so I’ll admit it is probably my peak sloppily-stated-times-aggressive comment and not to my usual standards.
Still, I think it’s more reasonable than you’re giving it credit for. I think it’s a fairly standard human phenomenon to stick one’s heels in and be unwilling to budge on a position regardless of reason or argument, and I suspect Said of doing that from time to time (or alternatively of being quite dense), and you’ve told me before you also have felt that he seems sometimes to be kind of dense about very simple things, and I think people often do things they shouldn’t when they feel threatened, so I think you’re overstating that you think my idea is laughable and such.
And also normatively correct.
But note the switch you’ve performed: you’ve now substituted “I suspect Said of doing that [‘stick[ing] one’s heels in and be unwilling to budge on a position regardless of reason or argument’] from time to time” in place of your initial “Said Achmiz regularly cannot understand basic points and ideas”.
It should be obvious that these are two extremely different things.
Being unable to “understand basic points and ideas” is clearly bad. But, as Zack points out, it’s also just obviously untrue of me.
Being “unwilling to budge on a position regardless of reason or argument” is… well, as I note in the link above, it’s often entirely sensible even if the reasons are good and the argument convincing. But why assume that? Surely it’s not always the case? If your reasons are dumb and your arguments bad, then it would obviously be wrong for me to “budge”, yes? (I am reminded, here, of the discussion about “frame control”.)
But then the complaint is just that I sometimes (often?) find people’s arguments to be bad. Well, yeah. Is this… surprising? Surely you don’t think that most people’s arguments are good…?
Heck, this even applies to “understand[ing] basic points and ideas” as well! We have this implication:
“If a point or idea is basic, coherent, not nonsense, etc., then a reasonable, non-dumb person should be able to understand it.”
Well, you know what they say: one man’s modus ponens is another man’s modus tollens.
So, in the end, all you’re really saying is “in various cases, I disagreed with Said about whether some argument was good, whether some idea was coherent and straightforward, whether some reasons for some belief were good ones, etc.” Yes, it’s true! I sometimes disagree with people about such things. Guilty as charged! I do indeed think that sometimes (maybe even… often?) people claim that some idea is “basic”, but actually it’s incoherent nonsense. And sometimes (maybe even… often?), people think that some argument is good, but actually it’s bad.
But I’m always open to being convinced otherwise.
By the by, I just want to note another instance of you saying things are obvious. I perceive this to often be an attempt to equivocate between simply describing the world as you see it and quietly implying that your interlocutor’s perspective should be entirely dismissed (on at least this issue), and, even worse in conflict scenarios, implying that your interlocutor is a fool. Insofar as this is true, it seems especially egregious because sometimes you are wrong (e.g. saying it’s ‘obvious’ that there’s no distinction in how hard it is to write comments or posts).
It’s not easy to have a conversation while someone takes every opportunity to not only say that they think you’re mistaken, but to say your perspective should be dismissed and imply you’re a fool for having it. Again, as Zack would be quick to say, perhaps you believe they’re a fool! But if that’s what’s intended, it’s not what’s happening here, as it’s being equivocated with or carefully masked.
(I am sure Said will point out he has never out-and-out called someone a fool, and so it is ‘obviously wrong’ of me to think that this reading is desired.)
False. It’s very easy. In fact, not only are you mistaken about this, but your perspective should be entirely dismissed. (But you’re not a fool—at least, no more than I am, on the whole, or than anyone else is. Indeed, that’s what makes this all so frustrating.)
Now, will you punish me for saying this? Will you try to use this comment as evidence to accuse me of… whatever it is that you’re claiming is bad about my comments? Or will you say “ah, you see, now that’s better, since it’s honest and straightforward instead of being masked; this is good and proper commenting, no complaints”?
I predict that you’re going to do the former, and not the latter. I make this prediction because I don’t believe that you’re sincere in your argument that it’s the non-explicitness that’s the problem. I think that you’re making that argument in bad faith.
I stand ready to apologize if I turn out to have been wrong about this.
I don’t believe my perspective on communication should be entirely dismissed. I think I have managed to navigate a lot of social situations unusually well – herding spiky and oddball rationalists in-person and online for nearly a decade into one of the few highly active and alive intellectual web-forums, helping build Lightcone Infrastructure to run many very successful and difficult events (e.g. LessOnline), writing good LessWrong posts about social dynamics, etc, and I think this is intertwined with my perspective on social interactions, what is being communicated, and how. Many smart and wise people have things to learn from my perspective, as do I theirs.
I agree that your perspective on communication should not be dismissed, and that your successful navigation of social situations at Lightcone and elsewhere are evidence for this. Achmiz is definitely obviously wrong about this. Shame on him!
But that’s not the interesting part of the grandparent. The interesting part is where Achmiz accuses you of bad faith (pretending to entertain one set of feelings while acting as if influenced by another) for criticizing his comments (above and elsewhere) for being allegedly passive-aggressive. I find it surprising that you would reply to the grandparent but not address the bad faith accusation (as contrasted to just not replying because you’re busy).
Maybe you think you shouldn’t be subjected to such accusations and don’t want to dignify them with a response. But I’m really interested in what your response would be, particularly because, separately from the bad faith accusation, the object-level complaint seems cogent: if passive-aggressive comments are bad, then saying the same thing in an overtly aggressive manner must be better, right? (If not, then the passive-aggressiveness wasn’t the problem, and presumably previous criticisms saying that were in error.)
For myself, when I get accused of bad faith, I usually do think it’s worth responding, because given human nature, I don’t think it’s crazy for someone to suspect that I might have some hidden motive for my speech that hadn’t already been made clear, and I’m eager to allay such concerns by trying to dump more context for why I said what I said. I don’t think I’m entitled to an assumption of good faith from my interlocutor; I think I can earn it. I think you can earn it, too.
I am busy, so not responding to all parts of these comments, especially not those with longer inferential distance, and prolly won’t keep replying here for at least a few days. But to be clear, I said that Said was attempting to call people fools[1] with plausible deniability, and that he should do so without the subterfuge; then he denied that he thought people were fools, so there’s not a natural next step here where I praise him for saying it directly. I’m not sure I believe him, but this does not seem like the margin to prosecute that criticism.
As an aside, I have a long draft blogpost from a year ago explaining why plausible-deniable aggression and plausible-deniable rule-breaking is really bad, which I hope to finish and publish one day. This isn’t some new position I have found in the course of this political conflict, it is something I have believed for a long time (and have been slightly surprised that you find it so improbable that I would hold this position, I would’ve thought it relatively clear why it is at least compelling once the hypothesis was raised).
“Calling them fools” here is a stand-in for “is intentionally attempting to lower their status, and not incidentally doing so as a side effect of criticizing their writing”. The specific word “fool” is not the load-bearing part.
Hope whatever tasks you’re busy with go well!
I look forward to the post. (I’d love to read the draft, if you’re comfortable sharing.)
The reason I have the opposite intuition so strongly is because I think regulating social status emotions is a distraction from the intellectually substantive work we’re here to do. Norms that encourage people to overtly do more of that seem super toxic, and norms that encourage inquisitorial scrutiny of subjective minutiae of subtext to make certain it’s not happening covertly seem super toxic in a different way. I’d rather just let people’s emotional regulation happen in the background without having to talk about it. If that means some people get away with slipping in social “attacks” through the subtext of their writing without getting punished except by subtextual counterattacks from their interlocutors, that seems like a pretty normal part of talking naturally as a human, and I’d rather just let it happen than try to heavy-handedly control it.
The word “fool” may not be load-bearing in your comment, but I’m pretty sure it is load-bearing in Achmiz’s denial (“But you’re not a fool [...] that’s what makes this all so frustrating”). He’s trying to lower your status by calling you dishonest, not stupid.
It’s not Ben’s perspective on “communication”, in general, that should be dismissed, but his perspective on the specific thing I quoted. I stand by that view.
Isn’t it, though? You can just keep talking! Why would them implying you’re a fool end the conversation?
Okay, maybe if your only goal in the conversation is to persuade the other person, them appearing “closed-minded” (by implying you’re a fool) implies that you won’t succeed in persuading them, so you shouldn’t bother? But it can still be worth talking if you have other goals, like persuading third parties or (this is an important one!) being persuaded yourself if you’re wrong.
It’s a common failure mode on the Internet (and, by extension, on LW) that people believe a public response to/a criticism of an author’s writing or ideas is meant to convince the author that they’re wrong. It’s not. As a general matter, people believe what they believe and it takes a tremendous amount of time and effort to get them to change their minds, regardless of how smart or LW-”rational” they are.[1] It’s very rarely worth it to try to convince any single person of anything, unless they are super high-status or have a ton of decision-making power. And in any case, if I want to personally convince someone of something, I just PM them; humans more readily admit to mistakes when they’re not blasted in public from the outset.
On the contrary, explaining in public why someone is wrong is emphatically for the benefit of the audience. It is inherently performative. The author is just one guy,[2] but the audience is usually a lot more guys. “This author’s argument here is wrong and you shouldn’t believe them” and “this author’s argumentative flaws illustrate why they’re fundamentally confused/epistemically broken/dumb and you shouldn’t listen to them again in the future” are both significantly higher-impact and therefore more worthwhile than “you made a mistake here, please change your mind.”
Of course, one man’s bug is another man’s feature. I called it a “failure mode” above, but that’s only relative to a specific set of end goals.[3] Another very common set of goals contains desires like “increase subjective hedonic enjoyment of online social interactions, as an end goal in and of itself.” If one subscribes to this, it’s not hard to figure out why others calling you a fool is something to be avoided and proactively guarded against.
And even when they do change their minds, it’s usually in private, over days or weeks of mulling over the problem in the comfort of their own abode.
Not meant to imply any specific gender.
Such as promoting epistemic hygiene, increasing map-territory correspondence, rewarding proper reasoning and disincentivizing sloppy thinking… you know, all the good stuff LW pretends it’s about.
That’s a fair point, but if someone is fully on the side of “I’m not actually trying to have a conversation with the author, I’m writing for the rest of the readership”, then it is bad faith to write as though you are having a conversation with them, e.g. addressing them with “you” and putting questions to them, which I also think is a common bad discourse pattern.
This is a strawman of @sunwillrise’s point. He did not say anything about “not trying to have a conversation with” someone. What he said was:
It is entirely possible to have a conversation with someone without any intent or expectation of convincing that person that they’re wrong. (Indeed, it’s by far the more common scenario.)
Your response, on the other hand, implies that the only reason to have a conversation with someone who you think is wrong, is to try to convince them that they’re wrong. I hope you can see how silly that is. (I don’t think that you actually think this—but what you wrote implies it.)
I am not strawmanning sunwillrise’s position, I am making an additional related point.
I do not believe it is the only reason, but it is a common reason, and it is costly for people to repeatedly come to believe (based on the common social cues) that this is what is happening and engage on those terms, only to find out later that it is not, and that they were engaged in a different social game they would rather not be playing, where the goal is to make them look bad and force them to defend themselves.
You wrote:
Whom does this describe? Who has expressed any such sentiment?
The implicature of your comment was that the quoted bit was a restatement of the point to which you were responding. You were absolutely strawmanning sunwillrise’s position.
If someone mistakenly believes that their interlocutor in a public conversation on a public discussion forum is just trying to convince them, personally, to change their minds, then this is almost certainly an error on their part. As a moderator of said forum, it would behoove you to spread awareness of the fact that such is not the default or the usual motivation for people to have public conversations on said public forum.
This, too, is a strawman: “substantially motivated by wanting to make them look bad” is a tendentious description, and is certainly not one which most people would endorse, as applied to their own contributions to such conversations.
Insofar as conversations are 100% about communicating object-level information about a topic, of course social information about status and personal hostilities is irrelevant to the point of a conversation. Insofar as one is not blinding oneself to the other dynamics but is also living in them, calling someone a fool a lot is quite relevant to whether to continue interacting with them, as I’m pretty sure you’re aware.
Our comment threads are going in circles, and these perspectives are not being bridged. I expect more comments like this are not going to change much.
I have much work to do other than replying to these long threads with you and your allies under the clouds of this political conflict around whether to ban you from LessWrong. Do not expect many more replies any time soon.
I think there are a few things being conflated in the situations you’re talking about.
1) “Do I respect you, as a person?”
2) “Do I respect your ideas on the topic at hand?”
3) “Am I cowed away from challenging what I don’t [know that I] respect?”
With these distinctions in place, we can keep a simple definition of respect as “Honestly evaluated value of engaging”, and stop polluting the term “respect” with unlike things which people sometimes want to pretend are “just respect”.
1) “Do I respect you as a person?” fits well with the “treat someone like a person” meaning. It means I value not burning bridges by saying things like “Go die, idiot”, and will at least say “No thanks” if not “What justifies your confidence in what you’re saying?” when you’re saying things I have a hard time taking seriously.
2) “Do I respect your ideas on the topic at hand” is closer to “Do I see you as an authority”—at least, in the sense in which it’s legitimate. If you’re a math professor and you say something counterintuitive about math, I’m more likely to assign it higher credence and all that. This isn’t really as important, because if we have mutual respect for each other then we can explicitly sort out whether it’s worth taking your ideas on math seriously.
3) “Am I cowed away from challenging” is generally what’s going on when people are saying “if you dare question”—because why is it a “dare” if it’s just about honestly held beliefs and not fear? Whenever fear is driving, honest beliefs and updating on evidence can’t drive, and that’s bad. But if I’m insecure about how that conversation might turn out, then it’s tempting for me to try to frame things as “You’re not respecting me, and I’m an authoritah”. Because that sounds a whole lot better than “I’m afraid that if we honestly hash out how much my ‘expertise’ is worth here, you’ll come to the honest conclusion that it’s less than I know how to handle”.
The somewhat trickier part is when other people are acting as “systematically flawed belief-generators”. Say I’m a bad actor, and I want to bully people into respecting my authoritah. A healthy ecosystem will recognize the value of my inputs for what it is, and that is necessarily humiliating for me. So what happens when I come in posturing and ignoring subtle hints that people don’t respect my ideas on the topics at hand?
Either I’m successful in cowing the whole community into submission, or the signals get louder. If the community can’t handle loud signals of humiliation, then as long as I’m more shameless than the community is willing to humiliate, the community can’t correct my behavior. And if I’m unwilling to change, people will justifiably lose respect for me as a person—because at this point, what’s the value of a bridge?
In order to not fall victim to such abuses, and keep updates flowing freely, the community has to be quick enough to notice and exclude people acting on these temptations before the conflict gets more heated than its tolerance for heat allows. And that can be tricky, because 1) the bad actor is tempted to do their best to disguise things and play innocent, 2) the more conflict-averse are tempted to run defense too, and 3) the “bad actors” are often valuable contributors who are erring a little, rather than any sort of “black and white” situation.
Basically the whole problem is that making any sort of “respect” mandatory necessarily enables bad actors.
For example, you say:
Now, suppose that Alice does not think that Bob is capable of honestly collaborating on a project of explicitly sorting out whether Bob’s ideas on math are worth taking seriously. (Perhaps Alice thinks that Bob lacks sufficient self-awareness, or that Bob’s ego is too big and too fragile, or that Bob is a crank, or… etc.)
By modus tollens, this implies that Alice does not “respect” Bob (in the #1 sense).
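(To make the inference explicit, here is a minimal formal sketch, with $R$ and $S$ as placeholder letters I’m introducing: read $R$ as “respect in the #1 sense obtains between Alice and Bob” and $S$ as “whether Bob’s math ideas are worth taking seriously can be explicitly sorted out”. The quoted claim is the conditional, and Alice denies the consequent:)

$$R \rightarrow S, \quad \neg S \;\vdash\; \neg R \qquad \text{(modus tollens)}$$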
But whoops! It turns out that the rules of the space in which Alice and Bob are having their discussions mandate that participants “respect” one another. Alice, being a faithful rule-follower, scrupulously behaves as if she “respects” Bob. However, Alice is certainly not going to pretend that her evaluation of Bob’s capabilities is otherwise than it is; she remains quite convinced that there’s no hope of explicitly sorting out whether Bob’s math ideas are worth taking seriously.
This breaks the quoted implication, as far as the apparently-observed rules of the community are concerned. Alice and Bob respect each other, or so it would seem; and yet there isn’t any explicit sorting-out happening. Instead, Alice just silently disvalues Bob’s math ideas.
You say:
And this is all true, of course. And so long as “respect” is mandatory, these problems cannot be solved.
Yep. Because you can’t require actual respect, any more than you can require people to believe that the sky is green. You can intimidate people into claiming belief/respect, but this necessarily comes at the cost of honesty and ability to update towards the truth.
That doesn’t mean we should tolerate unnecessary hostility. “Go die, idiot” is generally bad behavior, but not because it’s “lacking respect”.
I basically agree with you, but this
confusingly contradicts (semantically if not substantively)
It is lacking respect, but it’s not bad because it’s lacking respect. The badness is separate.
Fully agreed. Fortunately, this sort of thing is mostly not a problem on Less Wrong.
(There are exceptions, of course, but they are usually dealt with fairly competently by the mods.)
Discussion of how people or institutions do things in systematically bad ways could focus on norms and incentives, rather than on the people or the institutions themselves. This covers the vast majority of situations (though not all), and is also more constructive in those situations. False beliefs are the easy case, but they’re not the only case where it’s practical to argue people into self-directed change.
The people who are enforcing bad norms or establishing bad incentives, while acting under other (or the same) bad norms or incentives, would often acknowledge that the norms and the incentives are no good, and might cooperate in (or even lead) setting up coordination technologies that reduce or contest the influence of these forces. I think it’s worth making sure this path really is closed before resorting to the more offense-provoking methods (even rhetoric).
Without concrete examples, everyone can simply agree with all the arguments about norms and incentives, and then continue as before, without changing their behavior in any way. Al Capone is happy to say “Of course tax evasion is bad!”—as long as we don’t try to prosecute him for it.
No, what happens is that those people acknowledge that the norms and incentives are no good, and then do not cooperate in setting up those coordination technologies, and indeed often actively sabotage other people’s attempts at such coordination.
(I think this post deserves to be Frontpaged, because it’s trying to explain useful, relevant timeless insights about psychology. I’m saying this because I was mildly surprised that one of my posts from earlier this week was relegated to Personal.)
This reads to me like an indirect public airing of grievances related to some drama we have insufficient context for, at least without investigating other threads. Without this context, the post is ungrounded, difficult to make sense of, and reads to me as personally-motivated meta-level slop.
I don’t think this deserves front page status as I didn’t find anything useful, relevant, or timeless in it, much less all three.
(Frontpage is not a quality filter, it’s a topic filter. The topic is clearly timeless, and while I agree some interpretations could be too inside-baseball-y, the post is clearly at least aspiring to broader relevance.
In general, bar a fairly small set of exceptions, I think the LW habit of taking local grievances and trying to abstract the disagreement into general principles using non-politicized examples is a good one, and I would like people to keep doing it)
I’m sorry you didn’t like the post. (Please downvote if you haven’t already.) Maybe the following brief summary will help clarify what I was trying to get at, since the post as written didn’t work for you or Cole?
There’s a famous Tumblr quote complaining about situations where people strategically conflate two meanings of “respect”: saying, if you don’t “respect” me (meaning, defer to me when I claim to know something), then I won’t “respect” you (meaning, I can mistreat you). This results in conflict.
I’m saying there can also be other situations where rather than one person using two definitions of “respect” inconsistently, two people are using different definitions, with each of them being consistent in their own usage. One person thinks that “respect” means that people should defer to each other when the other person claims to know something, and the other doesn’t think that “respect” entails that, and that results in conflict.
(End summary.)
Maybe you don’t think that’s interesting, but a lot of people shared that Tumblr quote, so presumably someone is interested in this kind of social theorizing?
In retrospect, this post was not my best work. (In particular, I’m embarrassed that I misunderstood the intent of the quote that I used in the title.) I guess that’s the way it goes sometimes. (I try to write good posts and not publish bad posts, but sometimes I don’t notice that a post is bad before seeing how it lands in the comment section.)
Yeah, the first paragraph or so was interesting; the LessWrong-related examples were just too high-context.
I understood the point you just summarized, just not how it fits into the broader picture that precipitated this post.
Yeah I don’t understand what’s happening here at all.
(Most of the mod team agreed about your earlier post after it was promoted to our attention, so it’s been Frontpaged. We have a norm against “inside baseball” on the frontpage: things that are focused too much on particular communities associated with LW. I think the mod who put it on Personal felt that it was verging on inside baseball. The majority dissented.)
(And the LW team doesn’t exempt itself from this rule, e.g. this podcast with Habryka was considered to be a Personal Blog.)
I have no power to decide what’s on the frontpage, but I’m glad these posts aren’t there, because they make general points in a way that reads to me as a continuation of meta-discussions about the site, and they use examples from those discussions not so much as examples (you could have easily made up more relatable examples, and the examples you chose are not salient to more than maybe a hundred people) but as what seems to me to be a way to make points against what’s happening in those conversations. This feature makes these posts feel like thinly-veiled drama posting to me.
To be fair, I don’t know your actual motivations, so this is just based on the vibe I’m getting reading them, but I think the vibe I (and others?) pick up reading a post is pretty important for what should be on the frontpage.
Honestly, a lot of my work on this website consists of trying to write “the generalized version” of something that’s bothering me that would not otherwise be of philosophical interest. I just think this has a pretty good track record of being philosophically productive! For example, you yourself have linked to my philosophy of language work, even though you probably don’t care about the reason I originally got so obsessed with the philosophy of language in the first place. To me, that’s an encouraging sign that I got the philosophy right (rather than the philosophy being thinly-veiled politics).
Yes, I think this instinct has mostly served you well. In this instance, though, it appears to me to cross some hard-to-define line, maybe because it’s about drama I’m tangentially involved in?
I roughly think that the previous post was more clearly on the “frontpage side” than this one, and this one is edge-casey. (I’m only one of the mods and we don’t all agree all the time, but for people modeling where the line is in mod-judgment-aggregate, uh, that’s my current take)
If someone is “optimizing in an objectionable direction” doesn’t that just mean they’re your enemy? And if so, aren’t the valid responses to fight, negotiate, or give up? I don’t understand what you’re concretely expecting to happen in this situation. It seems like you’re expecting the bad guys to surrender just because you explained that they’re bad, but I don’t see what would motivate them to do that.
I don’t think this necessarily refers to intentional malice. Suppose there is someone who makes important, impactful decisions based on astrology. You can’t just tell them “hey, you made a silly mistake reading the precise position of Mercury retrograde here, it happens”. You have to say “astrology is bunk and basing your decisions on it is dangerous”. But in a culture in which the rule is “if someone strongly enough believes in something—like astrology—that they’ve built their entire identity around it, attacking that something is the same as an attack on their person which will inflict suffering on them, and therefore shouldn’t be done”, that action is taboo. Which is the problem that the post gestures at, I think.
Of course one can argue that maybe it’s strategically better to not go too hard—if for example astrology is a majority belief and most people will side with the other person. But that’s a different story. If saying “hey people, this guy believes in astrology! Stop listening to him!” is enough to make them lose status, should you be able to do it or not? Which is more important, their personal sense of validation, or protecting the community from the consequences of their wrong beliefs?
Right, but the problem is the people who believe in astrology (or who work for an astrology company, or whose friends are into astrology, etc.) will say “no, it’s wrong to criticize astrology” and the people who don’t have a stake in astrology will say “yes, it’s okay to criticize astrology” and there’s no neutral arbiter to adjudicate the disagreement. You haven’t gotten anywhere by going up a meta-level because the stakes are still the same.
As for intent, I tend to favor treating intentional and unintentional machiavellianism the same, as doing otherwise just amounts to punishing people for having an accurate self-model, which seems like a bad way to promote truthseeking.
That’s the bare minimum I can expect. The problem is when people who don’t believe in astrology still take it upon themselves to make it a general social rule that you simply shouldn’t criticize any sufficiently dearly-held beliefs, because doing so hurts people’s feelings. That can tip the scales from a minority to a majority and establish norms that are in fact toxic in the long term. I actually remember some Twitter discourse from a while back about how love of astrology is feminine-coded, and therefore mocking astrology is in fact something men do to put women down, or something like that. That one is a bit of a ridiculous example, and not many people were going along with it, but there are bigger things (like the shift in attitudes towards religion and militant atheism) that matter more.
I don’t follow this bit, can you expand on it?
The world is a big place, so there are probably a few people out there who truly abhor all criticism that hurts people’s feelings regardless of who it’s directed at. But in my experience, the vast majority of the time, whether someone perceives a criticism as hurtful or out of bounds depends strongly on whether the perceiver likes, agrees with, or is affiliated with the target of the criticism. To take the atheism example, it seems to me there wasn’t an overall shift away from criticism of deeply-held beliefs in general, but rather a shift in the larger battle lines of the culture war.
On unintentional machiavellianism, see here.
There was a shift, but it’s defended and rationalized in the terms I presented. Regardless of how and why the shift happened, many people eventually do come to simply believe in the rationalization itself, even if it emerged (probably not intentionally, but via selection effects) to fit the new shape of the coalition that was pushing it.
This would be way easier to reason about with an example.
I feel like you’re probably talking about some specific situation, but without that it’s very unclear.
I think this phenomenon can be likened to strawmanning, since both include defense against an imagined version of the “actual meaning”. More exactly, I think it can be considered an instance of “subtext strawmanning”, since it probably came from applying exaggerations to the connotation of the criticism, using logic like “criticism ⇒ impolite ⇒ disrespectful ⇒ threatening ⇒ actual danger”.
In general, paying attention to the ways in which parties fallaciously interpret aspects of a discussion other than the actual logic seems like a useful thing to do.
This seems related to the “Argument is War” class of metaphors. They may have (subconsciously) thought that the consequences of war (i.e., danger) also apply to things like blogging, since the criticizing person has “attacked” their argument/post through their criticism. While fallacious, I don’t think such logic is absurd or implausible.
Good analysis of the dynamic the original quote is discussing. The authority figure is expecting ‘respect appropriate to their station’ and will give in return ‘respect appropriate to the other’s station’.
The non-authority expects to be able to reject the authority’s framework of respect and unilaterally decide on a new one.
The authority, quite naturally, does not take the lower status individual as their authority figure on the meta respect structure.
Unless the meta respect structure of multiple disconnected layers is imposed societally by high status decouplers (as most people are not much for decoupling, and decoupling is probably bad for your status a lot of the time), the authority figure is correct and the low status individual is wrong. Status is mostly bundled, respectful behaviour follows status and is mostly bundled, and demanding that a higher status individual lower themselves to their status behaviourally, by accepting the low status individual as an authority (or higher status) conceptually, is highly disrespectful of not only the hierarchy but the individual as well.
The word “unilaterally” is tendentious here. How else can it be but “unilaterally”? It’s unilateral in either direction! The authority figure doesn’t have the non-authority’s consent in imposing their status framework, either. Both sides reject the other side’s implied status framework. The situation is fully symmetric.
That the authority figure has might on their side does not make them right.
Who counts as the higher status individual is socially decided; communication doesn’t happen in a vacuum.
If you wish to not treat them as higher status, that’s leaving the social default.
You can call this “might”, but in fact it’s attempting to change the default context (according to society) to lower the other person’s position.
The norm that one should not do anything that threatens the social status of those with high social status is, of course, highly beneficial to those with high social status, which gives them an incentive to promulgate said norm; and, having high social status, they have the ability to do just that. This fully suffices to explain why the norm exists.
That is pure, unadulterated “might makes right”.
The norm is that you should give each person the treatment they deserve based on the social norms. A high status person treating a lower status person with less respect than is appropriate is exactly the same, except that they can often get away with it due to “might makes right”.
Similarly, stealing from rich people is pretty similar to stealing from poor people, and the fact that rich people will be protected from thieves (with violence if necessary) is a feature, not a bug.
That thieves don’t respect property rights does not make the rich protecting themselves with armed guards “might makes right”.
Downvoted because the title felt too much like clickbait.