There are sort of two parts to this, but they overlap and I haven’t really teased them apart, so sorry if this is a bit muddled.
I think there’s a tension between information and adherence-to-norms.
Sometimes we have a rude thought. Like, it’s not just that its easiest expression is rude; the thought itself is fundamentally rude. The most central example, imo, is when you genuinely think that somebody is wrong about themselves or their own thought processes, engaged in self-deception, or in the grip of a blind spot. When your best hypothesis is that you actually understand them better than they understand themselves.
It’s not really possible to say that in a way that doesn’t contain the core sentiment “I think I know better than you,” here. You can do a lot of softening the blow, you can do a lot of hedging, but in the end, you’re either going to share your rude information, or you are going to hide your rude information.
Both LW culture and Duncan culture have a strong, endorsed bias toward making as much information shareable as possible.
Duncan culture, at least (no longer speaking for LW) also has a strong bias toward doing things which preserve and strengthen the social fabric.
(Now we’re into part two.)
If I express a fundamentally rude thought, but I do so in a super careful hedged and cautious way with all the right phrases and apologies, then what often happens is that the other person feels like they cannot be angry.
They’ve still been struck, but they were struck in a way that causes everyone else to think the striking was measured and reasonable, and so if they respond with hurt and defensiveness, they’ll be the one to lose points.
Even though they were the one who was “attacked,” so to speak.
A relevant snippet from another recent comment of mine:
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
So, as I see it, the value in “I admit this is bad but I’m going to do the bad thing” is sort of twofold.
One, it allows people to share information that they would otherwise be prevented from sharing, including “prevented by not having the available time and energy to do all of the careful softening and hedging.” Not everyone has the skill of modeling the audience and speaking diplomatically, and there’s value in giving those people a path to saying their piece, but we don’t want to abandon norms of politeness and so an accepting-of-the-costs and a taking-of-lumps is one way to allow that data in.
And two, it removes barriers in the way of appropriate pushback. By acknowledging the rudeness up front, you embolden the people who were offended to be offended in a way that will tend to delegitimize them less. You’re sort of disentangling your action from the norms. If you just say a rude thing and defend it because “whatev, it’s true and justified,” then you’re also incrementally weakening a bunch of structures that are in place to protect people, and protect cooperation. But if you say something like “I am going to say a thing that deserves punishment because it’s important to say, but then also I will accept the punishment,” you can do less damage to the idea that it’s important to be polite and charitable in the first place.
tension between information and adherence-to-norms
This mostly holds for information pertaining to norms. Math doesn’t need controversial norms; there is no tension there. Beliefs/claims that influence transmission of norms are themselves targeted by norms, to ensure systematic transmission. This is what anti-epistemology is: it’s doing valuable work in instilling norms, including norms for perpetuating anti-epistemology.
So the soft taboo on politics is about not getting into a subject matter that norms care about. And the same holds for interpersonal stuff.
For both my own thought and in high-trust conversations I have a norm that’s something like “idea generation before content filter” which is designed to allow one to think uncomfortable thoughts (and sometimes say them) before filtering things out. I don’t have this norm for “things I say on the public internet” (or any equivalent norm). I’ll have to think a bit about what norms actually seem good to me here.
I think I can be on board with a norm where one is willing to say rude or uncomfortable things provided they’re (1) valuable to communicate and (2) one makes reasonable efforts to nevertheless protect the social fabric and render the statement receivable to the person to whom it is directed. My vague sense of comments of the form “I know this is uncharitable/rude, but [uncharitable/rude thing]” is that, more than half of the time, the caveat insulates the poster from criticism and does not meaningfully protect the social fabric or help the person to whom the comments are directed, though I haven’t read such comments carefully.
In any case, I now think there is at least a good and valid version of this norm that should be distinguished from abuses of the norm.
Happy to try.
OK, excellent this is also quite helpful.