Agree, good question.
I was going to say much the same. I think it kind of is a noncentral fallacy too, but not one that strikes me as problematic.
Perhaps I’d add that I feel the argument/persuasion being made by Eliezer doesn’t really rest on trying to import my valence towards “swindle” over to this. I don’t have that much valence towards a funny obscure word.
I guess it has to be said that it’s a noncentral noncentral fallacy.
Me: *makes joke*
Vaniver: I want you to post it on LessWrong so I can downvote it.
Yeah, granted that it’s going to be rough.
5x seems consistent with the raw activity numbers though. Eyeballing it, LW1 seems to have been 4x more active in terms of comments and commenters; the number of posts is pretty close.
Thanks, fixed! It’s a little bit repetitive with everything else I’ve written lately, but maybe I’m getting it clearer with each iteration.
I hereby proclaim that “feelings of safety” be shortened to “fafety.” The domain of worrying about fafety is now “fafety concerns.”
Problem solved. All in a day’s work.
I agree with this, but it’s quite a mouthful to deal with.
Yeah, but there’s a really big difference! You can’t give up that precision.
This seems rightish, but off in really important ways that I can’t articulate.
Nods. Also agree that “collective responsibility” is not the most helpful concept to talk about.
Note that this issue is explicitly addressed in the original dialogue. If someone’s feelings are hurting the discourse, they need to take responsibility for that just as much as I need to take responsibility for hurting their feelings.
Indeed, the fact that people can say “It feels like your need for safety is getting in the way of truth-seeking” is crucial for it to have any chance.
My expectation, based on related real-life experience, is that if invoking your need for safety is an option, there will be people who abuse it and use it to suck up a lot of time and attention. Technically someone could deny their claim and move on, but this will happen much later than optimal, and in the meantime everyone’s attention has been sucked into a great drama. Attempts to say “your safety is disrupting truth-seeking” get accused of being attempts to oppress someone, etc.
This is all imagining how it would go with typical humans. I’m guessing you’re imagining better-than-typical people in your org who won’t have the same failure mode, so maybe it’ll be fine. I’m mostly anchored on how I’d expect that approach to go if applied to most humans I’ve known (especially those really into caring about feelings, who’d be likely to sign up for it).
Thanks for the precise and nuanced write-up, and for not objecting to my crude attempt to characterize your position.
Nothing in your views as described here strikes me as gravely mistaken; it seems like a sensible norm set. I suspect that many of our disagreements appear once we attempt to be precise about which behaviors are acceptable and which are not, and how they are handled.
I agree that “aggression” is fuzzy and that simply causing negative emotions is certainly not the criterion by which to judge the acceptability of behavior. I used those terms to indicate/gesture rather than define.
I have a draft, Three ways to upset people with your speech, which attempts to differentiate between importantly different cases. I find myself looking forward to your comments on it once I finally publish it. I don’t think I would have said that a week ago, and I think that’s largely because I feel safer with you, which is in turn the result of greater familiarity (I’ve never been as active in the LW comments as in the last few weeks). I’m more calibrated about the significance of your words now, the degree of malice behind them (possibly not that much?), and even the defensible positions underlying them. I’ve also updated that it’s possible for us to have a pleasant and valuable exchange.
(I do not say these things because I wish to malign you with my prior beliefs about you, but because I think they’re useful and relevant information.)
Your warm response to my mentioning dream-meeting you made me feel warm (also learning your Myers Briggs type).
(Okay, now please forgive me for using all the above as part of an “argument”; I mean it all genuinely, but it seems to be a very concrete applied way to discuss topics that have been in the air of late.)
This gets us into some tricky questions that I can place in your framework. I think it will take us (+ all the others) a fair bit of conversation to answer them, but I’ll mention them here now to at least raise them. (Possibly I’m just saying this because I’m away this week and plan not to be online much.)
My updates on you (if correct) suggest that Said’s comments largely do not threaten me much, and I shouldn’t feel negative feelings as a result. Much of this is just how Said talks; he’s still interested in honest debate, not just shutting people down with hostile talk. But my question is about the “reality as it presents itself to me” you mentioned. The reality might be that Said is safe, but was I, given my priors and the evidence available to me before, wrong to be afraid before I gained more information about how to interpret him?
(Maybe I was, but this is not obvious.)
Is the acceptability of behavior determined by what the recipient reasonably could have believed (as judged by . . . ?) or by the actual reality? There are even three possibilities: 1) what I could have reasonably believed was the significance of your actions, 2) what you could have reasonably believed was the significance of your actions, 3) what the actual significance of your actions was (if this can even be defined sensibly).
It does seem somewhat unfair if the acceptability of your behavior is impacted by what I can reasonably believe. It also seems somewhat unfair that I should feel attacked because I reasonably lacked information.
How do we handle all this? I don’t definitively know. Judging what is acceptable/reasonable/fair and how all the different perspectives add up . . . it’s a mess that I don’t think gets better even with more attempts at precision. I mostly want to avoid having to judge.
This is in large part what intuitively pushes me towards wanting people to be proactive in avoiding misinterpretations and miscalibrations of others’ intent, so we don’t have to judge who was at fault. I want to give people enough info that they correctly know that even when I’m harsh, I still want them to feel safe. This mostly applies to people who don’t know me well. Once the evidence has accrued and you’re calibrated on what things mean, you require little “padding” (this is essentially my version of Combat culture), but you’ve got to accrue that evidence and establish the significance of actions with others first.
Phew, like everything else, that was longer than expected. I should really start expecting everything to be long.
Curious if this provides any more clarity on my position (even if it’s not persuasive) and curious where you disagree with this treatment.
Here are the couple of thousand words that fell out when I attempted to write up my thoughts re: safety and community norms.
Yes, definitely this. ^
Sadly, this doesn’t mean that the converse is true: sometimes they will feel that you’re just setting up strawmen of their fears when you are honestly trying to understand and making a genuine effort to verbalize your best guess of their fears.
Sadly, I’ve had that happen.
I don’t know what happened to LW1, but it did have pretty high intellectual generativity for a while.
I think Wei Dai said that too elsewhere. When each of you says intellectual generativity, do you mean the site as a whole (posts + discussions), or specifically that the discussions in the comments were more generative?
Another question: do you think you can quantitatively state some factor by which LW1 was more generative than LW2? If it was only 2x, that would suggest less generativity per person/comment than current LW, since old LW had much more than double the number of users and comments. If it was 10x, then LW1 was qualitatively better in some way.
(I’d expect the output to be a right-tailed distribution over individuals. LW2 could be less generative than LW1 because the top N users who produced 80% of the value left, so it’s not really about the raw number of users/comments.
The most interesting scenario would be if it were all the same people, but they were being less generative.)
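To make that heavy-tail point concrete, here’s a minimal toy sketch. The Pareto distribution, the shape parameter, and the user count are all made up purely for illustration, not measured from any LW data:

```python
import numpy as np

# Toy model: per-user "generativity" drawn from a heavy-tailed (Pareto)
# distribution. The shape parameter and user count are invented for
# illustration, not estimated from real LW data.
rng = np.random.default_rng(0)
generativity = np.sort(rng.pareto(a=1.2, size=2000) + 1)[::-1]  # descending

total = generativity.sum()
# Smallest number of top users whose combined output reaches 80% of the total.
top_n = int(np.searchsorted(np.cumsum(generativity), 0.8 * total)) + 1
print(f"Top {top_n} of {generativity.size} users produce 80% of the output")

# If just those top users leave, output craters while headcount barely moves.
remaining_share = generativity[top_n:].sum() / total
print(f"Output with them gone: {remaining_share:.0%} of the original")
```

Under assumptions like these, a site can keep most of its users and comments yet lose most of its generativity, which is why the raw activity numbers alone can’t settle the 2x-vs-10x question.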
Some Thoughts on Communal Discourse Norms
I started writing this in response to a thread about “safety”, but it got long enough to warrant breaking out into its own thing.
I think it’s important that people not be attacked physically, mentally, or socially. I have a terminal preference for this, but I also think it’s instrumental towards truth-seeking activities. In other words, I want people to actually be safe.
I think that when people feel unsafe and have defensive reactions, this makes their ability to think and converse much worse. It can push discussion from truth-seeking exchange to social war.
Here I think mr-hire has a point: if you don’t address people’s “needs” overtly, they’ll start trying to meet them covertly, e.g. trying to win arguments for the sake of protecting their reputation rather than trying to get to the truth. Doing things like writing hasty, scathing replies rather than slow, carefully considered ones (*raises hand*), and, worse, feeling righteous anger while doing so. Having thoughts like “the only reason my interlocutor could think X is because they are obtuse due to their biases” rather than “maybe they have a point I don’t fully appreciate” (*raises hand*).
I want people to not be harmed, and also to feel like they won’t be harmed (but in a truth-tracking way: if you’re likely to be attacked, you should believe it). I also think that protective measures are themselves extremely risky for truth-seeking. There are legitimate fears here: a) people can use the protections to silence things they don’t like hearing, b) it may be onerous, and stifle honest expression, to have to constrain one’s speech, c) fear of being accused of harming others stifles expression of true ideas, d) these protections will get invoked in all kinds of political games.
I think the above are real dangers. I also think it’s dangerous to have no protections against people being harmed, especially if they’re not even allowed to object to being harmed. In such an arrangement, it becomes too easy to abuse the “truth-seeking free speech” protections to socially attack and harm people while claiming impunity. Some of the space’s truth-seeking ability is lost to its becoming partly a vicious social arena.
I present the Monkey-Shield Allegory (from an unpublished post of mine):
Take a bunch of clever monkeys who like to fight with each other (perhaps they throw rocks). You want to create peace between them, so you issue each of them a nice metal shield which is good at blocking rocks. Fantastic! You return the next day, and you find that the monkeys are hitting each other with the metal shields (turns out if you whack someone with a shield, their shield doesn’t block all the force of the blow, and it’s even worse than fighting with rocks).
I find it really non-obvious what the established norms and enforced policies should be. I have guesses, including a proposed set of norms which are being debated in semi-private and should be shared more broadly soon. Separate from that question, I have somewhat more confidence in the following points and what they imply for individuals.
1. You should care about other people and their interests. Their feelings are 1) real and valuable, and 2) often real information about states of the world that matter for their wellbeing. Compassion is a virtue.
Even if you are entirely selfish, understanding and caring about other people is instrumentally advantageous for your own interests and for the pursuit of truth.
2. Even failing 1, you should try hard to avoid harming people (i.e. attacking them) and only do so when you really mean to. It’s not worth harming someone by accident.
3. I suspect many people of possessing deep drives to always be playing monkey-political games, and these cause them to want to win points against each other however they can. Ways to do that include being aggressive, insulting people, baiting them, and all the standard behaviors people engage in on online forums.
These drives are anti-cooperative, anti-truth, and zero-sum. I basically think they should be inhibited, and instead people should cultivate compassion and the ability to connect.
I think people acting in these harmful ways often claim their behaviors are fine by attributing them to some more defensible cause. I think there are defensible reasons for some behaviors, but I get really suspicious when someone consistently behaves in a way that doesn’t further their stated aims.
People getting defensive are often correctly perceiving that they are being attacked by others. This makes me sympathetic to many cases of people being triggered.
4. Beyond giving up on the monkey-games, I think that being considerate and collaborative (including the meta-collaboration within a Combat culture) costs relatively little most of the time. There might be some upfront costs to changing one’s habits and learning to be sensitive, but in the long run the value pays off many times over in terms of being able to have productive discussions where no one is getting defensive; plus it seems intrinsically better for people to be having a good time. Pleasant discussions provoke more pleasant discussions, etc.
* I am not utterly confident in the correctness of 4. Perhaps my brain devotes more cycles to being considerate and collaborative than I realize (this has slowly ramped up over the years), and it costs me real attention that could go directly to object-level thoughts. Despite the heavy costs, maybe it is just better to not worry about what’s going on in other people’s minds and not expend effort optimizing for it. I should spend more time trying to judge this.
5. It is good to not harm people, but it is also good to build one’s resilience and “learn to handle one’s feelings.” That is just plainly an epistemically virtuous thing to do. One ought to learn how to become defensive less often and also how to operate sanely and productively while defensive. Putting all responsibility for your psychological state onto others is damn risky. Also: 1) people who are legitimately nasty sometimes still have stuff worth listening to, and you don’t want to give up on that; 2) sometimes it won’t be the extraneous monkey-attack stuff that is upsetting but the core topic itself, and you want to be able to talk about that; 3) misunderstandings arise easily, and it’s easy to feel attacked when you aren’t being; some hardiness protects against misunderstandings rapidly spiralling into defensiveness and demon threads.
6. When discussing topics online, in text, and with people you don’t know, it’s very easy to be miscalibrated about intentions and the meaning behind words (*raises hand*). It’s easy for there to be perceived attacks even when no attacks are intended (this is likely the result of a calibrated prior on the prevalence of social attacks).
a. For this reason, it’s worth being a little patient and forgiving. Some people talk a bit sarcastically to everyone (which is maybe bad), but it’s not really intended as an attack on you. Or perhaps they were plainly critical, but they were just trying to help.
b. When you are speaking, it’s worth a little extra effort to signal that you’re friendly and don’t mean to attack. Maybe you already know that and couldn’t imagine otherwise, but a stranger doesn’t. What counts as an honest signal of friendly intent is anti-inductive: if we declare it to be something simple, the ill-intentioned will imitate it by rote, go about their business, and the signal will lose all power to indicate friendliness. But there are lots of cheap ways to indicate that you’re not attacking, that you have “good will.” I think they’re worth it.
In established relationships where the prior has become high that you are not attacking, less and less effort needs to be expended on signalling your friendly intent, and you can talk plainly, directly, and even a bit hostilely (in a countersignalling way). This is what my ideal Combat culture looks like, but it relies on having an established prior and common knowledge of friendliness. I don’t think it works to just “declare it by fiat.”
I’ve encountered pushback when attempting 6b. I’ll derive two potential objections (which may not be completely faithful to those originally raised):
Objection 1: No one should be coerced into having to signal friendliness/maintain someone else’s status/generally worry about what impact their saying true things will have. Making them worry about it impedes the ability to say true things which is straightforwardly good.
Response: I’m not trying to coerce anyone into doing this. I’m trying to make the case that you should want to do this of your own accord. That this is good and worth it and in fact results in more truth generation than otherwise. It’s a good return on investment. There might be an additional fear that if I promote this as virtuous behavior, it might have the same truth-impeding effects as if it were policy. I’m not sure; I have to think about that last point more.
Objection 2: If I have to signal friendly intent when I don’t mean it, I’d be lying.
Response: Then don’t signal friendly intent. I definitely don’t want anyone to pretend or go through the motions. However, I do think you should probably be trying to have honestly friendly intent. I expect conversations with friendly intent to be considerably better than those without (this is something of a crux for me here), so if you don’t have it towards someone, that’s really unfortunate, and I am pessimistic about the exchange. Barring exceptional circumstances, I generally don’t want to talk to people who do not have friendly intent/desire to collaborate (even just at the meta-level) towards me.

What do I mean by friendly intent? I mean that you don’t have goals to attack, win, or coerce. It’s an exchange intended for the benefit of both parties, where you’re not acting in a hostile way. I’m not pretending to discuss a topic with you when actually I think you’re an idiot and want to demonstrate it to everyone; I’m not trying to get an emotional reaction for my own entertainment; I’m not just trying to win with rhetoric rather than actually exposing my beliefs and cruxes; if I’m criticizing, I’m not just trying to destroy you; etc. As above, many times this isn’t evident, and it’s worth trying to signal its presence.
If it’s absent, i.e. you actually want to remove someone from the community or think everyone should disassociate from them, that’s sometimes very necessary. In that case, you don’t have friendly intent, and that’s good and proper. Most of the time, though (as I will argue), you should have friendly intent and should be able to honestly signal it. I should probably elaborate and clarify my notion of friendly intent further.
There are notions related to friendly intent, like good faith, respecting your conversation partner, thinking you might update based on what they say, etc. I haven’t discussed them, but should.
I think mr-hire thinks the important success condition is that people feel safe and that it’s important to design the space towards this goal, with something of a collective responsibility for the feelings of safety of each individual.
I think Said thinks that individuals bear full responsibility for their feelings of safety, and that it’s actively harmful to make these something the group/space has to worry about. I think Said might even believe that “social safety” isn’t even important for the space, i.e., it’s fine if people actually are attacked in social ways, e.g. reputationally harmed, caused to be punished by the group, made to experience negative feelings due to aggression from others.
If I had to choose between my model of mr-hire’s preferred space and my model of Said’s preferred space, I think I would actually choose Said’s. (Though I might not be correctly characterizing either; I wanted to state my prediction before I asked, to test how successfully I’m modeling others’ views.)
When it comes to truth-seeking, I’d rather err on the side of people getting harmed a bit and having to do a bunch of work to “steel” themselves against the “harsh” environment than give individuals such a powerful tool (the space being responsible for their perception of being harmed) to disrupt and interfere with discourse. I know that’s not the intended result, but it seems too ripe for abuse to give feelings and needs the primacy I think is being given in the OP scenario. It’s something like an unachievable utopia: it sounds good, but I am very doubtful it can be done while also remaining a truth-seeking space.
[Also Said, I had a dream last night that I met you in Central Park, NY. I don’t know what you look or sound like in person, but I enjoyed meeting my dream version of you.]
I think the question of “what is safety?” is a really good one. I’ll write up some thoughts here, both for this thread and to refer to generally (hence the extra length).
Safety is when a particular circumstance doesn’t trigger that reaction,
I’m not a fan of that definition. It’s equating “feelings of safety” with “actual safety.”
It’s defining safety as the absence of the response to perceived unsafety. It feels equivalent to saying “sickness is the thing your immune system fights, and health is the absence of your immune system being triggered to fight something.” Which is very approximately true, but breaks down when you consider autoimmune disorders. With those, it’s the mistaken perception of attack which is the very problem.
This definition can also put a lot of the power in the hands of those who are having a reaction. If we all agree that our conversation must be safe, and that any individual can declare it unsafe because they are having a reaction, this gives a lot of power to individuals to force attention onto the question of safety (and, I fear, too asymmetrically, with others being blamed for causing the feelings of unsafety).
So here’s the alternative positive account of “safety” I would give:
One *is* safe if one is unlikely to be harmed; one *feels* safe if one believes (S1 and/or S2) that one won’t be harmed.
This accords with the standard use of safety, e.g. safety goggles, safety precautions, safe neighborhood, etc.
In conversation, one can be “harmed socially”, e.g. being excluded from the group, being “punished” by the group, being made to look bad or stupid (with consequences for how they are treated), having someone act hostilely or aggressively towards them (which risks a strong negative experience even if they S2-believe it won’t come to any physical or lasting harm), etc. (this is not a carefully developed or complete list).
So in conversation and social spaces, safety equates to not being likely to be harmed in the above ways.
Much the same defenses that activate when feeling under physical threat also come online when feeling under social threat (for indeed, both can be very risky to a human). These are physiological states, fight or flight, etc. How adaptive these are in the modern age . . . more than 0, less than 1 . . . ? Having these responses indicates that some part of your mind perceives threat, the question being whether it’s calibrated.
On the question of the space: a space can be perceived to have a higher or lower risk of harm to individuals (safety), and also a higher or lower risk of harm attached to taking specific actions, e.g. saying certain things.
With this definition, we can separately evaluate the questions of:
1) Are people actually safe vs likely to be harmed in various ways?
2) Are the harms people are worried about actually legitimate harms to be worried about?
3) Are people correct to be afraid of being harmed, that is, to feel unsafe?
4) Who should be taking action to cause people to feel safe? How is responsibility distributed between the individual and the group?
5) How much should the group/community worry about a) actual safety, and b) perceived safety?
I’m interested in how different people answer these questions generally and in the context of LessWrong.
And also the vehemence with which these viewpoints seemed to be held and defended.
I agree there’s something like vehemence and it’s made all the conversations unpleasant and stressful. Someone countered to me that if you perceive someone to be threatening the very integrity of your ability to have conversations, it’s appropriate to break frame and get up in arms. I’m not convinced it’s warranted here, but maybe...
“Tone and degree of charity are very important too” is a perspective I’d like to see represented more among LW users. (But if I’m in the minority, that’s fine and I don’t object to communities keeping their defining features if the majority feels that they are benefitting.)
I’m not sure about the exact proportion of people’s perspectives. There definitely is a cluster of people (myself included) who think “tone”, etc. are significant. (This group also might be more averse to getting into online conflicts.) I’m also concerned about the number of people who would counterfactually engage more on LessWrong, except they dislike the conversations they’ll end up in currently.
There are a bunch of conversations going on about the topic right now (some in semi-private which might be public soonish). There’s support (at least on the LW team) for an Archipelago-type solution where people can opt in to one of 2 or 3 norm sets. (Though that doesn’t quite fix site-level things like the karma notifier settings.) One of those spaces should have much more “civility.”
Maybe I expressed it poorly, but what I meant was just that rationality is not an end in itself.
Yeah, that’s reasonable. I think that many people, while agreeing with that (or something close to it), get very afraid as soon as someone says it, because they fear it’s going to be used to justify distinctly not-rational behavior and damage the whole endeavor of being rational. I have some of this fear myself.
It seems to me that rationality is extremely fragile and vulnerable, such that even though rationality might serve other goals, you have to be very uncompromising with regard to it, especially on core things like not hiding information from yourself (I was lightly opposed to the negative-karma hiding myself), even if that has apparent costs.
But it’s hard. I think there are tricky questions to answer, but the conversation can currently be civil/happen without vehemence.
Overall great post, thanks! Much I agree with, but a few things stick out.
By bringing this topic up so much, you’re putting your needs above the needs of others you’re interacting with and the group, instead of bringing it up less frequently, which would be placing the needs on equal ground.
The competing needs frame feels off to me. I think this is why (but I haven’t thought about it at length):
Balancing between everyone’s needs makes sense if the point of the group/community is for people to come together and assist each other in meeting their individual needs. But I think that’s very often not the point of a group/community.
In many cases (including the rationality community/LW), the point is to come together towards some joint objective. Raemon would call this building a product together. When you’re building a product, it’s not about my needs vs your needs, it’s about which actions will actually lead to a successful product.
It doesn’t make sense to say “we should balance between my need for website minimalism and your need for information density”, but rather “we need to answer which of these is actually better for the product.”
If I think insufficient minimalism is going to kill the product, it makes sense that I want to keep talking about that until I convince you or you convince me.
In the context of your examples (which seem to be about the EA/rationality community), the “product” we’re building together is very nebulous. Maybe it’s “a truth-seeking community/true knowledge” or “an optimal world.” So John might not be repeatedly mentioning veganism because it’s his need, but because he believes veganism is crucially important to the success of the entire joint project + everyone else’s values/goals. He might be arguing: we need to talk about this for all of our sakes, not just mine.
Obviously, there need to be good ways of allocating group attention between the different things that different people think are imperative for success, and good ways of handling persistent disagreements, etc. If 9 out of 10 people have heard my arguments (repeatedly) and are still against minimalism, I should possibly accept that or leave (though first I might have a conversation with Jill). If I’m being unproductive and uncooperative/coercive in my desire to talk about a thing repeatedly, in a way that harms group cooperation and health, it’s probably necessary for Jill to have a chat with me, etc., similar to the picture you painted.
As a small data point: there have been at least three instances in the past ~three months where I was explicitly noticing certain norm-promoting behavior in the rationalist community (and Lesswrong in particular) that I found off-putting, and “truth-seeking over everything else” captures it really well.
Can you clarify which bit was off-putting? The fact that any norms were being promoted or the specific norms being promoted?
If the former, I think it’s actually important that a community debates and determines its norms, and that members enforce those norms. It overall seems healthy to me that norms are being discussed a lot at present (even if not all the discussion happens in accordance with the norms I’d advocate).
“It is rational to do x regardless of how it affects people’s quality of life and productivity” should never be an argument.
That doesn’t feel true to me. Specific examples don’t spring to mind, but I can’t endorse that as a categorical statement in the abstract. People’s quality of life and productivity (in the short term) aren’t sacred enough to me to never be outweighed in any circumstance.
Edit: I hadn’t read Zack’s long reply when making this comment, so it wasn’t factored into it. Likely would have said something very slightly different if I had.
Entirely fair of you to make the meta-note. Data point from me: I actually found the question/answer pairs quite helpful + think they’re reasonable; I probably could have generated answers for a system I set up myself, but I haven’t absorbed your proposal well enough to do so on your behalf.
Actually, something generally helpful to hear is the “it’s highly context-specific” answer. That seems true and a good answer. I think I would have tried to specify some overarching principle for all these cases, and done so poorly.
Treading carefully, I’ll say that I can’t speak to the motivations/attitudes behind the questions, and I thought the wording in the other question wasn’t very good, but both questions themselves seem good to me.
[Attempt to engage with your comment substantively]
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
Yeah, I think that’s a good recommendation, and it’s helpful to hear it. I think it’s really excellent if someone says “I think you’re saying X, which seems silly to me; can you clarify what you really mean?” In Double-Cruxes, that is ideal, and my inner sim says it goes down well with everyone I’m used to talking with. Though it seems quite plausible others don’t share that, and I should be more proactive + know that I need to be careful in how I go about doing this move. Here I felt very offended/insulted by the view I thought was being confidently assigned to me, which I let mindkill me. :(
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this.
I’m not sure how to measure, but my confidence interval feels wide on this. I think there probably isn’t any big disagreement between us here.
It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
If this means “talking about someone’s motivations for saying things,” I agree with you that it’s very important for a rationality space to be able to do that. I don’t see it as a nuclear option, not by far. I’d hope that people would often respond very well to it: “You know what? You’re right, and I’m really glad you mentioned it. :)”
I have more thoughts on my exchange with Zack, though I’d want to discuss them only if it really made sense to, and carefully. I think we have some real disagreements about it.
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.