People feel “safe” when their interests aren’t being threatened. (Usually the relevant interests are social in nature; we’re not talking about safety from physical illness or injury.) This is relevant to the topic of what discourse norms support intellectual progress, because people who feel unsafe are likely to lie, obfuscate, stonewall, &c. as part of attempts to become more safe. If you want people to tell the truth (goes the theory), you need to make them feel safe first.
I will illustrate with a hypothetical but realistic example. Sometimes people write a comment that seems to contradict something they said in an earlier comment. Suppose that on Forum A, other commenters who notice this are likely to say something like, “That’s not what you said earlier! Were you lying then, or are you lying now, huh?!” but that on Forum B, other commenters are likely to say something like, “This seems in tension with what you said earlier; could you clarify?” The culture of Forum B seems better at making it feel “safe” to change one’s mind without one’s social interest in not-being-called-a-liar being threatened.
I’m sure you can think of reasons why this illustration doesn’t address most appeals to “safety” on this website, but you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Interpretive Labor. (You don’t believe in interpretive labor, but Ray doesn’t believe in answering all of Said’s annoying questions, so it’s my job to fill in the gap.)
In this case Forum B has a better culture than Forum A. People might change their mind, have nuanced opinions, or similar. It is only when people fail to engage with the point of the contradiction or give a nonsensical response that accusations of lying seem appropriate, unless one already has evidence that the person is a liar.
The culture of Forum B seems better at making it feel “safe” to change one’s mind without one’s social interest in not-being-called-a-liar being threatened.
Hmm, I see. That usage makes sense in the context of the hypothetical example. But—
I’m sure you can think of reasons why this illustration doesn’t address most appeals to “safety” on this website
… indeed.
you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Intepretive Labor
Thanks! However, I have a follow-up question, if you don’t mind:
Are you confident that one or more of the usages of “safe” which you described (of which there were two in your comment, by my count) was the one which Raemon intended…?
I think I’ll go up to 85% confidence that Raemon will affirm the grandparent as a “close enough” explanation of what he means by safe. (“Close enough” meaning, I don’t particularly expect Ray to have thought about how to reduce the meaning of safe and independently come up with the same explanation as me, but I’m predicting that he won’t report major disagreement with my account after reading it.)
It’s similar (I definitely felt it was a good faith attempt and captured at least some of it).
But I think the type-signature of what I meant was more like “a physiological response” than like “a belief about what will happen”. I do think people are more likely to have that physiological response if they feel their interests are threatened, but there’s more to it than that.
Here are a few examples worth examining:
1. On a public webforum, Alice (a medium-high-ish status person, say) makes a comment that (a) threatens Bob’s interests, and (b) indicates they don’t understand that they have threatened Bob’s interests (so they aren’t even tracking it as a cost/concern).
2. Same as #1, but Alice does convey that they understand Bob’s interests, and that they think in this case it’s worth sacrificing them for some other purpose.
3. Same as #1, but on a private Slack channel (where Bob doesn’t viscerally feel that the thing is likely to immediately spiral out of control).
4. Same as #1, but in a cozy cabin with a fireplace, or maybe outdoors near some beautiful trees and a nice stream or something.
5. Same as #4, but the conversation by the fireplace is being broadcast live to the world.
6. Same as #4 (threatening, not understanding, but by a nice stream), but in this case Alice is high status, and specifically states an explicit plan they intend to follow through on, even though right now the conversation is technically private and Bob has a chance to respond.
7. We’re back on a public webforum: Alice is high status, announcing a credible threatening plan, and doesn’t seem to understand Bob right now. But there is a history of people on the webforum trying to understand where others are coming from, having some (limited) budget for listening when people say “hey man, you’re threatening my interests” until they at least understand what those interests are, and some tradition of looking for third options that accomplish Alice’s original goal while threatening Bob less. There is also some being-on-the-same-page-ness about everyone’s goals (which might include “we all care about truth, such that it’s in our interests to get criticized for being wrong even if it’d, say, hurt our chances of getting grant money”). This might further include some history of understanding that people gain status rather than lose status when they admit they’re wrong, etc.
I’d probably expect #1–#4 to be in ascending order of safety-feeling and “safety-thinking”. #5, #6, and #7 are each a bit of a wildcard that depends on the individual person. I expect a moderate number of people to feel that Alice is “more threatening” in an objective sense, but to nonetheless not feel as much of a triggered fight-or-flight or political response.
#7 is sort of imaginary right now and I’m not quite sure how to operationalize all of it, but it’s the sort of thing I’m imagining going in the direction of.
But when I talk about prioritizing “feelings of safety”, the thing I’m thinking about at the group level is “can we have conversations about people’s interests being threatened, without people entering into physiological fight-or-flight/defensive/tribal mode?”
There are a bunch of further complications: people have competing access needs about what makes them feel safe; some things that make some people feel safe are costly to varying degrees for other people; and these costs are not transparent.
(I do not currently have a strong belief about what exactly is right here, but these are terms in the equation I’m thinking about.)
In cases where these physiological responses are not truth-tracking, surely the correct remedy is to rectify that mismatch, not to force the people to whose words the responses are responding to speak and write differently…?
In other words, if I say something and you believe that my words somehow put you in some sort of danger (or, threaten your interests), or that my words signal that my actions will have such effects, then that’s perhaps a conflict between us which it may be productive for us to address.
On the other hand, if you have some sort of physiological response or feeling (aside: the concept of an alief seems like a good match for what you’re referring to, no?) about my words, but you do not believe that feeling tracks the truth about whether there’s any threat to you or your interests[1]… then what is there to discuss? And what do I have to do with this? This is a bug, in your cognition, for you to fix. What possible justification could you have for involving me in this? (And certainly, to suggest that I am somehow to blame, and that the burden is on me to avoid triggering such bugs—well, that would be quite beyond the pale!)
[1] The second clause is necessary, because if you have a “physiological response” but you believe it to be truth-tracking—i.e., you also have a belief of threat and not just an alief—then we can (and should) simply discuss the belief, and have no need even to mention the “feeling”.
I think a truth-tracking community should do whatever is cheapest/most effective here. (Which I think includes both people learning to deal with their physiological responses on their own, and also learning not to communicate in ways that predictably cause certain physiological responses.)
What’s in it for me?

Suppose I’ve never heard of this—troop-tricking comity?—or whatever it is you said.
Sell me on it. If I learn not to communicate in a way that predictably causes certain physiological responses, like your co-mutiny is asking me to do, what concrete, specific membership benefits does the co-mutiny give me in return?
It’s got to be something really good, right? Because if you couldn’t point to any benefits, then there would be no reason for anyone to care about joining your roof-tacking impunity, or even bother remembering its name.
This sort of “naive utilitarianism” is a terrible idea for reasons which we are (or should be!) very well familiar with.