I do not reach the point of telling the...humans I know that they’re e.g. dumb or wrong or sick or confused
If you’ll allow me, I would like to raise a red-flag alert at this sentence. It seems poorly worded at best, and in worse scenarios indicative of some potentially-bad patterns of thought.
Presumably, as a member of a community of aspiring rationalists, not to mention the staff of CFAR, telling the people you know when (you think) they’re wrong or confused is, or should be...your daily bread. (It goes without saying that this extends to noticing your own confusion or wrongness, and encouraging others to notice it for you when you don’t; the norm, as I understand it, is a cooperative one).
Telling people when they might be sick is (if you’ll forgive me) hardly something to sneeze at, either. They might want to visit a doctor. Health is, for understandable reasons, generally considered important. (This includes mental health.)
As for dumb, well, I simply doubt that comes up often enough to make the statement meaningful. Whatever may be said about the rationalist community, it does not appear to draw its membership disproportionately from those of specifically low intelligence. Your acquaintances—whatever their other characteristics—probably aren’t “dumb”, so to tell them they are would simply be to assert a falsehood.
So: may I be so bold as to suggest either a reformulation of the thought you were trying to express, or even a reconsideration of the impulse behind it, in the event that the impulse in question wasn’t actually a good one?
This is a fair point. I absolutely do hold as my “daily bread” letting people know when my sense is that they’re wrong or confused, but it becomes trickier when you’re talking about very LARGE topics that represent a large portion of someone’s identity, and I proceed more carefully because of both a) politeness/kindness and b) a greater sense that the other person has probably thought things through.
I don’t have the spoons to reformulate the thought right now, but I think your call-out was correct, and if you take it on yourself to moderately steelman the thing I might have been saying, that’ll be closer to what I was struggling to express. The impulse behind making the statement in the first place was to try to highlight a valuable distinction between pumping against the zeitgeist/having idiosyncratic thoughts, and just being a total jerk. You can and should try to do the former, and you can and should try to avoid the latter. That was my main point.
Here’s what it looks like to me, after a bit of reflection: you’re in a state where you think a certain proposition P has a chance of being true, which it is considered a violation of social norms to assert (a situation that comes up more often than we would like).
In this sort of situation, I don’t think it’s necessarily correct to go around loudly asserting, or even mentioning, P. However, I do think it’s probably correct to avoid taking it upon oneself to enforce the (epistemically-deleterious) social norm upon those weird contrarians who, for whatever reason, do go around proclaiming P. At least leave that to the people who are confident that P is false. Otherwise, you are doing epistemic anti-work, by systematically un-correlating normative group beliefs from reality.
My sense was that you were sort of doing that above: you were seeking to reproach someone for being loudly contrarian in a direction that, from your perspective (according to what you say), may well be the right one. This is against your and your friends’ epistemic interests.
(A friendly reminder, finally, that talk of “being a total jerk” and similar is simply talk about social norms and their enforcement.)
I was not aiming to do “that above.” To the extent that I was/came across that way, I disendorse, and appreciate you providing me the chance to clarify. Your models here sound correct to me in general.
Your comment was perfectly fine, and you don’t need to apologize; see my response to komponisto above for my reasons for saying that. Apologies on my part as there’s a strong chance I’ll be without internet for several days and likely won’t be able to further engage with this topic.
Duncan’s original wording here was fine. The phrase “telling the humans I know that they’re dumb or wrong or sick or confused” is meant in the sense of “socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect”.
To put it another way, my view is that Duncan is trying to refrain from adopting behavior that lumps in values (boo trans people) with claims (trans people disproportionately have certain traits). I think that’s a good thing to do for a number of reasons, and have been trying to push the debate in that direction by calling people out (with varying amounts of force) when they have been quick to slip in propositions about values into their claims.
I’m frustrated by your comment, komponisto, since raising a red-flag alert, saying that something is poorly worded at best, and making a number of subtler negative implications about what someone has written are all ways of socially discouraging them from doing something. I think that Duncan’s comment was fine, and I certainly think he didn’t need to apologize for it. And I’m fucking appalled that this conversation as a whole has managed, all at once, to promote slipping value propositions into factual claims, to indirectly encourage social rudeness, and then to successfully assert in social reality that a certain type of overtly abrasive, value-loaded proposition-making is more cooperative and epistemically useful than a more naturally kind style of non-value-loaded proposition-making, all without anyone actually remarking on it.
Your principal mistake lies here:
socially punishing them by making claims in a certain way, when those claims could easily be made without having that effect
Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term. Moreover, the cost is not the same for everyone: for some people “diplomatic” communication comes much more naturally than for others. As I indicate in another comment, this often has to do with status: the higher someone’s status, the less necessary directness becomes, because people are already preoccupied with mentally modeling them.
I’m frustrated by your comment, komponisto
If we’re engaging in disclosures of this sort, I have felt similarly about many a comment of yours, not least the one to which I am replying. In your second paragraph, for example, you engage in passive aggression by deceptively failing to acknowledge that the people you are criticizing would accuse you of the exact same sin you accuse them of (namely, equating “trans people disproportionately have certain traits” and “boo trans people”). That’s not a debate I consider myself to be involved in, but I do, increasingly, feel myself to be involved in a meta-dispute about the relative importance of communicative clarity and so-called “niceness”, and in that dispute, come down firmly on the side of communicative clarity—at least as it pertains to this sort of social context.
I read your comment as a tribal cheer for the other, “niceness”, side, disingenuously phrased as if I were expected to agree with your underlying assumptions, despite the fact that my comments have strongly implied (and now explicitly state) that I don’t.
Putting communication through a filter imposes a cost, which will inevitably tend to discourage communication in the long term.
As does allowing people to be unduly abrasive. But on top of that, communities where conversations are abrasive attract a lower caliber of person than one where they aren’t. Look at what happened to LW.
Moreover, the cost is not the same for everyone
It’s fairly common for this cost to go down with practice. Moreover, it seems like there’s an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.
I’m not necessarily claiming that you or any specific person is acting this way; I’m just saying that this incentive gradient exists in this community, and economically rational actors would be expected to follow it.
communicative clarity and so-called “niceness”
That’s a horrible framing. Niceness is sometimes important, but what really matters is establishing a set of social norms that incentivize behaviors in a way that leads to the largest positive impact. Sometimes that involves prioritizing communicative clarity (say, when suggesting that some EA organizations are less effective than previously thought), and sometimes it involves penalizing people for acting on claims they’ve made to others’ emotional resources (reprimanding someone for being rude when that rudeness could reasonably have been expected to hurt someone and was entirely uncalled for). Note that the set of social norms used by normal folks would have gotten both of these cases mostly right, and we tend to get them both mostly wrong.
communities where conversations are abrasive attract a lower caliber of person than one where they aren’t. Look at what happened to LW.
To whatever extent this is accurate and not just a correlation-causation conversion, this very dynamic is the kind of thing that LW exists (existed) to correct. To yield to it is essentially to give up the entire game.
What it looks like to me is that LW and its associated “institutions” and subcultures are in the process of dissolving and being absorbed into various parts of general society. You are basically endorsing this process, specifically the aspect wherein unique subcultural norms are being overwritten by general societal norms.
The way this comes about is that the high-status members of the subculture eventually become tempted by the prospect of high status in general society, and so in effect “sell out”. Unless previously-lower-status members “step up” to take their place (by becoming as interesting as the original leaders were), the subculture dies, either collapsing due to a power vacuum, or simply by being memetically eaten by the general culture as members continue to follow the old leaders into (what looks like) the promised land.
Moreover, it seems like there’s an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.
I agree that the incentives you describe exist, but the analysis cuts both ways: the more someone claims to have been harmed by allegedly-nasty speech, the more the balance of discussion will reward them by letting them restrict speech while reaping the rewards of getting to achieve their political and interpersonal goals with those speech restrictions.
Interpersonal utility aggregation might not be the right way to think of these kinds of situations. If Alice says a thing even though Bob has told her that the thing is nasty and that Alice is causing immense harm by saying it, Alice’s true rejection of Bob’s complaint probably isn’t, “Yes, I’m inflicting c units of objective emotional harm on others, but modifying my speech at all would entail c+1 units of objective emotional harm to me, therefore the global utilitarian calculus favors my speech.” It’s probably: “I’m not a utilitarian and I reject your standard of decency.”
In most cases, calling someone sick when they suffer from a mental health issue isn’t the best way to get them to seek professional help for it.
What is the best way? It’s not like you can trick them into it.
A more serious issue, I would have thought, would be that the “professional help” won’t actually be effective.
If you don’t have any specific tools, I would advocate a mix of asking questions to help the other person clarify their thinking and providing information.
“Did you know symptoms X and Y are signs of clinical mental illness Z?” is likely more effective than telling the person “You have mental illness Z.”
If the other person doesn’t feel judged but can explore the issue in a safe space where they are comfortable working through an ugh-field, it’s more likely that they will end up doing what’s right afterwards.
I don’t think “Did you know symptoms X and Y are signs of clinical mental illness Z?” is appreciably different from “You very possibly have mental illness Z”, which is the practical way that “You have mental illness Z” would actually be phrased in most contexts where this would be likely to come up.
Nevertheless, your first and third paragraphs seem right.
In a conversation, you get a different reaction if you ask a question that indirectly implies the other person has a mental illness than if you state it directly. The phrasing of information matters.