Thanks for writing this!! There are a number of places where I don’t think you’ve correctly understood my position, but I really appreciate the engagement with the text I published: if you didn’t get what I “really meant”, I’m happy to do more work to try to clarify.
TEACH, so that B ends up believing X if X is right and Y if Y is right.
CONVINCE, so that B ends up believing X.
EXPOUND, so that the audience ends up believing X.
I’m unhappy with the absence of an audience-focused analogue of TEACH. In the following, I’ll use TEACH to refer to making someone believe X if X is right; whether the learner is the audience or the interlocutor B isn’t relevant to what I’m saying.
this amounts to supposing that everyone is trying to WIN, hopefully with constraints of honesty, which they do partly by trying to CONVINCE and EXPOUND. We may hope that everyone will LEARN but in Zack’s presentation that doesn’t seem to be present as a motive at all. [...] you’re only going to be able to WIN if you start out disagreeing and argue vigorously for your dissenting opinion
That’s not what I meant. In terms of your taxonomy of motives, I would say that people who don’t think they have something to TEACH mostly don’t end up writing comments: when I’m only trying to LEARN and don’t have anything to TEACH, I usually end up silently reading (and upvoting contributions I LEARNed from) without commenting. LEARNing can sometimes be a motive for commenting: when I think an author might have something to TEACH me, but I can’t manage to infer it from the text they’ve already published, I might ask a question. But I do think that’s a minority of comments.
The relevance of WINning is as a reward for TEACHing. I think we should be trying to engineer a culture where the gradients of status, esteem, and WINning are aligned with good epistemology—where the way to WIN is by means of TEACHing truths, rather than CONVINCEing people of falsehoods. I am fundamentally pessimistic about efforts to get people to care less about WINning (as contrasted to my approach of trying to align what WINning means in the local culture). If I claimed that my motive for commenting on this website was simply that I altruistically wanted other users of this website to have more accurate beliefs, I would be lying; I just don’t think that’s how human psychology works.
but this isn’t about diverging-the-opposite-of-converging at all, it’s using “diverge” to mean “differ at the outset” [...] Converging means getting closer, not starting out already in agreement.
You know, that’s a good point! Now that you point it out, I see that the passage you quote is bad writing and sloppy thinking on my part. As a result of you pointing this out, I think it’s healthy for people to think (slightly) more of you for pointing out a flaw in the text I published, and (slightly) less of me for publishing flawed text. (And very slightly more of me for conceding the point as I am doing now, rather than refusing to acknowledge it.) You WIN. I think it’s okay for you to WIN, and to enjoy WINning. You’ve earned it!
in the case where I’m right the others’ problems needn’t be “something wrong with their epistemic process” in any sense that requires disrespect
Isn’t it, though?—at least for some relevant sense of the word “disrespect.” Previously, you thought Tao was so competent that you never expected to find yourself in the position of thinking he was wrong. Now, you think he was wrong. If you’re incrementally updating your estimate of Tao’s competence, it seems like that estimate should be going down. Not by very much! But a little. That’s the sense in which disagreement is disrespect. (The phrase comes from the title of the linked Robin Hanson post, which is also linked when I used the phrase in my post; Hanson explicitly acknowledges that the degree of disrespect might be small.)
he’s conflating “not aiming to converge on truth” with “being wrong”
So, I tried to clarify what I meant there in the parenthesized paragraph starting with the words “This is with respect to the sense”. Did that suffice, or is this more bad writing on my part? I’m not sure; I’m happy to leave it to the reader to decide.
disagree on some difficult question about, say, theoretical physics. The cooperate move [...]
I like the physics debate analogy, but the moral depends on how you map real-world situations to a payoff matrix. When I expressed disapproval of the Prisoner’s-Dilemma-like frame, it was because I was worried about things analogous to finding errors in the other person’s calculation being construed as “defection” (because finding fault in other people’s work feels “adversarial” rather than “collaborative”).
Zack is more willing than Duncan thinks he should be to decide that other people are bozos
I would particularly emphasize that people can be bozos “locally” but not “globally”. If I’m confident that someone is wrong, I don’t want to pretend to be more uncertain than I actually am in order to make them feel respected. But I’m also not casting judgement on the totality of their worth as a person; I’m just saying they’re wrong on this topic.
I don’t think it usually looks like Zack’s optimistic picture
I’m glad you noticed the optimism! Yes, I would say that I’m relatively optimistic about the possibility of keeping discussions on track despite status-fighting instincts—and also relatively pessimistic about the prospects of collaborative norms actually fixing the usual problems with status-seeking rather than merely disguising them and creating new problems.
When having a discussion, I definitely try to keep in mind the possibility that the other person is right and I’m wrong. But if the person I’m currently arguing with were to tell me, “I don’t feel like you’re here to collaborate with me; I think you should be putting in more effort to think of reasons I might be right,” that actually makes me think it’s less likely that they might be right (even though the generic advice is good). Why? Because giving the advice in this context makes me think they’re bluffing: I think if they had an argument, they would stick to the argument (telling me what I’m getting wrong, and possibly condemning my poor reading comprehension if they think they were already adequately clear), rather than trying to rules-lawyer me for being insufficiently collaborative.
That said, I don’t think there’s a unique solution for what the “right” norms are. Different rules might work better for different personality types, and run different risks of different failure modes (like aggressive status-fighting nonsense vs. passive-aggressive rules-lawyering nonsense). Compared to some people, I suppose I tend to be relatively comfortable with spaces where the rules err more on the side of “Punch, but be prepared to take a punch” rather than “Don’t punch anyone”—but I realize that that’s a fact about me, not a fact about the hidden Bayesian structure of reality. That’s why, in “‘Rationalist Discourse’ Is Like ‘Physicist Motors’”, I made an analogy between discourse norms and motors or martial arts—there are principles governing what can work, but there’s not going to be a unique motor, or one “correct” martial art.
(Content-free reply just to note that I have noticed this and do intend to reply to it properly, when, unlike now, I have a bit of time to give it the attention it deserves. Apologies for slowness.)
Don’t apologize; please either take your time, or feel free to just not reply at all; I am also very time-poor at the moment.