I may have over-emphasized the “higher karma” thing. I don’t consider that a warning flag in itself; higher karma further down the thread can happen for various perfectly valid reasons. I consider it a minor supporting point because it seems correlated with a particular pattern I’ve noticed on other sites (mainly reddit).
And apparently I underestimated the degree to which it’s possible for a single voter to generate high karma on LessWrong, so I hereby retract that as supporting evidence.
I entirely believe you that your subjective experience was that you read my comment, thought about how it related to the larger topic, generated some new thoughts, and then posted those. I’m not trying to take a stand against that in general, but I’m concerned about the specific relationship between my comment and your follow-up thoughts, and why/how the one prompted the other.
(Maybe pause here for a moment to think about that, and form your own hypothesis about why my comment sent your thoughts in that particular direction.)
It looks to me like things unfolded something like this:
You read the OP, thought about it, and (I suspect) put some effort into making a list of all the relevant things that occurred to you. One of the things in that list was a concern that the experimental endpoint may be bad because of reason X.
I explained why X was not a concern.
You responded that the experimental endpoint may be bad because of reasons Y and Z.
It looks to me like the connection between my comment and your new thoughts is that the new thoughts are new reasons to continue believing what you already believed. Interesting that my comment would suddenly cause you to think of those? (Whereas reading the OP, which explicitly talked about Y and Z, did not make you think of them.)
(As I write this, it occurs to me that what I’m doing in this very post looks kind of similar: I am giving an explanation for objecting to your comment that is not identical to the reason I gave before. Subjectively, this feels like putting my thoughts into a more coherent order so that I have a stronger grasp on my earlier feelings. But perhaps I’m rationalizing? Or, alternately, perhaps I’m not extending enough benefit-of-the-doubt to you? Does this post feel to you like a clarification of my previous reasons or like a new reason?)
I think that Y and Z are legitimate discussion points within the broader context of the experiment but bringing them up in this particular way kind of feels like an attempt to avoid updating.
And I suppose I’m also feeling a bit awkward because I defended the experimental setup against X, and now this conversation flow makes me feel somehow vaguely obligated to also defend the experimental setup against Y and Z (or else “concede” Y and Z) when, in fact, I don’t necessarily have any opinions about the new arguments one way or the other. I’m definitely not saying that’s a reasonable emotional response on my part, yet it also feels like a somewhat predictable result of this conversational pattern where I objected to the local validity of one argument and you responded with unrelated arguments for the same conclusion.
I’d frame my approach to both reading and commenting as “iterative reading.” I read to a certain level of depth, write up thoughts that seem pertinent, and then reread and redirect my attention in response to other people’s replies.
Even for my actual research in grad school, this is inevitable. There’s simply too much information to take it all in and retain it; most is unimportant. This is even more true in responding to a blog post about somebody else’s research.
I look at my comments as trying to provide some value. If they’re wrong, hopefully I’ll be corrected. If they’re redundant, I’ll be ignored. If they’re right, then I contribute a bit. Plus, writing up my thoughts helps me remember and understand more, and the pushback from others helps me stay engaged and to focus on the specific areas where my understanding is incomplete.
In this approach, commenting is more about contributing and learning.
There are other places where I’ve approached commenting with a focus on evaluating an argument. For example, my post the other day about “how to place a bet on the end of the world” led to comments that significantly shifted my view, which is a thought process I recorded in the comments to the post.
So I guess I view your argument as standing on its own. It seems correct to me, but I also am not completely certain, and don’t care to investigate further. But it also does provoke consideration of how much of the point I was making needs to be updated. That’s what I tried to articulate in the subsequent comment.
I think the takeaway here is that there’s a difference between the “learning and contributing to a project” style and the “evaluating an argument” style. Which of course is about emphasis, it’s not a rigid binary.
I had difficulty translating your comments and my thoughts into a mutually-compatible frame so that I could understand how they bear on each other. Could I get your feedback on this translation attempt?
It seems like you have a model for your commenting behavior that looks something like:
You read a piece.
This generates too much mental work for you to do it in one sitting, so you queue some of it to happen later.
In the meantime, you post comments based on the portion of the mental work you’ve completed so far.
In this case, when you read my reply, this refocused your attention on the general topic and caused you to do another chunk of already-queued work (with the subject of my comment maybe influencing which part of the work you focused on).
Completing this queued mental work generated new thoughts.
You posted these new thoughts as a reply to my comment because my comment triggered them, but you were always going to generate approximately those thoughts when you got around to your queued mental labor, regardless of what I posted.
And then this relates to the points I raised as follows:
My concern could be rephrased as: Generating unrelated arguments for the same side is a likely outcome for someone in a soldier mindset, and unlikely for a curious exploration of the specific argument being discussed. This outcome is therefore Bayesian evidence for soldier-mindset.
According to your model, you’re not doing either of those things; you are instead doing curious exploration of the original post, which was merely prompted by my comment.
Generating these new arguments is not a particularly unlikely outcome for curious exploration of the original post, so it doesn’t lose nearly as much probability from this particular piece of evidence.
Given your strong prior on your existing model, your posterior probability for it is still pretty high.
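As a purely illustrative sketch of the update logic above (every number here is invented for illustration, not drawn from our actual discussion), the same evidence can shift two hypotheses by very different amounts depending on how likely each makes the evidence:

```python
# Illustrative Bayes update. All probabilities are invented for illustration.

prior_soldier = 0.2   # P(soldier mindset) before seeing the evidence
prior_curious = 0.8   # P(curious exploration of the OP) before the evidence

# Likelihood of "posts new, unrelated arguments for the same side" under each:
p_evidence_given_soldier = 0.7   # likely under soldier mindset
p_evidence_given_curious = 0.4   # not especially unlikely under this model

# Bayes' rule: posterior is proportional to prior times likelihood
joint_soldier = prior_soldier * p_evidence_given_soldier   # 0.14
joint_curious = prior_curious * p_evidence_given_curious   # 0.32
total = joint_soldier + joint_curious

posterior_soldier = joint_soldier / total
posterior_curious = joint_curious / total

# The curious-exploration hypothesis loses some probability mass,
# but with a strong prior and a non-tiny likelihood, it stays dominant.
print(round(posterior_curious, 3))
```

The point the numbers dramatize: evidence only moves you far toward "soldier mindset" if the observed behavior is much more likely under that hypothesis than under the alternative, and a strong prior on the alternative further dampens the shift.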
Does this seem like an accurate translation to you?
Point of order: I don’t think “arguments as soldiers” was supposed to be equivalent to “thinking of multiple different ideas for why something would not work”—it was about a lack of intellectual integrity: refusing to view the opponent’s points honestly on their merits, while simultaneously pretending there are no weaknesses in your own arguments.
Good debate requires adversarial thought, which is why we talk about Steelmanning instead of Strawmanning.
If AllAmericanBreakfast has generated even half a dozen different, seemingly unrelated ideas for why the OP’s experiment does not measure the value it claims to be studying, that still doesn’t immediately make them a soldier. They’d also need to ignore criticism of those arguments, and either ignore opposing arguments or attack them in a way that is hypocritical compared to how they treat their own.
I view this pivot to focus on how someone generates their ideas (what you called “a model for commenting behavior”) as a far more troublesome road.
If we’re going to dismiss arguments because we think the intellectual process that generated them was invalid, that’s an actual “arguments as soldiers” mindset in my opinion, because it diverts attention from the argument itself to a process objection.
In other words, if AllAmericanBreakfast had raised an important and critical point that up until now was missed, would it be rational to dismiss it because it was posted as an off-the-cuff reply after taking a walk outside, instead of only after the period of careful examination one is supposedly expected to spend before commenting on a new post?
I largely agree with you. Steelmanning, a focus on the object-level argument rather than the meta-process, and a certain graciousness about the messiness of intellectual labor are all helpful in promoting good debate.
If I had to guess, Dweomite might have gotten a “Gish gallop” vibe, in which every rebuttal leads to two new objections being raised, with scarcely an acknowledgement of the rebuttal itself. Part of the art of good debate is focusing attention in a productive manner. Infodumps and Gish gallops can be counterproductive, even if the object-level information they contain is correct.
It was never my intention to equate “arguments as soldiers” with “multiple arguments for the same conclusion”, or to say that having multiple arguments is inherently bad. That’s why I described this as being (in context) a warning sign, not an error in itself.
It was also never my intention to dismiss these particular arguments. I believe I said above that they seem like valid discussion points. But my interests are not confined solely to the AC experiment; I am also interested in the meta-project of improving our tools for rationality.
(Though I can imagine some situations where I would dismiss arguments based on how they were generated. For instance, if I somehow knew that you had literally rolled dice to choose words off of a list with no regard for semantic content, and then posted the output with no filtering, then I would not feel that either rationality or fairness required me to entertain those arguments.)
That said, I think you also got a rather different take-away from “arguments as soldiers” than I did. I see it as being about goals, not rules of conduct. If you identify with a particular side, and try to make that side win, then you’re in a soldier mindset. If, while you do that, you also feel a duty to acknowledge the opponent’s valid points and to be honest about your side’s flaws, then you’re a soldier with rules of engagement, but you’re still a soldier.
The alternative is curiosity and truth-seeking. If your goal is to find the truth, then acknowledging someone else’s valid point isn’t a mere duty, it’s good strategy.
You wrote: “Good debate requires adversarial thought”. I might or might not agree, depending on how you define “debate”. But regardless, adversarial thought is NOT a requirement for truth-seeking. You can investigate, share information, teach others, and even resolve factual disagreements without it.
For instance, Double Crux is a strategy for resolving disagreements that doesn’t rely on adversarial thought. I’m also reminded of Aumann-style consensus.
Rules of engagement are certainly better than nothing. Thus is it written:
A burning itch to know is higher than a solemn vow to pursue truth. But you can’t produce curiosity just by willing it, any more than you can will your foot to feel warm when it feels cold. Sometimes, all we have is our mere solemn vows.
But duties are not what you’re ideally hoping for.
That seems about right :)
Thank you. If I add your model to my hypothesis space, the probability on soldier-mindset does seem a lot less worrying.
I also now feel like I understand why you initially tried to frame this as a disagreement about posting etiquette. Posting the output of your queued work as a reply to a comment that refocused your attention (but is otherwise unrelated) seems weird to me.
It seems like you want a sort of Kialo-like approach to commenting, in which each comment chain tackles an ever-narrower subargument. This does seem to be how some comment chains progress, and it would probably make for more legible reading. In the case of the comment you objected to, I could have said “I think you’re right,” realized the rest of my commentary could be split off into a separate comment, and then we wouldn’t have had an issue.
There’s something about the perception of being involved in a conversation with another person that keeps my attention anchored on the range of topics associated with that conversation. But rather than being ever-more-narrowly focused on the most recent reply, my attention fans out throughout the available text.
For example, in writing this comment, I find myself considering not only commenting etiquette, but also re-reading my original comment and your reply, and considering why I didn’t find your reply 100% convincing (instead saying “I think you’re mostly right”).
Then I start typing those thoughts, because the cursor’s in the text box. It would be inconvenient to split off AC-relevant thoughts into a different comment. It also feels weirder to me to make lots of comments on different subtopics than one long comment with all my thoughts. But in this case, I’m also paying enough attention to notice that most of these thoughts are not immediately relevant to this sub-topic, and delete them.
If I don’t edit my own comments to exclude thoughts that aren’t relevant to the subtopic under immediate discussion, all my thoughts at a particular moment in time tend to wind up in the same comment.
I suspect this habit comes from verbal debate, in which there isn’t really a convenient way to separate out thoughts into subtopics, and where a thought not verbalized can easily be forgotten.
I don’t think your description of what I want is entirely accurate. I wouldn’t say that I expect sub-comments to never be wider than their parent, but I expect that they’re somehow a response to the parent, rather than just being whatever you happened to be thinking about at the moment you wrote the sub-comment.
For example, if I posted an analogy about how air conditioners are somehow like kittens, then all of these would seem like reasonable responses that could be considered to widen the topic:
I think air conditioners are more like jellyfish because (reasons)
I’ve long thought that alarm clocks are similar to kittens for largely similar reasons; perhaps there’s an unexplored connection between air conditioners and alarm clocks?
That analogy makes sense, but it doesn’t address X, which seems to me like an important consideration
But it seems disconnected to me to post something like:
My cat just had a litter of kittens and I’m trying to find homes for them; anyone want one?
This summer is so hot. I really wish I had a better air conditioner right now.
It’s understandable that you would think of those things right after reading my hypothetical comment, but they’re not really responses to it.
I agree spoken conversations need somewhat different rules; however, even in spoken conversations there’s some etiquette limiting when and how you can change the topic of discussion.
Unfortunately, I don’t think the line between a direct response to a comment and a non-response is clear. My reply wasn’t unrelated to your comment; it just wasn’t as carefully focused as you wanted.
I’ll also say that, no matter what rules we might come up with for commenting, at the end of the day the ability to coordinate around those rules, and people’s mental budget for following them, will dictate how conversation flows. At this point, I feel that this conversation has shifted from feeling like an exploration of commenting norms using our exchange as an example, and begun to feel like an evaluation of the adequacy of my commenting behavior. The latter is not really something I’m interested in.
I agree my line isn’t particularly sharp. This is less of a considered policy and more an attempt to articulate my intuitions.
Ending the discussion would be fair.
I’m glad I eventually understood your commenting model, though. I don’t feel like I often have opportunities to explore conflicts of expectations in detail, so this was valuable evidence for updating my overall Internet-discussions-model. (As well as a reminder that other people’s frames are both harder to predict and harder to communicate than my intuitions would suggest.) So thanks.