As such, the rate of change of heat is reduced. This won’t necessarily result in a different equilibration temperature, however. Instead, I would expect it to affect the rate at which temperature equilibrates.
My impression was that these two things are necessarily linked, in a fairly direct fashion.
Equilibrium means that increases and decreases cancel out. In the absence of an AC, the rate at which heat enters a given building is proportional to the difference between interior and exterior temperature. Therefore, the maximum temperature delta that an AC can maintain should be directly proportional to how quickly it can pump (net) heat out.
I mean, you could choose to think about infiltration as an intensified pressure towards equilibrium instead of as a decrease in the net effectiveness of the pump, and then equilibrium temperature would cease to be a good measure of “pump effectiveness”. But that would effectively be asking to have the one-hose design not be penalized for the infiltration losses that it directly causes.
EDIT: Addendum: Notice that the rate at which the one-hose AC will pump (net) heat out depends on the temperature delta. (Replacing inside air with outside air doesn’t matter if they’re the same temperature, but it matters a lot if there’s a large temperature difference.) So “the rate of change of heat” isn’t actually well-defined until you’ve specified what temperature delta you’re measuring at. (Which is why it’s possible to invent formulas for the official efficiency stats that would favor or disfavor one-hose models.)
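The equilibrium argument above can be sketched numerically. In this toy model (every constant — the heat-transfer coefficient, pump rates, and temperatures — is hypothetical, chosen only for illustration), heat flows in at a rate proportional to the indoor/outdoor delta, while the AC removes heat at a net rate that, in the one-hose case, shrinks as the delta grows:

```python
# Toy model of the equilibrium argument above. All constants (k, pump
# rates, temperatures) are hypothetical and chosen only for illustration.

def simulate(pump, k=0.5, t_out=35.0, dt=0.01, steps=20_000):
    """Euler-integrate dT_in/dt = k*(T_out - T_in) - pump(delta).

    Heat enters in proportion to the indoor/outdoor delta (coefficient k);
    `pump` gives the AC's net heat-removal rate as a function of that delta.
    Returns the equilibrium delta (T_out - T_in).
    """
    t_in = t_out  # start at the outdoor temperature
    for _ in range(steps):
        delta = t_out - t_in
        t_in += (k * delta - pump(delta)) * dt
    return t_out - t_in

# Two-hose: net pump rate roughly independent of delta.
two_hose = simulate(lambda d: 5.0)  # equilibrium: 5.0 / 0.5 = 10 degrees

# One-hose: infiltration eats into the net rate as the delta grows,
# so the maintainable delta is smaller: 5.0 / (0.5 + 0.2) ≈ 7.14 degrees.
one_hose = simulate(lambda d: 5.0 - 0.2 * d)
```

At equilibrium the inflow k·Δ equals the pump rate, so a constant-rate pump maintains Δ = rate/k — the “directly proportional” claim — while a pump rate that falls with Δ (the one-hose infiltration penalty) caps the delta at a lower value.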
I think you’re mostly right? But if both pumps have an equilibration temperature below 60 degrees, then we can only get their efficiency difference by looking at the cooling rate. Perhaps if this is the case, we are saying that there just isn’t a difference from the point of view of this experiment.
On the other hand, my impression is that efficiency ratings are mostly supposed to be about how much energy it takes to reach a given equilibrium. So I’m not sure if this experiment is really a referendum on the claimed differences between AC units. We can imagine that both AC types could get the house equally cold, but one-hose units use a lot more energy. From the perspective of equilibrium temps, there’s no difference, but from the perspective of efficiency ratings there is!
Since the point of the experiment is to determine the adequacy of AC ratings as a proxy for AI issues, it seems like you’d want to focus on efficiency rather than equilibrium temperature.
...I’m going to try making a point that would be generally unacceptable to make in wider Internet culture, but which I think will be considered acceptable on LessWrong. Apologies if I miss the mark.
Meta observation: You’ve just made several points that seem connected to the OP, but not to anything that I said, and in so doing have quickly earned a karma higher than any other comment in this particular comment chain. This seems like a warning sign for arguments as soldiers (i.e. you’re treating any point about the larger topic as being substitutable into a discussion about a narrow sub-topic, and earning more karma because there are a larger number of people who care about the larger topic than the smaller one).
Also, both of the topics you just raised (possible equilibrium below 60 degrees and electrical efficiency vs maximum cooling) are things that were mentioned in the OP. I feel that, ideally, discussion of them should acknowledge and respond to the OP’s position on these points instead of raising them as if they were new.
I think it’s fine to sidebar about this. If you didn’t know, you can hover over the karma indicator on a comment or post to see how many people have voted on it. In the case of my comment, only 1 person (gbear) has voted on it at the time of writing.
However, I’m not sure about the point you’re making.
you’re treating any point about the larger topic as being substitutable into a discussion about a narrow sub-topic
This quote suggests that you prefer a norm that comment responses be carefully focused on the specific topic raised by the comment. While that is reasonable, it is also a reasonable norm to use comments in an open-ended fashion to better understand the main topic at hand.
I lean heavily on the latter norm, partly because I tend to do a lot of my thinking out loud. This comes out in my comments. My internal experience is that I was thinking out loud about the experiment, taking into account the point you raised, without carefully checking to see if John had raised the issue already. That to me is not “soldier mindset,” but I could be criticized as just adding noise to the discussion, as you suggest at the bottom of your comment.
and earning more karma because there are a larger number of people who care about the larger topic than the smaller one
This might be true, but I think this is a lot of analysis for a phenomenon of minor overall importance with limited evidence. Consider that I could respond by suggesting that your comment here, a complete diversion from the topic of ACs, is transparently a reaction to perceiving yourself to be getting less karma than me, and much more in keeping with a “soldier” mentality than anything I did in my preceding comment.
I don’t actually want to accuse you of that, because I do think that the social dynamics of commenting is interesting. According to the norm I described above, I think it’s fine to divert in most cases.
I’ve written a couple posts on commenting norms on LW, and others have brought up the topic as well. Since you seem to have thoughts along these lines, it might be worth branching off a separate post comparing the pros and cons of some alternatives for community consideration.
Upvoted both this and the parent comment for doing a good job discussing a notoriously flamey topic politely and with cool heads. Good job both of you.
I may have over-emphasized the “higher karma” thing. I don’t consider that a warning flag in itself; higher karma further down the thread can happen for various perfectly valid reasons. I consider it a minor supporting point because it seems correlated with a particular pattern I’ve noticed on other sites (mainly reddit).
And apparently I underestimated the degree to which it’s possible for a single voter to generate high karma on LessWrong, so I hereby retract that as supporting evidence.
I entirely believe you that your subjective experience was that you read my comment, thought about how it related to the larger topic, generated some new thoughts, and then posted those. I’m not trying to take a stand against that in general, but I’m concerned about the specific relationship between my comment and your follow-up thoughts, and why/how the one prompted the other.
(Maybe pause here for a moment to think about that, and form your own hypothesis about why my comment sent your thoughts in that particular direction.)
It looks to me like things unfolded something like this:
You read the OP, thought about it, and (I suspect) put some effort into making a list of all the relevant things that occurred to you. One of the things in that list was a concern that the experimental endpoint may be bad because of reason X.
I explained why X was not a concern.
You responded that the experimental endpoint may be bad because of reasons Y and Z.
It looks to me like the connection between my comment and your new thoughts is that the new thoughts are new reasons to continue believing what you already believed. Interesting that my comment would suddenly cause you to think of those? (Whereas reading the OP, which explicitly talked about Y and Z, did not make you think of them.)
(As I write this, it occurs to me that what I’m doing in this very post looks kind of similar: I am giving an explanation for objecting to your comment that is not identical to the reason I gave before. Subjectively, this feels like putting my thoughts into a more coherent order so that I have a stronger grasp on my earlier feelings. But perhaps I’m rationalizing? Or, alternately, perhaps I’m not extending enough benefit-of-the-doubt to you? Does this post feel to you like a clarification of my previous reasons or like a new reason?)
I think that Y and Z are legitimate discussion points within the broader context of the experiment but bringing them up in this particular way kind of feels like an attempt to avoid updating.
And I suppose I’m also feeling a bit awkward because I defended the experimental setup against X, and now this conversation flow makes me feel somehow vaguely obligated to also defend the experimental setup against Y and Z (or else “concede” Y and Z) when, in fact, I don’t necessarily have any opinions about the new arguments one way or the other. I’m definitely not saying that’s a reasonable emotional response on my part, yet it also feels like a somewhat predictable result of this conversational pattern where I objected to the local validity of one argument and you responded with unrelated arguments for the same conclusion.
I’d frame my approach to both reading and commenting as “iterative reading.” I read to a certain level of depth, write up thoughts that seem pertinent, and then reread and redirect my attention in response to other people’s replies.
Even for my actual research in grad school, this is inevitable. There’s simply too much information to take it all in and retain it; most is unimportant. This is even more true in responding to a blog post about somebody else’s research.
I look at my comments as trying to provide some value. If they’re wrong, hopefully I’ll be corrected. If they’re redundant, I’ll be ignored. If they’re right, then I contribute a bit. Plus, writing up my thoughts helps me remember and understand more, and the pushback from others helps me stay engaged and to focus on the specific areas where my understanding is incomplete.
In this approach, commenting is more about contributing and learning.
There are other places where I’ve approached commenting with a focus on evaluating an argument. For example, my post the other day about “how to place a bet on the end of the world” led to comments that significantly shifted my view, which is a thought process I recorded in the comments to the post.
So I guess I view your argument as standing on its own. It seems correct to me, but I also am not completely certain, and don’t care to investigate further. But it also does provoke consideration of how much of the point I was making needs to be updated. That’s what I tried to articulate in the subsequent comment.
I think the takeaway here is that there’s a difference between the “learning and contributing to a project” style and the “evaluating an argument” style. Which of course is about emphasis, it’s not a rigid binary.
I had difficulty translating your comments and my thoughts into a mutually-compatible frame so that I could understand how they bear on each other. Could I get your feedback on this translation attempt?
It seems like you have a model for your commenting behavior that looks something like:
You read a piece.
This generates too much mental work for you to do it in one sitting, so you queue some of it to happen later.
In the mean time, you post comments based on the portion of the mental work you’ve completed so far.
In this case, when you read my reply, this refocused your attention on the general topic and caused you to do another chunk of already-queued work (with the subject of my comment maybe influencing which part of the work you focused on).
Completing this queued mental work generated new thoughts.
You posted these new thoughts as a reply to my comment because my comment triggered them, but you were always going to generate approximately those thoughts when you got around to your queued mental labor, regardless of what I posted.
And then this relates to the points I raised as follows:
My concern could be rephrased as: Generating unrelated arguments for the same side is a likely outcome for someone in a soldier mindset, and unlikely for a curious exploration of the specific argument being discussed. This outcome is therefore Bayesian evidence for soldier-mindset.
According to your model, you’re not doing either of those things; you are instead doing curious exploration of the original post, which was merely prompted by my comment.
Generating these new arguments is not a particularly unlikely outcome for curious exploration of the original post, so it doesn’t lose nearly as much probability from this particular piece of evidence.
Given your strong prior on your existing model, your posterior probability for it is still pretty high.
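The update described in those three bullets can be made concrete with Bayes’ rule. All of the probabilities below are hypothetical, chosen purely to show the structure of the argument:

```python
# Hypothetical numbers, purely to illustrate the update described above.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis H given evidence E."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Evidence E: "new, unrelated arguments for the same conclusion".
# Under "soldier mindset vs. curious exploration of the specific argument",
# E is much likelier under soldier mindset, so it shifts probability there:
p_soldier = posterior(prior=0.2, p_e_given_h=0.8, p_e_given_not_h=0.2)  # 0.5

# Under the commenter's own model ("curious exploration of the original
# post"), E is not especially unlikely, so a strong prior mostly survives:
p_own_model = posterior(prior=0.8, p_e_given_h=0.6, p_e_given_not_h=0.8)  # 0.75
```

The same evidence that doubles the probability of soldier-mindset under one hypothesis space barely dents a strong prior once the “queued mental work” model is added to the space.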
Does this seem like an accurate translation to you?
Point of order: I don’t think “arguments as soldiers” was supposed to be equivalent to “thinking of multiple different ideas for why something would not work”—it was about a lack of intellectual integrity in honestly viewing the opponent’s points on their merits, and simultaneously pretending there are no weaknesses in your own arguments.
Good debate requires adversarial thought, which is why we talk about Steelmanning instead of Strawmanning.
If AllAmericanBreakfast has generated even half a dozen different, seemingly unrelated ideas for why the OP’s experiment does not measure the value it claims to be studying, that still doesn’t immediately make them a soldier. They’d also need to ignore criticism of the arguments, and ignore opposing arguments or attack the opposing arguments in a way that is hypocritical of how they treat their own arguments.
I view this pivot to focus on how someone generates their ideas (what you called “a model for commenting behavior”) as a far more troublesome road.
If we’re going to dismiss arguments because we think the intellectual process to generate them was invalid, that’s an actual “argument as soldiers” mindset in my opinion because it is diverting attention from the argument itself to a process objection instead.
In other words, if AllAmericanBreakfast had raised an important and critical point that up until now was missed, would it be rational to dismiss it because it was posted as an off-the-cuff reply after taking a walk outside, instead of only after some period of careful examination that one is expected to spend their time in prior to commenting on a new post?
I largely agree with you. Steelmanning, a focus on the object-level argument rather than the meta-process, and a certain graciousness about the messiness of intellectual labor are all helpful in promoting good debate.
If I had to guess, Dweomite might have gotten a “Gish gallop” vibe, in which every rebuttal leads to two new objections being raised, with scarcely an acknowledgement of the rebuttal itself. Part of the art of good debate is focusing attention in a productive manner. Infodumps and Gish gallops can be counterproductive, even if the object-level information they contain is correct.
It was never my intention to equate “arguments as soldiers” with “multiple arguments for the same conclusion”, or to say that having multiple arguments is inherently bad. That’s why I described this as being (in context) a warning sign, not an error in itself.
It was also never my intention to dismiss these particular arguments. I believe I said above that they seem like valid discussion points. But my interests are not confined solely to the AC experiment; I am also interested in the meta-project of improving our tools for rationality.
(Though I can imagine some situations where I would dismiss arguments based on how they were generated. For instance, if I somehow knew that you had literally rolled dice to choose words off of a list with no regard for semantic content, and then posted the output with no filtering, then I would not feel that either rationality or fairness required me to entertain those arguments.)
That said, I think you also got a rather different take-away from “arguments as soldiers” than I did. I see it as being about goals, not rules of conduct. If you identify with a particular side, and try to make that side win, then you’re in a soldier mindset. If, while you do that, you also feel a duty to acknowledge the opponent’s valid points and to be honest about your side’s flaws, then you’re a soldier with rules of engagement, but you’re still a soldier.
The alternative is curiosity and truth-seeking. If your goal is to find the truth, then acknowledging someone else’s valid point isn’t a mere duty, it’s good strategy.
You wrote: “Good debate requires adversarial thought”. I might or might not agree, depending on how you define “debate”. But regardless, adversarial thought is NOT a requirement for truth-seeking. You can investigate, share information, and teach others, and even resolve factual disagreements without it.
For instance, Double Crux is a strategy for resolving disagreements that doesn’t rely on adversarial thought. I’m also reminded of Aumann-style consensus.
Rules of engagement are certainly better than nothing. Thus is it written:
A burning itch to know is higher than a solemn vow to pursue truth. But you can’t produce curiosity just by willing it, any more than you can will your foot to feel warm when it feels cold. Sometimes, all we have is our mere solemn vows.
But duties are not what you’re ideally hoping for.
Thank you. If I add your model to my hypothesis space, the probability on soldier-mindset does seem a lot less worrying.
I also now feel like I understand why you initially tried to frame this as a disagreement about posting etiquette. Posting the output of your queued work as a reply to a comment that refocused your attention (but is otherwise unrelated) seems weird to me.
It seems like you’re desiring a sort of Kialo-like approach to commenting, in which each comment chain is tackling an ever-more-narrow subargument. This does seem to be how some comment chains progress, and it would probably make for more legible reading. In the case of the comment you objected to, I could have said “I think you’re right,” realized the rest of my commentary could be split off into a separate comment, and then we wouldn’t have had an issue.
There’s something about the perception of being involved in a conversation with another person that keeps my attention anchored on the range of topics associated with that conversation. But rather than being ever-more-narrowly focused on the most recent reply, my attention fans out throughout the available text.
For example, in writing this comment, I find myself considering not only commenting etiquette, but also re-reading my original comment and your reply, and considering why I didn’t find your reply 100% convincing (instead saying “I think you’re mostly right”).
Then I start typing those thoughts, because the cursor’s in the text box. It would be inconvenient to split off AC-relevant thoughts into a different comment. It also feels weirder to me to make lots of comments on different subtopics than one long comment with all my thoughts. But in this case, I’m also paying enough attention to notice that most of these thoughts are not immediately relevant to this sub-topic, and delete them.
If I don’t edit my own comments to exclude thoughts that aren’t relevant to the subtopic under immediate discussion, all my thoughts at a particular moment in time tend to wind up in the same comment.
I suspect this habit comes from verbal debate, in which there isn’t really a convenient way to separate out thoughts into subtopics, and where a thought not verbalized can easily be forgotten.
I don’t think your description of what I want is entirely accurate. I wouldn’t say that I expect sub-comments to never be wider than their parent, but I expect that they’re somehow a response to the parent, rather than just being whatever you happened to be thinking about at the moment you wrote the sub-comment.
For example, if I posted an analogy about how air conditioners are somehow like kittens, then all of these would seem like reasonable responses that could be considered to widen the topic:
I think air conditioners are more like jellyfish because (reasons)
I’ve long thought that alarm clocks are similar to kittens for largely similar reasons; perhaps there’s an unexplored connection between air conditioners and alarm clocks?
That analogy makes sense, but it doesn’t address X, which seems to me like an important consideration
But it seems disconnected to me to post something like:
My cat just had a litter of kittens and I’m trying to find homes for them; anyone want one?
This summer is so hot. I really wish I had a better air conditioner right now.
It’s understandable that you would think of those things right after reading my hypothetical comment, but they’re not really responses to it.
I agree spoken conversations need somewhat different rules; however, even in spoken conversations there’s some etiquette limiting when and how you can change the topic of discussion.
Unfortunately, I don’t think the lines between a direct response to a comment and a non-response are clear. My reply to your comment wasn’t unrelated to your response. It just wasn’t as carefully focused as you desired.
I’ll also say that, no matter what rules we might come up with for commenting, at the end of the day the ability to coordinate around those rules, and people’s mental budget for following them, will dictate how conversation flows. At this point, I feel that this conversation has shifted from feeling like an exploration of commenting norms using our exchange as an example, and begun to feel like an evaluation of the adequacy of my commenting behavior. The latter is not really something I’m interested in.
I agree my line isn’t particularly sharp. This is less of a considered policy and more an attempt to articulate my intuitions.
Ending the discussion would be fair.
I’m glad I eventually understood your commenting model, though. I don’t feel like I often have opportunities to explore conflicts of expectations in detail, so this was valuable evidence for updating my overall Internet-discussions-model. (As well as a reminder that other peoples’ frames are both harder to predict and harder to communicate than my intuitions would suggest.) So thanks.
I strong upvoted AllAmericanBreakfast’s comment, so the high relative karma is entirely my fault. I basically strong upvoted because it felt right to me, not thinking about how much karma the other comments in the chain had, so I’m sorry that it didn’t match your assumptions about how karma in threads should work. I don’t think that I’m behaving in an arguments-as-soldiers way, but that’s difficult to prove to myself, let alone to another person.
This is the reasoning that I had, but I’m not strongly attached to it: Thinking back to the original post about takeoff/air conditioning, the original discussion was about whether an AC unit is useful to the consumer, meaning that it achieves the goal of an air-conditioned room in a reasonable length of time without being wasteful or expensive. In my experience, AC units generally can achieve their goal of an air-conditioned room, so it seems likely that the considerations from the OP ([0], [1]) aren’t helpful and the tests won’t achieve the purpose from the original post. Even if the AC is not able to cool the room to an arbitrary point (perhaps OP’s room has a lot of glass windows or is poorly insulated), it seems like the experiment will be measuring the wrong things, and that OP didn’t fully consider them.
[0]: “I am assuming that the AC runs continuously (as opposed to getting the room down to target temperature easily, at which point it will shut off until the temperature goes back up). If that’s not the case, I will consider the test invalid, and retry on a hotter day.”
[1]: “Equilibrium indoor temperature was the main thing I cared about when using this air conditioner; electricity is relatively cheap”
You read a piece.
This generates too much mental work for you to do it in one sitting, so you queue some of it to happen later.
In the meantime, you post comments based on the portion of the mental work you’ve completed so far.
In this case, when you read my reply, this refocused your attention on the general topic and caused you to do another chunk of already-queued work (with the subject of my comment maybe influencing which part of the work you focused on).
Completing this queued mental work generated new thoughts.
You posted these new thoughts as a reply to my comment because my comment triggered them, but you were always going to generate approximately those thoughts when you got around to your queued mental labor, regardless of what I posted.
And then this relates to the points I raised as follows:
My concern could be rephrased as: Generating unrelated arguments for the same side is a likely outcome for someone in a soldier mindset, and unlikely for a curious exploration of the specific argument being discussed. This outcome is therefore Bayesian evidence for soldier-mindset.
According to your model, you’re not doing either of those things; you are instead doing curious exploration of the original post, which was merely prompted by my comment.
Generating these new arguments is not a particularly unlikely outcome for curious exploration of the original post, so it doesn’t lose nearly as much probability from this particular piece of evidence.
Given your strong prior on your existing model, your posterior probability for it is still pretty high.
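The Bayesian reasoning above can be sketched numerically. This is just a toy illustration of the update being described; all probabilities are made-up numbers, not anything either of us has actually estimated.

```python
# Toy Bayes update: how much does "generated unrelated arguments for the
# same side" shift the probability of soldier-mindset?
# All numbers below are invented for illustration only.

prior_soldier = 0.3                  # hypothetical prior on soldier-mindset
p_outcome_given_soldier = 0.8        # likely outcome under soldier-mindset
p_outcome_given_curious = 0.2        # less likely under narrow curious exploration

# P(outcome) via the law of total probability
p_outcome = (p_outcome_given_soldier * prior_soldier
             + p_outcome_given_curious * (1 - prior_soldier))

# Bayes' rule
posterior_soldier = p_outcome_given_soldier * prior_soldier / p_outcome
print(round(posterior_soldier, 3))   # prints 0.632
```

The point of the translation is then that under your model, the curious-exploration likelihood for this outcome is high rather than low (the queued work was going to produce those thoughts anyway), so the same evidence moves the posterior much less.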
Does this seem like an accurate translation to you?
Point of order: I don’t think “arguments as soldiers” was supposed to be equivalent to “thinking of multiple different ideas for why something would not work”—it was about a lack of intellectual integrity: failing to honestly view the opponent’s points on their merits, while simultaneously pretending there are no weaknesses in your own arguments.
Good debate requires adversarial thought, which is why we talk about Steelmanning instead of Strawmanning.
If AllAmericanBreakfast has generated even half a dozen different, seemingly unrelated ideas for why the OP’s experiment does not measure the value it claims to be studying, that still doesn’t immediately make them a soldier. They’d also need to ignore criticism of those arguments, and to ignore opposing arguments or attack the opposing arguments in a way that is hypocritical of how they treat their own.
I view this pivot to focus on how someone generates their ideas (what you called “a model for commenting behavior”) as a far more troublesome road.
If we’re going to dismiss arguments because we think the intellectual process to generate them was invalid, that’s an actual “argument as soldiers” mindset in my opinion because it is diverting attention from the argument itself to a process objection instead.
In other words, if AllAmericanBreakfast had raised an important and critical point that had been missed until now, would it be rational to dismiss it because it was posted as an off-the-cuff reply after taking a walk outside, rather than after the period of careful examination one is supposedly expected to complete before commenting on a new post?
I largely agree with you. Steelmanning, a focus on the object-level argument rather than the meta-process, and a certain graciousness about the messiness of intellectual labor are all helpful in promoting good debate.
If I had to guess, Dweomite might have gotten a “Gish gallop” vibe, in which every rebuttal leads to two new objections being raised, with scarcely an acknowledgement of the rebuttal itself. Part of the art of good debate is focusing attention in a productive manner. Infodumps and Gish gallops can be counterproductive, even if the object-level information they contain is correct.
It was never my intention to equate “arguments as soldiers” with “multiple arguments for the same conclusion”, or to say that having multiple arguments is inherently bad. That’s why I described this as being (in context) a warning sign, not an error in itself.
It was also never my intention to dismiss these particular arguments. I believe I said above that they seem like valid discussion points. But my interests are not confined solely to the AC experiment; I am also interested in the meta-project of improving our tools for rationality.
(Though I can imagine some situations where I would dismiss arguments based on how they were generated. For instance, if I somehow knew that you had literally rolled dice to choose words off of a list with no regard for semantic content, and then posted the output with no filtering, then I would not feel that either rationality or fairness required me to entertain those arguments.)
That said, I think you also got a rather different take-away from “arguments as soldiers” than I did. I see it as being about goals, not rules of conduct. If you identify with a particular side, and try to make that side win, then you’re in a soldier mindset. If, while you do that, you also feel a duty to acknowledge the opponent’s valid points and to be honest about your side’s flaws, then you’re a soldier with rules of engagement, but you’re still a soldier.
The alternative is curiosity and truth-seeking. If your goal is to find the truth, then acknowledging someone else’s valid point isn’t a mere duty, it’s good strategy.
You wrote: “Good debate requires adversarial thought”. I might or might not agree, depending on how you define “debate”. But regardless, adversarial thought is NOT a requirement for truth-seeking. You can investigate, share information, teach others, and even resolve factual disagreements without it.
For instance, Double Crux is a strategy for resolving disagreements that doesn’t rely on adversarial thought. I’m also reminded of Aumann-style consensus.
Rules of engagement are certainly better than nothing. Thus is it written:
But duties are not what you’re ideally hoping for.
That seems about right :)
Thank you. If I add your model to my hypothesis space, the probability on soldier-mindset does seem a lot less worrying.
I also now feel like I understand why you initially tried to frame this as a disagreement about posting etiquette. Posting the output of your queued work as a reply to a comment that refocused your attention (but is otherwise unrelated) seems weird to me.
It seems like you’re desiring a sort of Kialo-like approach to commenting, in which each comment chain is tackling an ever-more-narrow subargument. This does seem to be how some comment chains progress, and it would probably make for more legible reading. In the case of the comment you objected to, I could have said “I think you’re right,” realized the rest of my commentary could be split off into a separate comment, and then we wouldn’t have had an issue.
There’s something about the perception of being involved in a conversation with another person that keeps my attention anchored on the range of topics associated with that conversation. But rather than being ever-more-narrowly focused on the most recent reply, my attention fans out throughout the available text.
For example, in writing this comment, I find myself considering not only commenting etiquette, but also re-reading my original comment and your reply, and considering why I didn’t find your reply 100% convincing (instead saying “I think you’re mostly right”).
Then I start typing those thoughts, because the cursor’s in the text box. It would be inconvenient to split off AC-relevant thoughts into a different comment. It also feels weirder to me to make lots of comments on different subtopics than one long comment with all my thoughts. But in this case, I’m also paying enough attention to notice that most of these thoughts are not immediately relevant to this sub-topic, and delete them.
If I don’t edit my own comments to exclude thoughts that aren’t relevant to the subtopic under immediate discussion, all my thoughts at a particular moment in time tend to wind up in the same comment.
I suspect this habit comes from verbal debate, in which there isn’t really a convenient way to separate out thoughts into subtopics, and where a thought not verbalized can easily be forgotten.
I don’t think your description of what I want is entirely accurate. I wouldn’t say that I expect sub-comments to never be wider than their parent, but I expect that they’re somehow a response to the parent, rather than just being whatever you happened to be thinking about at the moment you wrote the sub-comment.
For example, if I posted an analogy about how air conditioners are somehow like kittens, then all of these would seem like reasonable responses that could be considered to widen the topic:
I think air conditioners are more like jellyfish because (reasons)
I’ve long thought that alarm clocks are similar to kittens for largely similar reasons; perhaps there’s an unexplored connection between air conditioners and alarm clocks?
That analogy makes sense, but it doesn’t address X, which seems to me like an important consideration
But it seems disconnected to me to post something like:
My cat just had a litter of kittens and I’m trying to find homes for them; anyone want one?
This summer is so hot. I really wish I had a better air conditioner right now.
It’s understandable that you would think of those things right after reading my hypothetical comment, but they’re not really responses to it.
I agree spoken conversations need somewhat different rules; however, even in spoken conversations there’s some etiquette limiting when and how you can change the topic of discussion.
Unfortunately, I don’t think the lines between a direct response to a comment and a non-response are clear. My reply to your comment wasn’t unrelated to your response. It just wasn’t as carefully focused as you desired.
I’ll also say that, no matter what rules we might come up with for commenting, at the end of the day the ability to coordinate around those rules, and people’s mental budget for following them, will dictate how conversation flows. At this point, I feel that this conversation has shifted from feeling like an exploration of commenting norms using our exchange as an example, and begun to feel like an evaluation of the adequacy of my commenting behavior. The latter is not really something I’m interested in.
I agree my line isn’t particularly sharp. This is less of a considered policy and more an attempt to articulate my intuitions.
Ending the discussion would be fair.
I’m glad I eventually understood your commenting model, though. I don’t feel like I often have opportunities to explore conflicts of expectations in detail, so this was valuable evidence for updating my overall Internet-discussions-model. (As well as a reminder that other peoples’ frames are both harder to predict and harder to communicate than my intuitions would suggest.) So thanks.
I strong upvoted AllAmericanBreakfast’s comment, so the high relative karma is entirely my fault. I basically strong upvoted because it felt right to me, not thinking about how much karma the other comments in the chain had, so I’m sorry that it didn’t match your assumptions about how karma in threads should work. I don’t think that I’m behaving in an arguments-as-soldiers way, but that’s difficult to prove to myself, let alone to another person.
This is the reasoning that I had, but I’m not strongly attached to it: Thinking back to the original post about takeoff/air conditioning, the original discussion was about whether an AC unit is useful to the consumer, which means that it achieves the goal of an air-conditioned room in a reasonable length of time without being wasteful or expensive. In my experience, AC units generally can achieve their goal of an air-conditioned room, so it seems likely that the considerations from the OP ([0], [1]) aren’t helpful and the tests won’t achieve the purpose from the original post. Even if the AC is not able to air-condition the room to an arbitrary point (perhaps OP’s room has a lot of glass windows or is poorly insulated), it seems like the test will be measuring the wrong things, and that OP didn’t fully consider them.
[0]: “I am assuming that the AC runs continuously (as opposed to getting the room down to target temperature easily, at which point it will shut off until the temperature goes back up). If that’s not the case, I will consider the test invalid, and retry on a hotter day.”
[1]: “Equilibrium indoor temperature was the main thing I cared about when using this air conditioner; electricity is relatively cheap”