If you think I’m irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like.
I’m having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I’m also reasonably confident that my menu of options looks like:
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that “things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!”
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
That seems bad.
And it doesn’t matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn’t read it and doesn’t care, and there aren’t enough other people chiming in. I’ve answered the call that you’re making here half a dozen times, elsewhere. More than once on this very post. But that doesn’t count for anything in your book, and the audience doesn’t see it or care about it. From the audience’s perspective, you made a pretty good comment and I didn’t substantively respond, and that’s not a good look, eh?
I don’t want to keep falling prey to this dynamic. But here, since you asked. I don’t have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I’ll at least provide a short pointer.
Like… this is literally black and white thinking?
Fallacy of the grey, ironic in this case. “Black and white thinking” is not always bad or inappropriate; some things are in fact more or less binary and using the label “black and white thinking” to delegitimize something without checking to what degree it’s actually right to be thinking in binaries is disingenuous and sloppy.
And why would a good and sane person ever want
I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem absurd. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—having the general effect of nudging readers toward a belief that anything they don’t already see must not be real.
And what the fuck with “weeds” and “weeding” where the bad species is locally genocided?
Just because a plant is “non-desired” doesn’t actually mean you need to make it not thrive. It might be mostly harmless. It might be non-obviously commensal. Maybe your initial desires are improper? Have some humility.
Abusing the metaphor. Seizing on one of multiple metaphors, which were headlined explicitly as being attempts to clumsily gesture at or triangulate a thing, and importing a bunch of emotion on an irrelevant axis. Trying to tinge the position you’re disagreeing with as genocide. A social “gotcha.” An applause light. At the end, a hypocritical call for humility, right after not having humility yourself about whether or not weeding is good or necessary. Black and white thinking, right after using the label “black and white” as a rhetorical weapon. You later go on to talk about a property of actual weeds but don’t even try to establish any way in which it’s relevantly analogous.
Maybe your initial desires are improper?
“Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I’m going to do the Carlson/Shapiro thing of ‘just asking a question’ and then not settling it, because I can score points with the implication and then fade into the mists. I don’t have to stick my neck out or put any skin in the game.”
Just because voting is wrong, here and there… like… so what? Some of my best comments have gotten negative votes and some of the ones I’m most ashamed of go to the top. This means that the voters are sometimes dumb. That’s OK. That’s life. Maybe educate them?
Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with “maybe educate them?” as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?
Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don’t make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.
“I failed at this, so I’m going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point.”
We don’t need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn’t too much work to do so. The rest might be mulch, but mulch is good too <3
Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working. And again, a fraction of the effort required to refute, so offering me the choice of “let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent.”
A way you could have engaged with is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just… like… living and letting live, and trying to learn from things you initially can’t appreciate?
Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking “hey, can we talk about where the burden of proof lies?” Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that (“I think you think you’ve made a case here, but I didn’t follow it; can you expand on X?”) just pretending like I hadn’t done the work. Same thing happening with “I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.”
...and I’d like to know what those are, how they can be detected in people or conversations or whatever??
Literally listed in the essay. Literally listed in the essay.
Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
Again the trap; “just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you’ve already done?” Framing things such that non-response will seem like I’m being uncooperative and unreasonable, when in fact you’re just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is “we’ve already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards.”
But anyway, I’m doing it (a little) here. For the hundredth time, even though it won’t actually help much and you’ll still be upvoted and I’ll still be downvoted and I’ll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.
You don’t wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn’t have a preference between [martial arts actions] and [everything else] and certainly shouldn’t enforce that people limit themselves to [martial arts actions] while participating in the class, that’s black-and-white thinking, just let everyone put their ideas into a free marketplace!
Well-kept gardens die by pacifism. If you don’t think that a garden being well-kept is a good thing, that’s fine. Go live in a messy garden. Don’t actively undermine someone trying to clean up a garden that’s trying to be neat.
Alternately, “we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?”
I try to mostly make peace, because I believe conflict and “intent to harm” are very very costly.
Except that you’re actively undermining a thing which is either crucial to this site’s goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.
If we really need to start banning the weeds, for sure and for true… because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector… then I might propose that you be banned?
Strawmanning. Strawmanning.
But I don’t think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.
Except that you’re actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called “LessWrong,” that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.
The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.
“The thing I want to do is strawman what you’re arguing for as ‘proactively harming people for failing to live up to an ideal,’ such that I can gently condescend to you about how it’s costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays.”
“I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it” is a bad plan for making the world good.
STRAWMANNING. “You said [A]. Rather than engage with [A], I’m going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief.”
The specific problem: what’s the inter-rater reliability like for “decisions to weed”? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people… it’s a recipe for disaster.
“I bet it is low, but rather than proposing a test, I’m going to just declare it impossible on the scale of this site.”
I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn’t get a grip on “where to begin.”
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
I am less confident than you are in your points, and I am also of the opinion that both of Jennifer’s comments were posted in good faith. I wanted to say, however, that I strongly appreciate your highlighting of this dynamic, which I myself have observed play out too many times to count. I want to reinforce the norm of pointing out fucky dynamics when they occur, since I think the failure to do this is one of the primary routes through which “not enough concentration of force” can corrode discussion; that alone would have been enough to merit a strong upvote of the parent comment.
(Separately I would also like to offer commiseration, since I perceive that you are Feeling Bad at the moment. It’s not clear to me what the best way is to do this, so I settled for adding this parenthetical note.)
I’d contend that a post can be “in good faith” in the sense of being a sincere attempt to communicate your actual beliefs and your actual reasons for them, while nonetheless containing harmful patterns such as logical fallacies, misleading rhetorical tricks, excessive verbosity, and low effort to understand your conversational partner. Accusing someone of perpetuating harmful dynamics doesn’t necessarily imply bad faith.
In fact, I see this distinction as being central to the OP. Duncan talks about how his brain does bad things on autopilot when his focus slips, and he wants to be called on them so that he can get better at avoiding them.
I want to reinforce the norm of pointing out fucky dynamics when they occur...
Calling this subthread part of a fucky dynamic is begging the question a bit, I think.
If I post something that’s wrong, I’ll get a lot of replies pushing back. It’ll be hard for me to write persuasive responses, since I’ll have to work around the holes in my post and won’t be able to engage the strongest counterarguments directly. I’ll face the exact quadrilemma you quoted, and if I don’t admit my mistake, it’ll be unpleasant for me! But, there’s nothing fucky happening: that’s just how it goes when you’re wrong in a place where lots of bored people can see.
When the replies are arrant, bad faith nonsense, it becomes fucky. But the structure is the same either way: if you were reading a thread you knew nothing about on an object level, you wouldn’t be able to tell whether you were looking at a good dynamic or a bad one.
So, calling this “fucky” is calling JenniferRM’s post “bullshit”. Maybe that’s your model of JenniferRM’s post, in which case I guess I just wasted your time, sorry about that. If not, I hope this was a helpful refinement.
(My sense is that dxu is not referring to JenniferRM’s post, so much as the broader dynamic of how disagreement and engagement unfold, and what incentives that creates.)
Fair enough! My claim is that you zoomed out too far: the quadrilemma you quoted is neither good nor evil, and it occurs in both healthy threads and unhealthy ones.
(Which means that, if you want to have a norm about calling out fucky dynamics, you also need a norm in which people can call each other’s posts “bullshit” without getting too worked up or disrupting the overall social order. I’ve been in communities that worked that way, but it seemed to just be a founder effect; I’m not sure how you’d create that norm in a group with a strong existing culture.)
It’s often useful to have possibly false things pointed out to keep them in mind as hypotheses or even raw material for new hypotheses. When these things are confidently asserted as obviously correct, or given irredeemably faulty justifications, that doesn’t diminish their value in this respect, it just creates a separate problem.
A healthy framing for this activity is to explain theories without claiming their truth or relevance. Here, judging what’s true acts as a “solution” for the problem, while understanding available theories of what might plausibly be true is the phase of discussing the problem. So when others do propose solutions (do claim what’s true), a useful process is to ignore that aspect at first.
Only once there is saturation, when further claims no longer help new hypotheses become thinkable, does this become counterproductive and possibly mostly manipulation of popular opinion.
This word “fucky” is not native to my idiolect, but I’ve heard it from Berkeley folks in the last year or two. Some of the “fuckiness” of the dynamic might be reduced if tapping out were a respectable move in a conversation.
I’m trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days.
I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of lesswrong that I appreciate, but he might not.
If I get voted down, or not upvoted, I don’t care. My goal is to somehow help Duncan and maybe be less confused and not suffer, and also not be interested in “damaging lesswrong”.
I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?
Maybe he’s trying to respond to every response as a potential “cost of doing the great work” which he is willing to shoulder? But… I would expect him to get a sore shoulder though, eventually :-(
If “the general audience” is the causal locus through which a person’s speech act might accomplish something (rather than really actually wanting primarily to change your direct interlocutor’s mind (who you are speaking to “in front of the audience”)) then tapping out of a conversation might “make the original thesis seem to the audience to have less justification” and then, if the audience’s brains were the thing truly of value to you, you might refuse to tap out?
This is a real stress. It can take lots and lots of minutes to respond to everything.
Sometimes problems are so constrained that the solution set is empty, and in this case it might be that “the minutes being too few” is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like “being in the same room with a whiteboard nearby”. It is hard for me to math very well in the absence of shared scratchspace for diagrams.
Other options (that sometimes work) include PMs, or phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I’m coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C’est la vie <3
I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also think I expect it to overall have bad consequences on the discussion, mostly via something like “illusion of transparency” and “trying to force the discussion to happen that you want to happen, and making it hard for people to come in with a different frame”, but am not confident.
I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don’t actually really know what to do about the second one, like, on a deeper level. I feel like “people wanting to have a different type of discussion than the OP wants to have” is a common problem on LW that causes people to have bad experiences, and I would like to fix it. I have some guesses for fixes, but none that seem super promising. I am also not totally confident it’s a huge problem and worth focusing on at the margin.
In light of your recent post on trying to establish a set of norms and guidelines for LessWrong (I think you accidentally posted it before it was finished, since some chunks of it were still missing, but it seemed to elaborate on things you put forth in Stag Hunt), it seems worthwhile to revisit this comment you made about a month ago that I commented on. In my comment I focused on the heat of your comment, and how that heat could lead to misunderstandings. In that context, I was worried that a more incisive critique would be counterproductive. Among other things, it would be increasing the heat in a conversation that I believed to be too heated. The other worries were that you would interpret the critique as an attack that needed defending against, that taking a very critical lens to your words would worsen what I intuited was an already-bad mood, and that this comment was going to take me a bunch of work (Author’s note: I’ve finished writing it. It took about 6 hours to compose, although that includes some breaks). In this comment, I’m going to provide that more incisive critique.
My goal is to engender a greater degree of empathy in you when you engage with commenters that disagree with you. This higher empathy would probably result in lower heat, which would allow you to come closer to the truth since you would receive higher-quality criticism. This is related to what habryka says here, where they say that ”...I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch...”, and Elizabeth says here that “I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.” In order to do this, I’m going to reread your Stag Hunt post, reread the comment chain leading up to your comment, and then do a line-by-line analysis of that comment looking for violations of the guidelines to rationalist discourse that you set in Stag Hunt.
My goal is twofold: to provide evidence that you would be helped by greater empathy (and lower heat) directed towards your critics, and to echo what I see as the meat of Jennifer’s comment; that if I were to adopt the framing I see in Stag Hunt, it would be on net detrimental to the LessWrong community.
Before all that, I want to reiterate: I like the beginning of your comment. Pointing out the rock-and-a-hard-place dilemma that you feel after reading her comment is a valuable insight, but I think that for the most part your comment would be stronger without the heated line-by-line critique of her comment. She gave you an invitation to do this, and so the line-by-line focus on flaws in her comment is appropriate, but the heat you brought and your apparent confidence in assessing her mental state seem unwarranted. While you did not give such permission in that comment of yours, in the post itself you said:
I’d really like it if I were embedded in a supportive ecosystem. If there were clear, immediate, and reliable incentives for doing it right, and clear, immediate, and reliable disincentives for doing it wrong. If there were actual norms (as opposed to nominal ones, norms-in-name-only) that gave me hints and guidance and encouragement. If there were dozens or even hundreds of people around, such that I could be confident that, when I lose focus for a minute, someone else will catch me.
Catch me, and set me straight.
Because I want to be set straight.
Because I actually care about what’s real, and what’s true, and what’s justified, and what’s rational, even though my brain is only kinda-sorta halfway on board, and keeps thinking that the right thing to do is Win.
Sometimes, when people catch me, I wince, and sometimes, I get grumpy, because I’m working with a pretty crappy OS, here. But I try to get past the wince as quickly as possible, and I try to say “thank you,” and I try to make it clear that I mean it, because honestly, the people that catch me are on my side. They are helping me live up to a value that I hold in my own heart, even though I don’t always succeed in embodying it.
I like it when people save me from the mistakes I listed above. I genuinely like it, even if sometimes it takes my brain a moment to catch up.
I think that Jennifer’s comment was, in part, doing this. I agree that her comment was highly flawed, and many of the critiques in your line-by-line are valid, but I expect that the net effect of your comment is to discourage both comments like hers (which it seems to me you think are a net negative contribution to the discussion), and also comments like this one. I should note here a great irony in the fact that this particular comment of yours has garnered the most analysis of this sort by me compared to any of your others. I think this is simply because I take great joy in pointing out what I see as hypocrisies, and so I would be surprised if it generalized to a similar comment to this one that was made in a different context. The rubric I’ll be using to evaluate your comments is going to be the degree to which the comment falls into the mistakes you outline in Stag Hunt:
1. Make no attempt to distinguish between what it feels is true and what is reasonable to believe.
2. Make no attempt to distinguish between what it feels is good and what is actually good.
3. Make wildly overconfident assertions that it doesn’t even believe (that it will e.g. abandon immediately if forced to make a bet).
4. Weaponize equivocation and maximize plausible deniability à la motte-and-bailey, squeezing the maximum amount of wiggle room out of words and phrases. Say things that it knows will be interpreted a certain way, while knowing that they can be defended as if they meant something more innocent.
5. Neglect the difference between what things look like and what they actually are; fail to retain any skepticism on behalf of the possibility that I might be deceived by surface resemblance.
6. Treat a 70% probability of innocence and a 30% probability of guilt as a 100% chance that the person is 30% guilty (i.e. kinda guilty).
7. Wantonly project or otherwise read into people’s actions and statements; evaluate those actions and statements by asking “what would have to be true inside my head, for me to output this behavior?” and then just assume that that’s what’s going on for them.
8. Pretend that it is speaking directly to a specific person while secretly spending the majority of its attention and optimization power on playing to some imagined larger audience.
9. Generate interventions that will make me feel better, regardless of whether or not they’ll solve the problem (and regardless of whether or not there even is a real problem to be solved, versus an ungrounded anxiety/imaginary injury).
I added the numbers because that makes them easier to reference. I am sufficiently confused by 1, 2, and 9 that I don’t think I’d be able to identify them if I saw them, so I’ll ignore those. The rest I’ll summarize in one-or-two word phrases, which will make them easier to reference throughout in a way that is more legible to readers.
3: Overconfidence
4: Motte-and-bailey
5: [blank] (In the process of making this list, I couldn’t figure out a short handle for this that wasn’t just “Overconfidence” or “Strawmanning”, although there does seem to be a difference between this and those. I’m a bit stuck and confused here, presumably I’m lacking some understanding of what this is that would let me compress it.)
6: Failure to track uncertainty. (I’m not sure if this point is intended to be an instance of the broader class of not tracking uncertainty or specific to tracking guilt.)
7: Failure of empathy.
8: Playing to the crowd.
You also accuse Jennifer of strawmanning throughout, which I’ll add to the argumentative tactics that you would like pointed out to you. I take strawmanning to mean “the act of presenting a weaker version of someone’s argument to argue against. This is most noticeable when paraphrasing their statement in words they would not endorse, and then putting those words in quotation marks”.
Before any analysis of your comment, I’d like to summarize Jennifer’s comment in my own words (from memory, I read her comment for the second time about 2 hours ago and I’m doing this while about 1⁄4 of the way through analyzing your comment):
You seem to be advocating for a more conflict-oriented framing of lesswrong discourse than I’m comfortable with. You keep coming back to a weed/weeding framing and a stag hunt, but I don’t think that the rate of comments that violate an unstated set of rationalist norms has a substantive impact on our ability to engage in good discussions. When you propose that weeds be pruned from our garden, I take you to mean that users who violate those norms ought to be banned, and I wonder what metric will be used to do the banning. I suspect it will be on net destructive towards the goal of a prosperous garden for rationalist discourse. Indeed, if people who violate those norms ought to be banned, I suspect that I would advocate for your banning because you do those very things. I’m being critical of your post (“pokey”), and it seems to me that you find it unpleasant. Do we really want the levels of criticality to increase?
This is presumably quite different from what she actually said, but that’s the essence of what I understood her to mean.
Anyways, enough exposition. I’ll be quoting everything you say, line by line, and doing my best to describe the degree to which it lapses into any of the fallacies outlined above. I’ll also provide running commentary to stitch everything together into a cohesive mass. Some lines won’t have any commentary, which I’ll denote with “.”. If I interrupt a paragraph, I’ll end the quote with “...” and begin the next quote with “...”. I’m aiming for either a dispassionate or empathetic tone throughout, wish me great skill:
If you think I’m irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like.
I’m having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I’m also reasonably confident that my menu of options looks like:
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
This makes it easier for me to model you and improves my sense of clarity surrounding the disagreement since I read it as a description of how you see yourself and how you see the disagreement between yourself and Jennifer. This is far and away my favorite part of your post.
In my view, the individual points paint an overly negative picture of the outcomes of your potential options. Had you not responded, I think you would be overestimating the degree to which I and other commenters would conclude that Jennifer was right (relative to how “right” I think she is now, having read your response several times). Had you responded in brief, it’s harder for me to guess how I would have viewed your comment, because you did not respond in brief. Had you only included the part quoted above, for instance, I would have flagged Stag Hunt and Jennifer’s comments as likely rooted in an unstated disagreement about something more fundamental than what the two of you are explicitly talking about, but I wouldn’t have known what it was (although it’s hard to say how much of that is my current view intruding).
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
This option supposes in a parenthetical that there are many things wrong with Jennifer’s comment, but has not yet supported that claim. From a rhetorical standpoint, I see this as justifying the subsequent line-by-line analysis of Jennifer’s comment. It’s also not clear to me why the existence of essays that describe the issues with Jennifer’s comment makes citing those essays in refuting her comment sensation-of-doom-inducing. I’m guessing it’s because you believe that if an essay exists describing the problematic outcomes of a rhetorical/argumentative device you are about to use, you should never use that device?
There might be some Overconfidence in here, since I suspect that (had people not read your comment) Jennifer’s comment would score below the mean in terms of violating site norms, although I don’t know how we would measure this (and therefore turn it into a bet, which would let you examine the degree to which your comment engages in Overconfidence for yourself).
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
I notice that this implies, but does not quite state, that Jennifer’s comment is bullshit.
Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that “things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!”
Strawmanning. Jennifer’s comment seems closer to “while weeds may indeed exist, they are hard to differentiate from the plants the garden is intended to cultivate and may have no negative effects on those plants”.
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
I took Jennifer’s comment as disagreeing with that state of affairs, proposing that weeds might not be easily differentiable from non-weeds, and challenging the weeding/garden framing entirely. I think that Jennifer’s comment would be stronger if she spoke to the specific instances of commenting/upvotes-gone-awry that you highlighted in the parenthetical, although I should note that I found the comments elsewhere that did engage with those examples somewhat confusing.
That seems bad.
And it doesn’t matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn’t read it and doesn’t care, and there aren’t enough other people chiming in. I’ve answered the call that you’re making here half a dozen times, elsewhere. More than once on this very post. But that doesn’t count for anything in your book, and the audience doesn’t see it or care about it. From the audience’s perspective, you made a pretty good comment and I didn’t substantively respond, and that’s not a good look, eh?
This reads to me as a mixture of several things:
A statement about your own mind (i.e. that you feel you are losing a social war), which you are the true authority on.
A statement about the state of LessWrong norms (i.e. that you feel that LessWrong norms are bad, and that your current attempts to improve them have no impact)
A statement about me and others who are reading this exchange between you and Jennifer (that we have not noticed that Jennifer violates some discourse norms in her comment because she is upvoted: a Failure of empathy)
I also have a couple points I’d like to respond to:
When you say “I’ve answered the call that you’re making here...”, I don’t know what call you’re referencing.
You say that “there aren’t enough other people chiming in” in reference to “in-depth analysis of all the ways that a bad comment was bad”. I think that’s what I’m doing here (although I don’t endorse it phrased in those terms). I also feel discouraged w.r.t. making comments like these when I read that, although I’m not sure why. Perhaps I don’t like being told I’m on the losing side of a war. Perhaps I don’t like anticipating that this comment is futile.
I don’t want to keep falling prey to this dynamic. But here, since you asked. I don’t have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I’ll at least provide a short pointer.
Like… this is literally black and white thinking?
Fallacy of the grey, ironic in this case. “Black and white thinking” is not always bad or inappropriate; some things are in fact more or less binary and using the label “black and white thinking” to delegitimize something without checking to what degree it’s actually right to be thinking in binaries is disingenuous and sloppy.
And why would a good and sane person ever want
I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem incredulous. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—...
This seems like a good critique.
...having the general effect of nudging readers toward a belief that anything they don’t already see must not be real.
That isn’t the effect that her rhetoric had on me, so I disagree with you on the object level.
I also think that normatively people ought to be cautious about reasoning about the consequences that other people’s comments might have on an imagined audience, since it seems like the sort of thing that can be leveraged to disparage many comments that are on net beneficial to the platform.
Maybe your initial desires are improper?
“Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I’m going to do the Carlson/Shapiro thing of ‘just asking a question’ and then not settling it, because I can score points with the implication and then fade into the mists. I don’t have to stick my neck out or put any skin in the game.”
Strawmanning, playing to the crowd.
Just because voting is wrong, here and there… like… so what? Some of my best comments have gotten negative votes and some of the ones I’m most ashamed of go to the top. This means that the voters are sometimes dumb. That’s OK. That’s life. Maybe educate them?
Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with “maybe educate them?” as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?
Failure of empathy. It seems to me that Jennifer’s dismissal of the importance of the relative scoring of a couple of comments stemmed from not seeing it tied to the point that the little things matter. There are 2173 words between the paragraph that begins “Yet I nevertheless feel that I encounter resistance of various forms when attempting to point at small things as if they are important...” and the paragraph in which you identify comments that, in your view, had bad outcomes as measured by upvotes (which begins “(I set aside a few minutes to go grab some examples...)”). That’s a fair bit of text across which to track that particular point. Do you expect everyone to track your arguments with that level of fidelity? Do you track others’ arguments that well? I’ll remark that I typically don’t, although I might manage to when it comes to pointing out hypocrisy, because that’s something I have a proclivity for.
I’ll also remark that I read this response as smug and dismissive, although my hypocrisy detector is rather highly tuned right now, and so I’m more likely to read hypocrisy when it isn’t present.
Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don’t make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.
“I failed at this, so I’m going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point.”
Strawmanning of the hypocritical variety.
I take Jennifer to be talking about the fact that the community does not agree with her with respect to voting norms (as measured by the behavior that she observes on LessWrong).
We don’t need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn’t too much work to do so. The rest might be mulch, but mulch is good too <3
Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working...
Her statement here seems to follow from her elsewhere stating that the goal of gardening is to grow the desired plants, and that weeding is largely immaterial to that goal. I agree that she has not provided a causal mechanism by which weeding, mapped back onto the state of LessWrong comment culture, would be immaterial to thriving plant life. However, I don’t recall you making the opposite argument in your OP either. You gestured towards it, and it rested as a background assumption in much of your post, but it’s not one that I remember you arguing for or providing evidence for (beyond the claim that you are better than average at detecting the degree to which such things are problematic). I’m not going to re-re-read your OP to check this, but if you did make this claim I would like to hear it.
… And again, a fraction of the effort required to refute, so offering me the choice of “let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent.”
I did not read her comment as a zinger. Also playing to the audience.
A way you could have engaged with is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just… like… living and letting live, and trying to learn from things you initially can’t appreciate?
Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking hey, can we talk about where the burden of proof lies? Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that (“I think you think you’ve made a case here, but I didn’t follow it; can you expand on X?”) just pretending like I hadn’t done the work...
Hmm, it looks like I also missed your argument in favor of the cost effectiveness of adversarial attacks on the weeds. I recall that your previous essay discussed the value of a concentration of force, which is a reason to support such attacks, but is not an argument about their cost effectiveness (you say “a valuable use of resources” and I say “cost effective”; if there’s a material difference there, let me know).
Same thing happening with “I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.”
Strawmanning.
...and I’d like to know what those are, how they can be detected in people or conversations or whatever??
Literally listed in the essay. Literally listed in the essay.
From memory, you listed fallacies that you yourself tended to fall into, but when it came to evidence taken from other commenters it was a list of links without much context. There’s also a difference between having a list of fallacies and having a mechanism by which those fallacies can be detected and corrected. Perhaps you’re referring to the ideas you list as “bad ideas” at the end, but then I’m confused about the degree to which you actually believe they’re bad ideas. If she is saying that a strategy for selecting the weeds out from the desirable plants is necessary before the call to action (she is saying something probably importantly different, but tracking points of view is getting exhausting), and you have preemptively agreed that you do not have a good mechanism for doing this, then I don’t understand why you disagree with her disagreement here.
Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
Again the trap
I feel I’ve talked about this particular phrase enough.
...”just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you’ve already done?”...
Strawmanning
...Framing things such that non-response will seem like I’m being uncooperative and unreasonable, when in fact you’re just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is “we’ve already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards.”
Failure of empathy, and possibly playing to the audience (to the extent that you are accusing her of playing to the audience without outright saying it).
But anyway, I’m doing it (a little) here...
Good!
...For the hundredth time, even though it won’t actually help much and you’ll still be upvoted and I’ll still be downvoted and I’ll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.
Overconfidence.
You don’t wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn’t have a preference between [martial arts actions] and [everything else] and certainly shouldn’t enforce that people limit themselves to [martial arts actions] while participating in the class, that’s black-and-white thinking, just let everyone put their ideas into a free marketplace!
To the extent that you’re accusing Jennifer of sneering about you caring about rationalist discourse norms on LessWrong, this is a failure of empathy.
Well-kept gardens die by pacifism. If you don’t think that a garden being well-kept is a good thing, that’s fine. Go live in a messy garden. Don’t actively undermine someone trying to clean up a garden that’s trying to be neat.
My understanding of Jennifer’s comment is that she believes you will make the garden messier with the arguments you are putting forth in Stag Hunt.
Alternately, “we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?”
I don’t know the extent to which this is a rhetorical question, but to answer it earnestly: I would expect that telling a user to read the Sequences is an act that takes several orders of magnitude less effort than actually reading the Sequences. I’m not confident about what the relative orders of magnitude should be between the critiquer and the critiquee, but 1:2 (for a total of 1:10 effort) is where my intuition places the ratio. Reading a comment, deciding that it is unworthy of LessWrong discourse norms, and typing “read the sequences” is probably closer to a 1:5 ratio between the orders of magnitude of effort (i.e. it takes 100,000 times as much effort to read the entirety of the Sequences as it does to make such a comment).
I try to mostly make peace, because I believe conflict and “intent to harm” is very very costly.
Except that you’re actively undermining a thing which is either crucial to this site’s goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.
This read to me as Jennifer stating her desire for cooperation, which is a signal that doesn’t come free! It cost her something, at a minimum the effort to type it.
Your response reads to me as throwing that request for cooperation back in her face, and using her stated intent to cooperate as evidence that she is somehow even less cooperative than you expected prior to this statement. It’s possible that you just intended to disagree with her on the material fact of whether she intends cooperation, or to observe that her actions do not align with her words.
If we really need to start banning the weeds, for sure and for true… because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector… then I might propose that you be banned?
Strawmanning. Strawmanning.
I agree that the beginning of that statement is strawmanning.
The core of that statement in my eyes is the last statement; that if she agreed with the argument you put forth in stag hunt as she understands it, she would advocate for your banning.
To avoid further illusions of transparency, I’ll analyze how I would act if I based my actions on what I understand you to argue in Stag Hunt: if I were to suspend my own judgment and base my actions solely on my best attempt to interpret what you advocate for in Stag Hunt, I would strong-downvote your comment, because I see it as much, much more “weed-like” than the average comment on LessWrong. It is a violation of the point of view you put forth in Stag Hunt because it normalizes bad forms (I suspect it succeeds despite this because it is prefaced with a valuable insight). I believe it normalizes bad forms because I see it as strawmanning, projecting statements and actions into others’ minds, pretending to speak to Jennifer while actually speaking mostly to the LessWrong community at large, and failing to retain skepticism that you might have deceived yourself w.r.t. the extent of Jennifer’s violations of rationalist discourse.
Instead, I weakly upvoted it because the first part of it is very useful, and responded to what I saw as the primary fault with the rest of it; that you engaged with Jennifer’s comment from a very conflict-centric point of view which led to high heat. As a result of this framing, you misunderstood most of her comment.
But I don’t think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.
Except that you’re actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called “LessWrong,” that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.
The boundaries that Jennifer is referring to here are boundaries on the extent of the conflict. What you advocate for in Stag Hunt is an expanding of those boundaries, and it was not clear to me upon reading it where those boundaries would end.
The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.
“The thing I want to do is strawman what you’re arguing for as ‘proactively harming people for failing to live up to an ideal,’ such that I can gently condescend to you about how it’s costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays.”
While I agree that Jennifer is strawmanning here, this is the second instance of accusing Jennifer of strawmanning while strawmanning.
“I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it” is a bad plan for making the world good.
STRAWMANNING. “You said [A]. Rather than engage with [A], I’m going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief.”
Same as above.
The specific problem: whats the inter-rater reliability like for “decisions to weed”? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people… its a recipe for disaster.
“I bet it is low, but rather than proposing a test, I’m going to just declare it impossible on the scale of this site.”
Strawmanning. I take Jennifer as reiterating one of her central points here: if we take it as true that there are good comments and bad comments, and that we want to do something about the bad comments, then through what policy are we going to identify those bad comments (leaving aside what we then do about those bad comments)?
You had what you yourself remarked were very bad ideas. Jennifer’s argument rests on the claim that such methods are rare, costly, or nonexistent (though she does not make that claim explicit).
I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn’t get a grip on “where to begin.”
This seems mean to me. You already don’t quote everything she says; you didn’t have to remark on those last two paragraphs.
I’m not sure that going line by line was the most effective way to achieve my goals. It was costly, but I didn’t see another way to get you to internalize the fact that people are regularly taking costly measures to try to improve your model of the world, and I see you as largely ignoring them or accusing them of wrongdoing. Not all critiques of your work can be as comprehensive as mine is here, since, as you pointed out, “it’s easier to produce bullshit than to refute bullshit” (I granted myself this one zinger as motivation for finishing this comment; if others remain in the text, they are not intended).
Meta-question: Is this the sort of thing that’s appropriate to post as a top-level post? It seems fairly specific, but I worked hard on it and I imagine it as encapsulating the virtues that you put forth in Stag Hunt and your hopefully-soon-to-be-posted guidelines for rationalist discourse.
Edited for clarity on the 1:5 point and a few typos.
I’m glad you took the time to respond here, and there is a lot I like about this comment. In particular, I appreciate this comment for:
Being specific without losing sight of the general message of the parent comment.
Sharing how you see your situation at the outset, which puts the tone of the comment in context.
Identifying clear points of disagreement where possible.
There are, however, some points of disagreement I’d like to raise and some possible deleterious consequences I’d like to flag.
I share the concern raised by habryka about the illusion of transparency, which may be inflating your confidence that you are correctly interpreting the intended meaning (and intended consequences) of Jennifer’s words. I’ll go into (possibly too much) detail on one very short example of what you’ve written and how it may involve some misreading of Jennifer’s comment. You quote Jennifer:
Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
and respond:
Again the trap; …
I was also confused about what you meant by epistemic hygiene when finishing the essays. Elsewhere someone asked whether they were one of the ones doing the bad thing you were gesturing towards, which is another question/insecurity I shared (I do not recall how you responded to that question). It is hopefully clear that when I say this here, in this way, it is not a trap for you. It’s a statement of my confusion embedded in a broader point, and I hope you feel no obligation to respond. The point of this exposition isn’t to get clarity on that point; it’s to (hopefully) inspire a shift of perspective.

Your comment struck me as very high heat; that heat reflects a particular perspective. I don’t know exactly what that perspective is, but it seems to me that you saw Jennifer’s comments as threats. To the extent that you see a comment as a threat, the individual components of the comment take on more sinister airs. I tend to post in a calm tone, so most people have difficulty maintaining perspectives that see me as a threat. The perspective I’m hoping to shift you toward is one of collaboration. I am hoping to leverage my nonthreatening way of raising the same confusion as Jennifer so that it is more natural to see that question of Jennifer’s in a nonthreatening light. In doing so, I’m hoping to provide a method by which her comment as a whole takes on a less threatening tone. (Again, I expect this characterization of your perspective to be wrong in important ways—you may not see her comment as precisely “threatening”.)
Framing her question as a trap also implies that it was “set”, i.e. that putting you in a weakened position was part of her intent (although you might not have intended to imply this). It’s possible that Jennifer had this intention, but I don’t know and I suspect that you don’t either. Perhaps you meant that it was a trap in the normative sense, i.e. that because Jennifer included that question you are placed (whether Jennifer intends it or not) in a no-win situation; that it’s a statement about you (i.e. you have been trapped even if no one is a hunter setting traps). In the context of your high-heat comment, however, I as a reader expect that you believe Jennifer intended it as a trap.
I mentioned that I was trying to shift your perspective to one of collaboration, but I never gave the motivation for why. What are some of the negative consequences of the high-heat framing? I expect that you will get less of the kind of feedback you want on your posts. I tend to avoid social conflict—particularly social conflict that is high in heat. This neuroticism makes me disinclined to converse with people who adopt high-heat tones, in part because I worry that I will get a high-heat reaction. I do not think I would attempt to convey a broad-scope confusion/disagreement with you of the type that Jennifer did here. I would probably choose to nitpick, or simply not respond, letting the general confusion remain (in part I’m doing this here: quibbling over tone instead of trying to resolve the major points of confusion with your post; I might try to figure out how to describe my confusion with your post and ask you later). Now, I don’t think you should be optimizing solely to get broad-scope-disagreement/confusion responses from neurotic people like me, but I expect you to want to know how your responses are received. The high heat from this comment, even though it is not directed at me, makes me (very slightly) afraid of you.
This relates back to Elizabeth’s comment elsewhere, where she says
I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.
I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question-the-validity-of-your-thesis variety). Mostly this is a fault of mine, but high-heat responses are part of what I fear when I do not respond (there are lots of other things too, so please do not update strongly on times when I do not respond).
It’s likely that this comment should have contained (or simply been entirely composed of) questions, since it instead relied on a fair bit of speculation on my part (although I tried to make most of my statements about my reading of your comment rather than your comment itself). I’m including some of those questions here instead of doing the hard work of rewriting my comment to include them in more natural places (along with some other questions I have). I also don’t think it would be productive to respond to all of these at once, so respond only to the ones that you feel like:
Did you find my response nonthreatening?
Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
Do you find my characterization of your perspective, where Jennifer’s comment is/was a threat, accurate?
Is a more collaborative perspective available to you at this moment?
If it is, do you find it changes your emotional reaction to Jennifer’s comment?
Do you feel that your comment was high heat?
If so, what goals did the high heat accomplish for you?
And, do you believe they were worth the costs?
Did you find my comment welcome?
I share dxu’s perception that you are Feeling Bad and want to extend you some sympathy (my expectation is that you’ll enjoy a parenthetical here—all the more if I go meta and reference dxu’s parenthetical—so here it is with reference and all).
I was also confused about what you meant by epistemic hygiene when finishing the essays.
In part, this is because a major claim of the OP is “LessWrong has a canon; there’s an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts).” I didn’t set out to describe and define epistemic hygiene within the essay, because one of my foundational assumptions is “this work has already been done; we’re just not holding each other to the available existing standards found in all the highly upvoted common memes.”
It is hopefully clear that when I say this here, in this way, it is not a trap for you.
This is evidence I wasn’t sufficiently clear. The “trap” I was referring to was the bulleted dynamic, whereby I either cede the argument or have to put forth infinite effort. I agree that it wasn’t at all likely deliberately set by Jennifer, but also there are ways to avoid accidentally setting such traps, such as not strawmanning your conversational partner.
(Strawmanning being, basically, redefining what they’re saying in the eyes of the audience. Which they then either tacitly accept or have to actively overturn.)
I think that, in the context of an essay specifically highlighting “people on this site often behave in ways that make it harder to think,” doing a bunch of the stuff Jennifer did is reasonably less forgivable than usual. It’s one thing to, I dunno, use coarse and foul language; it’s another thing to use it in response to somebody who’s just asked that we maybe swear a little less. Especially if the locale for the discussion is named LessSwearing (i.e. the person isn’t randomly bidding for the adoption of some out-of-the-blue standard).
Your comment struck me as very high heat; that heat reflects a particular perspective. I don’t know exactly what that perspective is, but it seems to me that you saw Jennifer’s comments as threats.
Yes. I do not think it was a genuine attempt to engage or converge with me (the way that Said, Elizabeth, johnswentsworth, supposedlyfun, and even agrippa were clearly doing or willing to do), so much as an attempt to condescend, lecture, and belittle, and the crowd of upvotes seemed to indicate either general endorsement of those actions, or a belief that it’s fine/doesn’t matter/isn’t a dealbreaker. This impression has not shifted much on rereads, and is reminiscent of exactly the prior experiences on LW that caused me to feel the need to write the OP in the first place.
Did you find my response nonthreatening?
Yes.
Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
Yes.
Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
It was trust-increasing and felt cooperative throughout.
Do you find my characterization of your perspective, where Jennifer’s comment is/was a threat, accurate?
For the most part, yes.
Is a more collaborative perspective available to you at this moment?
I’m not quite sure what you’re asking, here. I can certainly access a desire to collaborate that is zero percent contingent on agreement with my claims.
If it is, do you find it changes your emotional reaction to Jennifer’s comment?
No, or at least not yet. supposedlyfun, for example, seems at least as “hostile” as Jennifer on the level of agreement, but at least bothered to cut out paragraphs they estimated would be likely to be triggering, and mention that fact. That’s a costly signal of “look, I’m really trying to establish a handshake, here,” and it engendered substantial desire to reciprocate. You, too, are making such costly signals. If Jennifer chose to, that would reframe things somewhat, but in Jennifer’s second comment there was a lot of doubling down.
Do you feel that your comment was high heat?
Yes.
If so, what goals did the high heat accomplish for you?
This presupposes that it was … sufficiently strategic, or something?
Goals that were not necessarily well-achieved by the reply:
Putting object-level critique in a public place, so the norm violations didn’t go unnoticed (I’m not confident anyone else would have objected to the objectionable stuff)
Demonstrating that at least one person will in fact push back if someone does the epistemically sloppy bullying thing (I regularly receive messages thanking me for this service)
And, do you believe they were worth the costs?
I don’t actively believe this, no. It seems like it could still go either way. I would be slightly more surprised by it turning out worth it, than by it turning out not worth it.
This is an example of the illusion-of-transparency issue. Many salient interpretations of what this means (informed by the popular posts on the topic, which are actually not explicitly on this topic) motivate actions that I consider deleterious overall, like punishing half-baked/wild/probably-wrong hypotheses, or things that are not obsequiously disclaimed as such, in a way that’s insensitive to the actual level of danger of being misleading. A more salient cost is nonsense hogging attention, but that doesn’t distinguish it from well-reasoned, clear points that don’t add insight hogging attention.
The actually serious problem is when this is a symptom of not distinguishing epistemic status of ideas on part of the author, but then it’s not at all clear that punishing publication of such thoughts helps the author fix the problem. The personal skill of tagging epistemic status of ideas in one’s own mind correctly is what I think of as epistemic hygiene, but I don’t expect this to be canon, and I’m not sure that there is no serious disagreement on this point with people who also thought about this. For one, the interpretation I have doesn’t specify community norms, and I don’t know what epistemic-hygiene-the-norm should be.
I’m having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I’m also reasonably confident that my menu of options looks like:
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that “things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!”
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
That seems bad.
And it doesn’t matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn’t read it and doesn’t care, and there aren’t enough other people chiming in. I’ve answered the call that you’re making here half a dozen times, elsewhere. More than once on this very post. But that doesn’t count for anything in your book, and the audience doesn’t see it or care about it. From the audience’s perspective, you made a pretty good comment and I didn’t substantively respond, and that’s not a good look, eh?
I don’t want to keep falling prey to this dynamic. But here, since you asked. I don’t have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I’ll at least provide a short pointer.
Fallacy of the grey, ironic in this case. “Black and white thinking” is not always bad or inappropriate; some things are in fact more or less binary and using the label “black and white thinking” to delegitimize something without checking to what degree it’s actually right to be thinking in binaries is disingenuous and sloppy.
I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem implausible. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—having the general effect of nudging readers toward a belief that anything they don’t already see must not be real.
Abusing the metaphor. Seizing on one of multiple metaphors, which were headlined explicitly as being attempts to clumsily gesture at or triangulate a thing, and importing a bunch of emotion on an irrelevant axis. Trying to tinge the position you’re disagreeing with as genocide. A social “gotcha.” An applause light. At the end, a hypocritical call for humility, right after not having humility yourself about whether or not weeding is good or necessary. Black and white thinking, right after using the label “black and white” as a rhetorical weapon. You later go on to talk about a property of actual weeds but don’t even try to establish any way in which it’s relevantly analogous.
“Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I’m going to do the Carlson/Shapiro thing of ‘just asking a question’ and then not settling it, because I can score points with the implication and then fade into the mists. I don’t have to stick my neck out or put any skin in the game.”
Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with “maybe educate them?” as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?
“I failed at this, so I’m going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point.”
Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working. And again, a fraction of the effort required to refute, so offering me the choice of “let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent.”
Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking “hey, can we talk about where the burden of proof lies?” Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that (“I think you think you’ve made a case here, but I didn’t follow it; can you expand on X?”) just pretending like I hadn’t done the work. Same thing happening with “I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.”
Literally listed in the essay. Literally listed in the essay.
Again the trap; “just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you’ve already done?” Framing things such that non-response will seem like I’m being uncooperative and unreasonable, when in fact you’re just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is “we’ve already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards.”
But anyway, I’m doing it (a little) here. For the hundredth time, even though it won’t actually help much and you’ll still be upvoted and I’ll still be downvoted and I’ll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.
You don’t wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn’t have a preference between [martial arts actions] and [everything else] and certainly shouldn’t enforce that people limit themselves to [martial arts actions] while participating in the class, that’s black-and-white thinking, just let everyone put their ideas into a free marketplace!
Well-kept gardens die by pacifism. If you don’t think that a garden being well-kept is a good thing, that’s fine. Go live in a messy garden. Don’t actively undermine someone trying to clean up a garden that’s trying to be neat.
Alternately, “we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?”
Except that you’re actively undermining a thing which is either crucial to this site’s goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.
Strawmanning. Strawmanning.
Except that you’re actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called “LessWrong,” that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.
“The thing I want to do is strawman what you’re arguing for as ‘proactively harming people for failing to live up to an ideal,’ such that I can gently condescend to you about how it’s costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays.”
STRAWMANNING. “You said [A]. Rather than engage with [A], I’m going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief.”
“I bet it is low, but rather than proposing a test, I’m going to just declare it impossible on the scale of this site.”
I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn’t get a grip on “where to begin.”
I am less confident than you are in your points, and I am also of the opinion that both of Jennifer’s comments were posted in good faith. I wanted to say, however, that I strongly appreciate your highlighting of this dynamic, which I myself have observed play out too many times to count. I want to reinforce the norm of pointing out fucky dynamics when they occur, since I think the failure to do this is one of the primary routes through which “not enough concentration of force” can corrode discussion; that alone would have been enough to merit a strong upvote of the parent comment.
(Separately I would also like to offer commiseration, since I perceive that you are Feeling Bad at the moment. It’s not clear to me what the best way is to do this, so I settled for adding this parenthetical note.)
I’d contend that a post can be “in good faith” in the sense of being a sincere attempt to communicate your actual beliefs and your actual reasons for them, while nonetheless containing harmful patterns such as logical fallacies, misleading rhetorical tricks, excessive verbosity, and low effort to understand your conversational partner. Accusing someone of perpetuating harmful dynamics doesn’t necessarily imply bad faith.
In fact, I see this distinction as being central to the OP. Duncan talks about how his brain does bad things on autopilot when his focus slips, and he wants to be called on them so that he can get better at avoiding them.
Calling this subthread part of a fucky dynamic is begging the question a bit, I think.
If I post something that’s wrong, I’ll get a lot of replies pushing back. It’ll be hard for me to write persuasive responses, since I’ll have to work around the holes in my post and won’t be able to engage the strongest counterarguments directly. I’ll face the exact quadrilemma you quoted, and if I don’t admit my mistake, it’ll be unpleasant for me! But, there’s nothing fucky happening: that’s just how it goes when you’re wrong in a place where lots of bored people can see.
When the replies are arrant, bad faith nonsense, it becomes fucky. But the structure is the same either way: if you were reading a thread you knew nothing about on an object level, you wouldn’t be able to tell whether you were looking at a good dynamic or a bad one.
So, calling this “fucky” is calling JenniferRM’s post “bullshit”. Maybe that’s your model of JenniferRM’s post, in which case I guess I just wasted your time, sorry about that. If not, I hope this was a helpful refinement.
(My sense is that dxu is not referring to JenniferRM’s post, so much as the broader dynamic of how disagreement and engagement unfold, and what incentives that creates.)
Endorsed.
Fair enough! My claim is that you zoomed out too far: the quadrilemma you quoted is neither good nor evil, and it occurs in both healthy threads and unhealthy ones.
(Which means that, if you want to have a norm about calling out fucky dynamics, you also need a norm in which people can call each others’ posts “bullshit” without getting too worked up or disrupting the overall social order. I’ve been in communities that worked that way but it seemed to just be a founder effect, I’m not sure how you’d create that norm in a group with a strong existing culture).
It’s often useful to have possibly false things pointed out, to keep them in mind as hypotheses or even raw material for new hypotheses. When these things are confidently asserted as obviously correct, or given irredeemably faulty justifications, that doesn’t diminish their value in this respect; it just creates a separate problem.
A healthy framing for this activity is to explain theories without claiming their truth or relevance. Here, judging what’s true acts as a “solution” for the problem, while understanding available theories of what might plausibly be true is the phase of discussing the problem. So when others do propose solutions (do claim what’s true), a useful process is to ignore that aspect at first.
Only once there is saturation, and more claims don’t help new hypotheses become thinkable, does this become counterproductive, and possibly mostly manipulation of popular opinion.
This word “fucky” is not native to my idiolect, but I’ve heard it from Berkeley folks in the last year or two. Some of the “fuckiness” of the dynamic might be reduced if tapping out were a respectable move in a conversation.
I’m trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days.
I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of LessWrong that I appreciate, but he might not.
If I get voted down, or not upvoted, I don’t care. My goal is to somehow help Duncan and maybe be less confused and not suffer, and also not be interested in “damaging LessWrong”.
I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?
Maybe he’s trying to respond to every response as a potential “cost of doing the great work” which he is willing to shoulder? But… I would expect him to get a sore shoulder eventually :-(
If “the general audience” is the causal locus through which a person’s speech act might accomplish something (rather than really actually wanting primarily to change the mind of your direct interlocutor, the person you are speaking to “in front of the audience”), then tapping out of a conversation might “make the original thesis seem to the audience to have less justification”, and then, if the audience’s brains were the thing truly of value to you, you might refuse to tap out?
This is a real stress. It can take lots and lots of minutes to respond to everything.
Sometimes problems are so constrained that the solution set is empty, and in this case it might be that “the minutes being too few” is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like “being in the same room with a whiteboard nearby”. It is hard for me to math very well in the absence of shared scratchspace for diagrams.
Other options (that sometimes work) include PMs, or phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I’m coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C’est la vie <3
If your goal is to somehow help Duncan, you could start by ceasing to relentlessly and overconfidently proceed with wrong models of me.
I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also think I expect it to overall have bad consequences on the discussion, mostly via something like “illusion of transparency” and “trying to force the discussion to happen that you want to happen, and making it hard for people to come in with a different frame”, but am not confident.
I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don’t actually really know what to do about the second one, like, on a deeper level. I feel like “people wanting to have a different type of discussion than the OP wants to have” is a common problem on LW that causes people to have bad experiences, and I would like to fix it. I have some guesses for fixes, but none that seem super promising. I am also not totally confident it’s a huge problem and worth focussing on at the margin.
In light of your recent post on trying to establish a set of norms and guidelines for LessWrong (I think you accidentally posted it before it was finished, since some chunks of it were still missing, but it seemed to elaborate on things you put forth in Stag Hunt), it seems worthwhile to revisit this comment you made about a month ago that I commented on. In my comment I focused on the heat of your comment, and how that heat could lead to misunderstandings. In that context, I was worried that a more incisive critique would be counterproductive. Among other things, it would be increasing the heat in a conversation that I believed to be too heated. The other worries were that I expected you would interpret the critique as an attack that needed defending, I intuited that you were feeling bad and that taking a very critical lens to your words would worsen your mood, and that this comment was going to take me a bunch of work (Author’s note: I’ve finished writing it. It took about 6 hours to compose, although that includes some breaks). In this comment, I’m going to provide that more incisive critique.
My goal is to engender a greater degree of empathy in you when you engage with commenters who disagree with you. This higher empathy would probably result in lower heat, which would allow you to come closer to the truth, since you would receive higher-quality criticism. This is related to what habryka says here, where they say that “...I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch...”, and what Elizabeth says here: “I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.” In order to do this, I’m going to reread your Stag Hunt post, reread the comment chain leading up to your comment, and then do a line-by-line analysis of that comment, looking for violations of the guidelines to rationalist discourse that you set in Stag Hunt.
My goal is twofold: to provide evidence that you would be helped by greater empathy (and lower heat) directed towards your critics, and to echo what I see as the meat of Jennifer’s comment: that if I were to adopt the framing I see in Stag Hunt, it would be on net detrimental to the LessWrong community.
Before all that, I want to reiterate: I like the beginning of your comment. Pointing out the rock-and-a-hard-place dilemma that you feel after reading her comment is a valuable insight, but I think that for the most part your comment would be stronger without the heat of the line-by-line critique of her comment. She gave you an invitation to do this, and so the line-by-line focus on flaws in her comment is appropriate, but the heat you brought and your apparent confidence in assessing her mental state seem unwarranted. While you did not give such permission in that comment of yours, in the post itself you said:
I think that Jennifer’s comment was, in part, doing this. I agree that her comment was highly flawed, and many of the critiques in your line-by-line are valid, but I expect that the net effect of your comment is to discourage both comments like hers (which it seems to me you think are a net negative contribution to the discussion), and also comments like this one. I should note here a great irony in the fact that this particular comment of yours has garnered the most analysis of this sort by me compared to any of your others. I think this is simply because I take great joy in pointing out what I see as hypocrisies, and so I would be surprised if it generalized to a similar comment to this one that was made in a different context. The rubric I’ll be using to evaluate your comments is going to be the degree to which the comment falls into the mistakes you outline in Stag Hunt:
I added the numbers because that makes them easier to reference. I am sufficiently confused by 1, 2, and 9 that I don’t think I’d be able to identify them if I saw them, so I’ll ignore those. The rest I’ll summarize in one-or-two word phrases, which will make them easier to reference throughout in a way that is more legible to readers.
3: Overconfidence
4: Motte-and-bailey
5: [blank] (In the process of making this list, I couldn’t figure out a short handle for this that wasn’t just “Overconfidence” or “Strawmanning”, although there does seem to be a difference between this and those. I’m a bit stuck and confused here, presumably I’m lacking some understanding of what this is that would let me compress it.)
6: Failure to track uncertainty. (I’m not sure if this point is intended to be an instance of the broader class of not tracking uncertainty or specific to tracking guilt).
7: Failure of empathy.
8: Playing to the crowd.
You also accuse Jennifer of strawmanning throughout, which I’ll add to the argumentative tactics that you would like pointed out to you. I take strawmanning to mean “The act of presenting a weaker version of someone’s argument to argue against. This is most noticeable when paraphrasing their statement in words they would not endorse, and then putting those words in quotation marks”.
Before any analysis of your comment, I’d like to summarize Jennifer’s comment in my own words (from memory, I read her comment for the second time about 2 hours ago and I’m doing this while about 1/4 of the way through analyzing your comment):
This is presumably quite different from what she actually said, but that’s the essence of what I understood her to mean.
Anyways, enough exposition. I’ll be quoting everything you say, line by line, and doing my best to describe the degree to which it lapses into any of the fallacies outlined above. I’ll also provide running commentary to stitch everything together into a cohesive mass. Some lines won’t have any commentary, which I’ll denote with “.”. If I interrupt a paragraph, I’ll end the quote with “...” and begin the next quote with “...”. I’m aiming for either dispassionate or empathetic tone throughout, wish me great skill:
This makes it easier for me to model you and improves my sense of clarity surrounding the disagreement since I read it as a description of how you see yourself and how you see the disagreement between yourself and Jennifer. This is far and away my favorite part of your post.
In my view the individual points take an overly negative view of the outcomes of your potential options. Had you not responded, I think you would be overestimating the degree to which I and other commenters would think that Jennifer is right (relative to how “right” I think she is now, having read your response several times). If you had responded in brief, it’s harder for me to guess how I would have viewed your comment, because you did not respond in brief. Had you only included the part quoted above, for instance, I would have flagged Stag Hunt and Jennifer’s comments as likely rooted in an unstated disagreement about something more fundamental than what the two of you are explicitly talking about, but I wouldn’t know what it was (although it’s hard to say how much of that is my current view intruding).
This comment supposes in a parenthetical that there are many things wrong with Jennifer’s comment, but has not yet fortified that claim. From a rhetorical standpoint, I see this as justifying the subsequent line-by-line analysis of Jennifer’s comment. It’s also not clear to me why the existence of essays that describe the issues with Jennifer’s comment makes the citation of those essays in refuting her comment sensation-of-doom-inducing. I’m guessing it’s because you believe that if an essay exists that describes the problematic outcomes of a rhetorical/argumentative device you are about to use, you should never use that device?
There might be some Overconfidence in here, since I suspect that (had people not read your comment) Jennifer’s comment would score less-than-the-mean in terms of its violation of site norms, although I don’t know how we would measure this (and therefore turn it into a bet, which would let you examine the degree to which your comment engages in Overconfidence for yourself).
I notice that this implies, but does not quite state, that Jennifer’s comment is bullshit.
Strawmanning. Jennifer’s comment seems closer to “while weeds may indeed exist, they are hard to differentiate from the plants the garden is intended to cultivate and may have no negative effects on those plants”.
I took Jennifer’s comment as disagreeing with that state of affairs, proposing that weeds might not be easily differentiable from non-weeds, and challenging the weeding/garden framing entirely. I think that Jennifer’s comment would be stronger if she spoke to the specific instances you highlighted in the parenthetical of commenting/upvotes-gone-awry, although I should note that I found the comments that did that elsewhere somewhat confusing.
This reads to me as a mixture of several things:
A statement about your own mind (i.e. that you feel you are losing a social war), which you are the true authority on.
A statement about the state of LessWrong norms (i.e. that you feel that LessWrong norms are bad, and that your current attempts to improve them have no impact)
A statement about me and others who are reading this exchange between you and Jennifer (that we have not noticed that Jennifer violates some discourse norms in her comment because she is upvoted: a Failure of empathy)
I also have a couple points I’d like to respond to:
When you say “I’ve answered the call that you’re making here...”, I don’t know what call you’re referencing.
You say that “there aren’t enough other people chiming in” in reference to “in-depth analysis of all the ways that a bad comment was bad”. I think that’s what I’m doing here (although I don’t endorse it phrased in those terms). I also feel discouraged w.r.t. making comments like these when I read that, although I’m not sure why. Perhaps I don’t like being told I’m on the losing side of a war. Perhaps I don’t like anticipating that this comment is futile.
This seems like a good critique.
That isn’t the effect that her rhetoric had on me, so I disagree with you on the object level.
I also think that normatively people ought to be cautious about reasoning about the consequences that other people’s comments might have on an imagined audience, since it seems like the sort of thing that can be leveraged to disparage many comments that are on net beneficial to the platform.
Strawmanning, playing to the crowd.
Failure of empathy. It seems to me that Jennifer’s dismissal of the importance of the relative scoring of a couple of comments stemmed from not seeing it tied to the point that the little things matter. There are 2173 words between the paragraph that begins “Yet I nevertheless feel that I encounter resistance of various forms when attempting to point at small things as if they are important...” and the paragraph in which you identify comments that had bad outcomes as measured by upvotes in your view (which begins “(I set aside a few minutes to go grab some examples...)”). That’s a fair bit of time to track that particular point. Do you expect everyone to track your arguments with that level of fidelity? Do you track others’ arguments that well? I’ll remark that I typically don’t, although I might manage to when it comes to pointing out hypocrisy because it’s something that I have a proclivity for.
I’ll also remark that I read this response as smug and dismissive, although my hypocrisy detector is rather highly tuned right now, and so I’m more likely to read hypocrisy when it isn’t present.
Strawmanning of the hypocritical variety.
I take Jennifer to be talking about the fact that the community does not agree with her with respect to voting norms (as measured by the behavior that she observes on LessWrong).
Her statement here seems to follow from her elsewhere stating that the goal of gardening is to grow the desired plants, and that weeding is largely immaterial to that goal. I agree that she has not provided a causal mechanism by which weeding, when brought back to the state of LessWrong comment culture, is immaterial to thriving plant life. However, I don’t recall you making the other argument in your OP. You gestured towards that fact and it rested as a background assumption in much of your post, but it’s not one that I remember you arguing or providing evidence for (beyond the claim that you are better than average at detecting the degree to which such things are problematic). I’m not going to re-re-read your OP to check this, but if you did make this claim I would like to hear it.
I did not read her comment as a zinger. Also playing to the audience.
Hmm, it looks like I also missed your argument in favor of the cost effectiveness of adversarial attacks on the weeds. I recall that your previous essay discussed the value of a concentration of force, which is a reason to support such attacks, but is not an argument about their cost effectiveness (you say “a valuable use of resources” and I say “cost effective”; if there’s a material difference there, let me know).
Strawmanning.
From memory, you listed fallacies that you yourself tended to fall into, but when it came to evidence taken from other commenters it was a list of links without much context. There’s also a difference between having a list of fallacies and having a mechanism by which those fallacies can be detected and corrected. Perhaps you’re referring to the list of ideas that you list as “bad ideas” at the end, but then I’m confused about the degree to which you actually believe they’re bad ideas. If she is saying that a strategy for selecting out weeds from among desirable plants is necessary before the call to action (she is saying something probably importantly different, but tracking points of view is getting exhausting), and you have preemptively agreed that you do not have a good mechanism to do this, then I don’t understand why you disagree with her disagreement here.
I feel I’ve talked about this particular phrase enough.
Strawmanning
Failure of empathy, and possibly playing to the audience (to the extent that you are accusing her of playing to the audience without outright saying it).
Good!
Overconfidence.
To the extent that you’re accusing Jennifer of sneering about you caring about rationalist discourse norms on LessWrong, this is a failure of empathy.
My understanding of Jennifer’s comment is that she believes you will make the garden messier with the arguments you are putting forth in Stag Hunt.
I don’t know the extent to which this is a rhetorical question, but to answer it earnestly I would expect that telling a user to read the sequences is an act that takes several orders of magnitude less effort than actually reading the sequences. I’m not confident about what the relative orders of magnitude should be between the critique-er and the critique-ee, but 1:2 (for a total of 1:10 effort) is where my intuition places the ratio. Reading a comment, deciding that it is unworthy of LessWrong discourse norms, and typing “read the sequences” is probably closer to a 1:5 ratio between the orders of magnitude of effort (i.e. it takes 100,000 times as much effort to read the entirety of the sequences as it does to make such a comment).
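To make the 1:5 claim concrete, here is a quick back-of-the-envelope sketch (the one-minute baseline for typing the reply is my own illustrative assumption, not something stated above):

```python
# Back-of-the-envelope check of the effort ratios above.
# Assumption (mine, not from the comment): typing a "read the sequences"
# reply takes about one minute of effort.
base_minutes = 1

# A 1:5 ratio "between the orders of magnitude" means the costlier activity
# takes 10**5 times as much effort.
ratio = 10 ** 5
sequences_minutes = base_minutes * ratio

# Convert to days of continuous reading to get an intuition for the scale.
days_of_reading = sequences_minutes / 60 / 24

print(ratio)                       # 100000
print(round(days_of_reading, 1))   # 69.4
```

On that (assumed) baseline, a 100,000× multiplier puts the sequences at roughly seventy days of continuous reading, which is the intuition behind calling the reply several orders of magnitude cheaper.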
This read to me as Jennifer stating her desire for cooperation, which is a signal that doesn’t come free! It cost her something, at a minimum the effort to type it.
Your response reads to me as throwing that request for cooperation back in her face and using her intent to cooperate as evidence that she is somehow even less cooperative than you expected prior to this statement. It’s possible that you just intended to disagree with her on the material fact that she intends cooperation, or to observe that her actions do not align with her words.
I agree that the beginning of that statement is strawmanning.
The core of that statement, in my eyes, is its final clause: that if she agreed with the argument you put forth in Stag Hunt as she understands it, she would advocate for your banning.
To avoid further illusions of transparency, I’ll analyze how I would act if I based my actions on what I understand you to argue in Stag Hunt: If I were to suspend my own judgment and base my actions solely on my best attempt to interpret what you advocate for in Stag Hunt, I would strong downvote your comment because I see it as much, much more “weed-like” than the average comment on LessWrong. It is a violation of the point of view you put forth in Stag Hunt because it normalizes bad forms (I suspect it succeeds despite this because it is prefaced with a valuable insight). I believe it normalizes bad forms because I see it as strawmanning, projecting statements and actions into others’ minds, pretending to speak to Jennifer while actually speaking mostly to the LessWrong community at large, and failing to retain skepticism that you might have deceived yourself w.r.t. the extent of Jennifer’s violations of rationalist discourse.
Instead, I weakly upvoted it because the first part of it is very useful, and responded to what I saw as the primary fault with the rest of it; that you engaged with Jennifer’s comment from a very conflict-centric point of view which led to high heat. As a result of this framing, you misunderstood most of her comment.
The boundaries that Jennifer is referring to here are boundaries on the extent of the conflict. What you advocate for in Stag Hunt is an expanding of those boundaries, and it was not clear to me upon reading it where those boundaries would end.
While I agree that Jennifer is strawmanning here, this is the second instance of accusing Jennifer of strawmanning while strawmanning.
Same as above.
Strawmanning. I take Jennifer as reiterating one of her central points here: if we take it as true that there are good comments and bad comments, and that we want to do something about the bad comments, then through what policy are we going to identify those bad comments (leaving aside what we then do about those bad comments)?
You had what you remarked were very bad ideas. Jennifer’s argument rests on the claim that such methods are rare, costly, or do not exist (though she does not make that claim explicit).
This seems mean to me. You already don’t quote everything she says, you don’t have to remark on those last two paragraphs.
I’m not sure that going line by line was the most effective way to achieve my goals. It was costly, but I didn’t see another way to get you to internalize the fact that people are regularly taking costly measures to try to improve your model of the world, and I see you as largely ignoring them or accusing them of wrongdoing. Not all critiques of your work can be as comprehensive as mine is here, since as you pointed out, “it’s easier to produce bullshit than to refute bullshit” (I granted myself this one zinger as motivation for finishing this comment; if others remain in the text, they are not intended).
Meta-question: Is this the sort of thing that’s appropriate to post as a top-level post? It seems fairly specific, but I worked hard on it and I imagine it as encapsulating the virtues that you put forth in Stag Hunt and your hopefully-soon-to-be-posted guidelines for rationalist discourse.
Edited for clarity on the 1:5 point and a few typos.
I’m glad you took the time to respond here, and there is a lot I like about this comment. In particular, I appreciate this comment for:
Being specific without losing sight of the general message of the parent comment.
Sharing how you see your situation at the outset, which puts the tone of the comment in context.
Identifying clear points of disagreement where possible.
There are, however, some points of disagreement I’d like to raise and some possible deleterious consequences I’d like to flag.
I share the concern raised by habryka about the illusion of transparency, which may be increasing your confidence that you are interpreting the intended meaning (and intended consequences) of Jennifer’s words. I’ll go into (possibly too much) detail on one very short example of what you’ve written and how it may involve some misreading of Jennifer’s comment. You quote Jennifer:
and respond:
I was also confused about what you meant by epistemic hygiene when finishing the essays. Elsewhere someone asked whether they were one of the ones doing the bad thing you were gesturing towards, which is another question/insecurity I shared (I do not recall how you responded to that question). It is hopefully clear that when I say this here, in this way, it is not a trap for you. It’s a statement of my confusion embedded in a broader point, and I hope you feel no obligation to respond. The point of this exposition isn’t to get clarity on that point, it’s to (hopefully) inspire a shift of perspective. Your comment struck me as very high heat; that heat reflects a particular perspective. I don’t know exactly what that perspective is, but it seems to me that you saw Jennifer’s comments as threats. To the extent that you see a comment as a threat, the individual components of the comment take on more sinister airs. I tend to post in a calm tone, so most people have difficulty maintaining perspectives that see me as a threat. The perspective I’m hoping to instill in you is one of collaboration. I am hoping to leverage my nonthreatening way of raising the same confusion as Jennifer so that it is more natural to see that question of Jennifer’s in a nonthreatening light. In doing so, I’m hoping to provide a method by which her comment as a whole takes on a less threatening tone. (Again, I expect this characterization of your perspective to be wrong in important ways—you may not see her comment as precisely “threatening”.)
Framing her question as a trap also implies that it was “set”, i.e. that putting you in a weakened position was part of her intent (although you might not have intended to imply this). It’s possible that Jennifer had this intention, but I don’t know and I suspect that you don’t either. Perhaps you meant that it was a trap in the normative sense, i.e. that because Jennifer included that question you are placed (whether Jennifer intends it or not) in a no-win situation; that it’s a statement about you (i.e. you have been trapped even if no one is a hunter setting traps). In the context of your high-heat comment, however, I as a reader expect that you believe Jennifer intended it as a trap.
I mentioned that I was trying to shift your perspective to one of collaboration, but I never gave the motivation for why. What are some of the negative consequences of the high-heat framing? I expect that you will get less of the kind of feedback you want on your posts. I tend to avoid social conflict—particularly social conflict that is high in heat. This neuroticism makes me disinclined to converse with people who adopt high-heat tones, in part because I worry that I will get a high-heat reaction. I do not think I would attempt to convey a broad-scope confusion/disagreement with you of the type that Jennifer did here. I would probably choose to nitpick or simply not respond instead, letting the general confusion remain (in part I do this here, quibbling over tone instead of trying to resolve the major points of confusion with your post; I might try to figure out how to describe my confusion with your post and ask you later). Now, I don’t think you should be optimizing solely to get broad-scope-disagreement/confusion responses from neurotic people like me, but I expect you to want to know how your responses are received. The high heat from this comment, even though it is not directed at me, makes me (very slightly) afraid of you.
This relates back to Elizabeth’s comment elsewhere, where she says
I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question-the-validity-of-your-thesis variety). Mostly this is a fault of mine, but high-heat responses are part of what I fear when I do not respond (there are lots of other things too, so please do not update strongly on times when I do not respond).
It’s likely that this comment should have contained (or simply been entirely composed of) questions, since it instead relied on a fair bit of speculation on my part (although I tried to make most of my statements about my reading of your comment rather than your comment itself). I’m including some of those questions here instead of doing the hard work of rewriting my comment to include them in more natural places (along with some other questions I have). I also don’t think it would be productive to respond to all of these at once, so respond only to the ones that you feel like:
Did you find my response nonthreatening?
Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
Do you find my characterization of your perspective, where Jennifer’s comment is/was a threat, accurate?
Is a more collaborative perspective available to you at this moment?
If it is, do you find it changes your emotional reaction to Jennifer’s comment?
Do you feel that your comment was high heat?
If so, what goals did the high heat accomplish for you?
And, do you believe they were worth the costs?
Did you find my comment welcome?
I share dxu’s perception that you are Feeling Bad and want to extend you some sympathy (my expectation is that you’ll enjoy a parenthetical here—all the more if I go meta and reference dxu’s parenthetical—so here it is with reference and all).
EDIT: jessica → Jennifer. Thanks localdeity.
In part, this is because a major claim of the OP is “LessWrong has a canon; there’s an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts).” I didn’t set out to describe and define epistemic hygiene within the essay, because one of my foundational assumptions is “this work has already been done; we’re just not holding each other to the available existing standards found in all the highly upvoted common memes.”
This is evidence I wasn’t sufficiently clear. The “trap” I was referring to was the bulleted dynamic, whereby I either cede the argument or have to put forth infinite effort. I agree that it wasn’t at all likely deliberately set by Jennifer, but also there are ways to avoid accidentally setting such traps, such as not strawmanning your conversational partner.
(Strawmanning being, basically, redefining what they’re saying in the eyes of the audience. Which they then either tacitly accept or have to actively overturn.)
I think that, in the context of an essay specifically highlighting “people on this site often behave in ways that make it harder to think,” doing a bunch of the stuff Jennifer did is reasonably less forgivable than usual. It’s one thing to, I dunno, use coarse and foul language; it’s another thing to use it in response to somebody who’s just asked that we maybe swear a little less. Especially if the locale for the discussion is named LessSwearing (i.e. the person isn’t randomly bidding for the adoption of some out-of-the-blue standard).
Yes. I do not think it was a genuine attempt to engage or converge with me (the way that Said, Elizabeth, johnswentworth, supposedlyfun, and even agrippa were clearly doing or willing to do), so much as an attempt to condescend, lecture, and belittle, and the crowd of upvotes seemed to indicate either general endorsement of those actions, or a belief that it’s fine/doesn’t matter/isn’t a dealbreaker. This impression has not shifted much on rereads, and is reminiscent of exactly the prior experiences on LW that caused me to feel the need to write the OP in the first place.
Yes.
Yes.
It was trust-increasing and felt cooperative throughout.
For the most part, yes.
I’m not quite sure what you’re asking, here. I can certainly access a desire to collaborate that is zero percent contingent on agreement with my claims.
No, or at least not yet. supposedlyfun, for example, seems at least as “hostile” as Jennifer on the level of agreement, but at least bothered to cut out paragraphs they estimated would be likely to be triggering, and mention that fact. That’s a costly signal of “look, I’m really trying to establish a handshake, here,” and it engendered substantial desire to reciprocate. You, too, are making such costly signals. If Jennifer chose to, that would reframe things somewhat, but in Jennifer’s second comment there was a lot of doubling down.
Yes.
This presupposes that it was … sufficiently strategic, or something?
Goals that were not necessarily well-achieved by the reply:
Putting object-level critique in a public place, so the norm violations didn’t go unnoticed (I’m not confident anyone else would have objected to the objectionable stuff)
Demonstrating that at least one person will in fact push back if someone does the epistemically sloppy bullying thing (I regularly receive messages thanking me for this service)
I don’t actively believe this, no. It seems like it could still go either way. I would be slightly more surprised by it turning out worth it, than by it turning out not worth it.
Yes.
This is an example of the illusion of transparency issue. Many salient interpretations of what this means (informed by the popular posts on the topic, that are actually not explicitly on this topic) motivate actions that I consider deleterious overall, like punishing half-baked/wild/probably-wrong hypotheses or things that are not obsequiously disclaimed as such, in a way that’s insensitive to the actual level of danger of being misleading. A more salient cost is nonsense hogging attention, but that doesn’t distinguish it from well-reasoned clear points that don’t add insight hogging attention.
The actually serious problem is when this is a symptom of not distinguishing epistemic status of ideas on part of the author, but then it’s not at all clear that punishing publication of such thoughts helps the author fix the problem. The personal skill of tagging epistemic status of ideas in one’s own mind correctly is what I think of as epistemic hygiene, but I don’t expect this to be canon, and I’m not sure that there is no serious disagreement on this point with people who also thought about this. For one, the interpretation I have doesn’t specify community norms, and I don’t know what epistemic-hygiene-the-norm should be.