On epistemic grounds: The thing you should be objecting to, in my mind, is not the part where I said that “because I can’t think of a reason for X, that implies that there might not be a reason for X”.
(This isn’t great reasoning, but it is the start of something coherent. (Also, it is an invitation to defend X coherently and directly. (A way you could have engaged with this is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just… like… living and letting live, and trying to learn from things you initially can’t appreciate?)))
On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be “sane and good” because your advice seemed to be violating ideas about conflict and economics that seem normative to me.
This accusation could also have an epistemic component (which would be an ad hominem) if I were saying “you are saying X and are not sane and good and therefore not-X”. But I’m not saying this.
I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.
This is another instance of the whole “weed/conflict/fighting” frame to me, and my claim is that the whole frame is broken for any kind of communal/cooperative truth-seeking enterprise:
There are some things that just do not belong in a subculture that’s trying to figure out what’s true.
...and I’d like to know what those are, how they can be detected in people or conversations or whatever??
If you think I’m irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like. I’m sure I have flaws, I’m just not sure which of my many flaws you think is a problem here. Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
But there is a difference between being irrational and being impolite.
If you think I’m being impolite to you personally, feel free to say how and why (with nuance, etc) and demand an apology. I would probably offer one. I try to mostly make peace, because I believe conflict and “intent to harm” is very very costly.
However, I “poked you” on purpose, because you strongly seem to me to be advocating a general strategy of “all of us being pokey at each other in general for <points at moon> reasons that might be summarized as a natural and normal failure to live up to potentially pragmatically impossible ideals”.
You’re sad about the world. I’m sad about it too. I think a major cause is too much poking. You’re saying the cause is too little poking. So I poked you. Now what?
If we really need to start banning the weeds, for sure and for true… because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector… then I might propose that you be banned?
And obviously this is inimical to your selfish interests. Obviously you would argue against it for this reason if you shared the core frame of “people can’t grow, errors are defection, ban the defectors” because you would also think that you can’t grow, and I can’t grow, and if we’re calling for each other’s banning based on “essentializing pro-conflict social logic” because we both think the other is a “weed”… well… I guess it’s a fight then?
But I don’t think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.
Debate is fun for kids. When I taught a debate team, I tried to make sure it stayed fun, and we won a lot, and years later I heard how the private prep schools tried to share research against us, with all this grinding and library time. (I think maybe they didn’t realize that the important part is just a good skeleton of “what an actual good argument looks like” and hitting people at the center of their argument based on prima facie logical/policy problems.) People can be good sports about disagreements and it helps with educational processes, but it is important to tolerate missteps and focus on incremental improvement in an environment of quick clear feedback <3
The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.
Proposing to pro-actively harm people for pre-systematic or post-systematic reasons is bad because unsystematic negative incentive systems don’t scale. “I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it” is a bad plan for making the world good. That’s a formula for the social equivalent of an autoimmune disorder :-(
The specific problem: what’s the inter-rater reliability like for “decisions to weed”? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people… it’s a recipe for disaster.
You didn’t mention the word “Dunbar”, for example, that I can tell? You don’t seem to have a theory of governance? You don’t seem to have a theory of local normative validity (other than epistemic hygiene)? You didn’t mention “rights” or “elections” or “prices”? You haven’t talked about virtue epistemology or the principle of charity? You don’t seem to be citing studies in organizational psychology? It seems to all route through the “stag hunt” idea (and perhaps an implicit (and as yet largely unrealized in practice) sense that more is possible) and that’s almost all there is? And based on that you seem to be calling for “weeding” and conflict against imperfectly rational people, which… frankly… seems unwise to me.
Do you see how I’m trying to respond to a gestalt posture you’ve adopted here that I think leads to lower utility for individuals in little scuffles where each thinks the other is a white raven (I assume albinism is the unnatural, rare, presumptively deleterious phenotype?) and is trying to “weed them”, and then ultimately (maybe) it could be very bad for the larger community if “conflict-of-interest based fighting (as distinct from epistemic disagreement)” escalates (R0>1.0) instead of decaying (R0<1.0)?
If you think I’m irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like.
I’m having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I’m also reasonably confident that my menu of options looks like:
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that “things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!”
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
That seems bad.
And it doesn’t matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn’t read it and doesn’t care, and there aren’t enough other people chiming in. I’ve answered the call that you’re making here half a dozen times, elsewhere. More than once on this very post. But that doesn’t count for anything in your book, and the audience doesn’t see it or care about it. From the audience’s perspective, you made a pretty good comment and I didn’t substantively respond, and that’s not a good look, eh?
I don’t want to keep falling prey to this dynamic. But here, since you asked. I don’t have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I’ll at least provide a short pointer.
Like… this is literally black and white thinking?
Fallacy of the grey, ironic in this case. “Black and white thinking” is not always bad or inappropriate; some things are in fact more or less binary and using the label “black and white thinking” to delegitimize something without checking to what degree it’s actually right to be thinking in binaries is disingenuous and sloppy.
And why would a good and sane person ever want
I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem implausible. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—having the general effect of nudging readers toward a belief that anything they don’t already see must not be real.
And what the fuck with “weeds” and “weeding” where the bad species is locally genocided?
Just because a plant is “non-desired” doesn’t actually mean you need to make it not thrive. It might be mostly harmless. It might be non-obviously commensal. Maybe your initial desires are improper? Have some humility.
Abusing the metaphor. Seizing on one of multiple metaphors, which were headlined explicitly as being attempts to clumsily gesture at or triangulate a thing, and importing a bunch of emotion on an irrelevant axis. Trying to tinge the position you’re disagreeing with as genocide. A social “gotcha.” An applause light. At the end, a hypocritical call for humility, right after not having humility yourself about whether or not weeding is good or necessary. Black and white thinking, right after using the label “black and white” as a rhetorical weapon. You later go on to talk about a property of actual weeds but don’t even try to establish any way in which it’s relevantly analogous.
Maybe your initial desires are improper?
“Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I’m going to do the Carlson/Shapiro thing of ‘just asking a question’ and then not settling it, because I can score points with the implication and then fade into the mists. I don’t have to stick my neck out or put any skin in the game.”
Just because voting is wrong, here and there… like… so what? Some of my best comments have gotten negative votes and some of the ones I’m most ashamed of go to the top. This means that the voters are sometimes dumb. That’s OK. That’s life. Maybe educate them?
Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with “maybe educate them?” as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?
Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don’t make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.
“I failed at this, so I’m going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point.”
We don’t need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn’t too much work to do so. The rest might be mulch, but mulch is good too <3
Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working. And again, a fraction of the effort required to refute, so offering me the choice of “let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent.”
A way you could have engaged with this is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just… like… living and letting live, and trying to learn from things you initially can’t appreciate?
Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking hey, can we talk about where the burden of proof lies? Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that (“I think you think you’ve made a case here, but I didn’t follow it; can you expand on X?”) just pretending like I hadn’t done the work. Same thing happening with “I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.”
...and I’d like to know what those are, how they can be detected in people or conversations or whatever??
Literally listed in the essay. Literally listed in the essay.
Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
Again the trap; “just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you’ve already done?” Framing things such that non-response will seem like I’m being uncooperative and unreasonable, when in fact you’re just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is “we’ve already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards.”
But anyway, I’m doing it (a little) here. For the hundredth time, even though it won’t actually help much and you’ll still be upvoted and I’ll still be downvoted and I’ll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.
You don’t wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn’t have a preference between [martial arts actions] and [everything else] and certainly shouldn’t enforce that people limit themselves to [martial arts actions] while participating in the class, that’s black-and-white thinking, just let everyone put their ideas into a free marketplace!
Well-kept gardens die by pacifism. If you don’t think that a garden being well-kept is a good thing, that’s fine. Go live in a messy garden. Don’t actively undermine someone trying to clean up a garden that’s trying to be neat.
Alternately, “we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?”
I try to mostly make peace, because I believe conflict and “intent to harm” is very very costly.
Except that you’re actively undermining a thing which is either crucial to this site’s goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.
If we really need to start banning the weeds, for sure and for true… because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector… then I might propose that you be banned?
Strawmanning. Strawmanning.
But I don’t think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.
Except that you’re actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called “LessWrong,” that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.
The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.
“The thing I want to do is strawman what you’re arguing for as ‘proactively harming people for failing to live up to an ideal,’ such that I can gently condescend to you about how it’s costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays.”
“I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it” is a bad plan for making the world good.
STRAWMANNING. “You said [A]. Rather than engage with [A], I’m going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief.”
The specific problem: what’s the inter-rater reliability like for “decisions to weed”? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people… it’s a recipe for disaster.
“I bet it is low, but rather than proposing a test, I’m going to just declare it impossible on the scale of this site.”
I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn’t get a grip on “where to begin.”
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
I am less confident than you are in your points, and I am also of the opinion that both of Jennifer’s comments were posted in good faith. I wanted to say, however, that I strongly appreciate your highlighting of this dynamic, which I myself have observed play out too many times to count. I want to reinforce the norm of pointing out fucky dynamics when they occur, since I think the failure to do this is one of the primary routes through which “not enough concentration of force” can corrode discussion; that alone would have been enough to merit a strong upvote of the parent comment.
(Separately I would also like to offer commiseration, since I perceive that you are Feeling Bad at the moment. It’s not clear to me what the best way is to do this, so I settled for adding this parenthetical note.)
I’d contend that a post can be “in good faith” in the sense of being a sincere attempt to communicate your actual beliefs and your actual reasons for them, while nonetheless containing harmful patterns such as logical fallacies, misleading rhetorical tricks, excessive verbosity, and low effort to understand your conversational partner. Accusing someone of perpetuating harmful dynamics doesn’t necessarily imply bad faith.
In fact, I see this distinction as being central to the OP. Duncan talks about how his brain does bad things on autopilot when his focus slips, and he wants to be called on them so that he can get better at avoiding them.
I want to reinforce the norm of pointing out fucky dynamics when they occur...
Calling this subthread part of a fucky dynamic is begging the question a bit, I think.
If I post something that’s wrong, I’ll get a lot of replies pushing back. It’ll be hard for me to write persuasive responses, since I’ll have to work around the holes in my post and won’t be able to engage the strongest counterarguments directly. I’ll face the exact quadrilemma you quoted, and if I don’t admit my mistake, it’ll be unpleasant for me! But, there’s nothing fucky happening: that’s just how it goes when you’re wrong in a place where lots of bored people can see.
When the replies are arrant, bad faith nonsense, it becomes fucky. But the structure is the same either way: if you were reading a thread you knew nothing about on an object level, you wouldn’t be able to tell whether you were looking at a good dynamic or a bad one.
So, calling this “fucky” is calling JenniferRM’s post “bullshit”. Maybe that’s your model of JenniferRM’s post, in which case I guess I just wasted your time, sorry about that. If not, I hope this was a helpful refinement.
(My sense is that dxu is not referring to JenniferRM’s post, so much as the broader dynamic of how disagreement and engagement unfold, and what incentives that creates.)
Fair enough! My claim is that you zoomed out too far: the quadrilemma you quoted is neither good nor evil, and it occurs in both healthy threads and unhealthy ones.
(Which means that, if you want to have a norm about calling out fucky dynamics, you also need a norm in which people can call each other’s posts “bullshit” without getting too worked up or disrupting the overall social order. I’ve been in communities that worked that way but it seemed to just be a founder effect, I’m not sure how you’d create that norm in a group with a strong existing culture).
It’s often useful to have possibly false things pointed out to keep them in mind as hypotheses or even raw material for new hypotheses. When these things are confidently asserted as obviously correct, or given irredeemably faulty justifications, that doesn’t diminish their value in this respect, it just creates a separate problem.
A healthy framing for this activity is to explain theories without claiming their truth or relevance. Here, judging what’s true acts as a “solution” for the problem, while understanding available theories of what might plausibly be true is the phase of discussing the problem. So when others do propose solutions, that is, do claim what’s true, a useful process is to ignore that aspect at first.
Only once there is saturation, and more claims don’t help new hypotheses become thinkable, does this become counterproductive and possibly mostly manipulation of popular opinion.
This word “fucky” is not native to my idiolect, but I’ve heard it from Berkeley folks in the last year or two. Some of the “fuckiness” of the dynamic might be reduced if tapping out were a respectable move in a conversation.
I’m trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days.
I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of lesswrong that I appreciate, but he might not.
If I get voted down, or not upvoted, I don’t care. My goal is to somehow help Duncan maybe be less confused and not suffer, and I am also not interested in “damaging lesswrong”.
I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?
Maybe he’s trying to respond to every response as a potential “cost of doing the great work” which he is willing to shoulder? But… I would expect him to get a sore shoulder eventually :-(
If “the general audience” is the causal locus through which a person’s speech act might accomplish something (rather than really actually wanting primarily to change your direct interlocutor’s mind (who you are speaking to “in front of the audience”)) then tapping out of a conversation might “make the original thesis seem to the audience to have less justification” and then, if the audience’s brains were the thing truly of value to you, you might refuse to tap out?
This is a real stress. It can take lots and lots of minutes to respond to everything.
Sometimes problems are so constrained that the solution set is empty, and in this case it might be that “the minutes being too few” is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like “being in the same room with a whiteboard nearby”. It is hard for me to math very well in the absence of shared scratchspace for diagrams.
Other options (that sometimes work) include PMs, or phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I’m coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C’est la vie <3
I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also think I expect it to overall have bad consequences on the discussion, mostly via something like “illusion of transparency” and “trying to force the discussion to happen that you want to happen, and making it hard for people to come in with a different frame”, but am not confident.
I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don’t actually really know what to do about the second one, like, on a deeper level. I feel like “people wanting to have a different type of discussion than the OP wants to have” is a common problem on LW that causes people to have bad experiences, and I would like to fix it. I have some guesses for fixes, but none that seem super promising. I am also not totally confident it’s a huge problem and worth focussing on at the margin.
In light of your recent post on trying to establish a set of norms and guidelines for LessWrong (I think you accidentally posted it before it was finished, since some chunks of it were still missing, but it seemed to elaborate on things you put forth in Stag Hunt), it seems worthwhile to revisit this comment you made about a month ago that I commented on. In my comment I focused on the heat of your comment, and how that heat could lead to misunderstandings. In that context, I was worried that a more incisive critique would be counterproductive. Among other things, it would be increasing the heat in a conversation that I believed to be too heated. The other worries were that I expected that you would interpret the critique as an attack that needed defending against, I intuited that you were feeling bad and that taking a very critical lens to your words would worsen your mood, and that this comment was going to take me a bunch of work (Author’s note: I’ve finished writing it. It took about 6 hours to compose, although that includes some breaks). In this comment, I’m going to provide that more incisive critique.
My goal is to engender a greater degree of empathy in you when you engage with commenters that disagree with you. This higher empathy would probably result in lower heat, which would allow you to come closer to the truth since you would receive higher quality criticism. This is related to what habryka says here, where they say that “...I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch...”, and Elizabeth says here that “I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.” In order to do this, I’m going to reread your Stag Hunt post, reread the comment chain leading up to your comment, and then do a line-by-line analysis of that comment looking for violations of the guidelines for rationalist discourse that you set in Stag Hunt.
My goal is twofold: to provide evidence that you would be helped by greater empathy (and lower heat) directed towards your critics, and to echo what I see as the meat of Jennifer’s comment: that if I were to adopt the framing I see in Stag Hunt, it would be on net detrimental to the LessWrong community.
Before all that, I want to reiterate: I like the beginning of your comment. Pointing out the rock-and-a-hard-place dilemma that you feel after reading her comment is a valuable insight, but I think that for the most part your comment would be stronger without the heated line-by-line critique of her comment. She gave you that invitation to do this and so the line-by-line focus on flaws in her comment is appropriate, but the heat you brought and your apparent confidence in assessing her mental state seem unwarranted. While you did not give such permission in that comment of yours, in the post itself you said:
I’d really like it if I were embedded in a supportive ecosystem. If there were clear, immediate, and reliable incentives for doing it right, and clear, immediate, and reliable disincentives for doing it wrong. If there were actual norms (as opposed to nominal ones, norms-in-name-only) that gave me hints and guidance and encouragement. If there were dozens or even hundreds of people around, such that I could be confident that, when I lose focus for a minute, someone else will catch me.
Catch me, and set me straight.
Because I want to be set straight.
Because I actually care about what’s real, and what’s true, and what’s justified, and what’s rational, even though my brain is only kinda-sorta halfway on board, and keeps thinking that the right thing to do is Win.
Sometimes, when people catch me, I wince, and sometimes, I get grumpy, because I’m working with a pretty crappy OS, here. But I try to get past the wince as quickly as possible, and I try to say “thank you,” and I try to make it clear that I mean it, because honestly, the people that catch me are on my side. They are helping me live up to a value that I hold in my own heart, even though I don’t always succeed in embodying it.
I like it when people save me from the mistakes I listed above. I genuinely like it, even if sometimes it takes my brain a moment to catch up.
I think that Jennifer’s comment was, in part, doing this. I agree that her comment was highly flawed, and many of the critiques in your line-by-line are valid, but I expect that the net effect of your comment is to discourage both comments like hers (which it seems to me you think are a net negative contribution to the discussion), and also comments like this one. I should note here a great irony in the fact that this particular comment of yours has garnered the most analysis of this sort by me compared to any of your others. I think this is simply because I take great joy in pointing out what I see as hypocrisies, and so I would be surprised if it generalized to a similar comment to this one that was made in a different context. The rubric I’ll be using to evaluate your comments is going to be the degree to which the comment falls into the mistakes you outline in Stag Hunt:
1. Make no attempt to distinguish between what it feels is true and what is reasonable to believe.
2. Make no attempt to distinguish between what it feels is good and what is actually good.
3. Make wildly overconfident assertions that it doesn’t even believe (that it will e.g. abandon immediately if forced to make a bet).
4. Weaponize equivocation and maximize plausible deniability à la motte-and-bailey, squeezing the maximum amount of wiggle room out of words and phrases. Say things that it knows will be interpreted a certain way, while knowing that they can be defended as if they meant something more innocent.
5. Neglect the difference between what things look like and what they actually are; fail to retain any skepticism on behalf of the possibility that I might be deceived by surface resemblance.
6. Treat a 70% probability of innocence and a 30% probability of guilt as a 100% chance that the person is 30% guilty (i.e. kinda guilty).
7. Wantonly project or otherwise read into people’s actions and statements; evaluate those actions and statements by asking “what would have to be true inside my head, for me to output this behavior?” and then just assume that that’s what’s going on for them.
8. Pretend that it is speaking directly to a specific person while secretly spending the majority of its attention and optimization power on playing to some imagined larger audience.
9. Generate interventions that will make me feel better, regardless of whether or not they’ll solve the problem (and regardless of whether or not there even is a real problem to be solved, versus an ungrounded anxiety/imaginary injury).
I added the numbers because that makes them easier to reference. I am sufficiently confused by 1, 2, and 9 that I don’t think I’d be able to identify them if I saw them, so I’ll ignore those. The rest I’ll summarize in one-or-two word phrases, which will make them easier to reference throughout in a way that is more legible to readers.
3: Overconfidence
4: Motte-and-bailey
5: [blank] (In the process of making this list, I couldn’t figure out a short handle for this that wasn’t just “Overconfidence” or “Strawmanning”, although there does seem to be a difference between this and those. I’m a bit stuck and confused here, presumably I’m lacking some understanding of what this is that would let me compress it.)
6: Failure to track uncertainty. (I’m not sure if this point is intended to be an instance of the broader class of not tracking uncertainty or specific to tracking guilt).
7: Failure of empathy.
8: Playing to the crowd.
You also accuse Jennifer of strawmanning throughout, which I’ll add to the argumentative tactics that you would like pointed out to you. I take strawmanning to mean “The act of presenting a weaker version of someone’s argument to argue against. This is most noticeable when paraphrasing their statement in words they would not endorse, and then putting those words in quotation marks”.
Before any analysis of your comment, I’d like to summarize Jennifer’s comment in my own words (from memory, I read her comment for the second time about 2 hours ago and I’m doing this while about 1⁄4 of the way through analyzing your comment):
You seem to be advocating for a more conflict-oriented framing of lesswrong discourse than I’m comfortable with. You keep coming back to a weed/weeding framing and a stag hunt, but I don’t think that the rate of comments that violate an unstated set of rationalist norms has a substantive impact on our ability to engage in good discussions. When you propose that weeds be pruned from our garden, I take you to mean that users who violate those norms ought to be banned, and I wonder what metric will be used to do the banning. I suspect it will be on net destructive towards the goal of a prosperous garden for rationalist discourse. Indeed, if people who violate those norms ought to be banned, I suspect that I would advocate for your banning because you do those very things. I’m being critical of your post (“pokey”), and it seems to me that you find it unpleasant. Do we really want the levels of criticality to increase?
This is presumably quite different from what she actually said, but that’s the essence of what I understood her to mean.
Anyways, enough exposition. I’ll be quoting everything you say, line by line, and doing my best to describe the degree to which it lapses into any of the fallacies outlined above. I’ll also provide running commentary to stitch everything together into a cohesive mass. Some lines won’t have any commentary, which I’ll denote with “.”. If I interrupt a paragraph, I’ll end the quote with “...” and begin the next quote with “...”. I’m aiming for either dispassionate or empathetic tone throughout, wish me great skill:
If you think I’m irrational, please enumerate the ways. Please be nuanced and detailed and unconfused. List 100 little flaws if you like.
I’m having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I’m also reasonably confident that my menu of options looks like:
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round-off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
This makes it easier for me to model you and improves my sense of clarity surrounding the disagreement since I read it as a description of how you see yourself and how you see the disagreement between yourself and Jennifer. This is far and away my favorite part of your post.
In my view the individual points take an overly negative view of the outcomes of your potential options. If you didn’t respond, I think you are overestimating the degree to which I and other commenters will think that Jennifer is right (relative to how “right” I think she is now, having read your response several times). If you responded in brief, it’s harder for me to guess how I would view your comment because you did not respond in brief. Had you only included the part quoted above, for instance, I would have flagged Stag Hunt and Jennifer’s comments as likely rooted in an unstated disagreement about something more fundamental than what the two of you are explicitly talking about, but I wouldn’t know what it was (although it’s hard to say how much of that is my current view intruding).
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
This comment supposes in a parenthetical that there are many things wrong with Jennifer’s comment, but has not yet fortified that claim. From a rhetorical standpoint, I see this as justifying the subsequent line-by-line analysis of Jennifer’s comment. It’s also not clear to me why the existence of essays that describe the issues with Jennifer’s comment makes the citation of those essays in refuting her comment sensation-of-doom inducing. I’m guessing it’s because you believe that if an essay exists that describes the problematic outcomes of a rhetorical/argumentative device you are about to use, you should never use that device?
There might be some Overconfidence in here, since I suspect that (had people not read your comment) Jennifer’s comment would score less-than-the-mean in terms of its violation of site norms, although I don’t know how we would measure this (and therefore turn it into a bet, which would let you examine the degree to which your comment engages in Overconfidence for yourself).
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
I notice that this implies, but does not quite state, that Jennifer’s comment is bullshit.
Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that “things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!”
Strawmanning. Jennifer’s comment seems closer to “while weeds may indeed exist, they are hard to differentiate from the plants the garden is intended to cultivate and may have no negative effects on those plants”.
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
I took Jennifer’s comment as disagreeing with that state of affairs, proposing that weeds might not be easily differentiable from non-weeds, and challenging the weeding/garden framing entirely. I think that Jennifer’s comment would be stronger if she spoke to the specific instances you highlighted in the parenthetical of commenting/upvotes-gone-awry, although I should note that I found the comments that did that elsewhere somewhat confusing.
That seems bad.
And it doesn’t matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn’t read it and doesn’t care, and there aren’t enough other people chiming in. I’ve answered the call that you’re making here half a dozen times, elsewhere. More than once on this very post. But that doesn’t count for anything in your book, and the audience doesn’t see it or care about it. From the audience’s perspective, you made a pretty good comment and I didn’t substantively respond, and that’s not a good look, eh?
This reads to me as a mixture of several things:
A statement about your own mind (i.e. that you feel you are losing a social war), which you are the true authority on.
A statement about the state of LessWrong norms (i.e. that you feel that LessWrong norms are bad, and that your current attempts to improve them have no impact)
A statement about me and others who are reading this exchange between you and Jennifer (that we have not noticed that Jennifer violates some discourse norms in her comment because she is upvoted: a Failure of empathy)
I also have a couple points I’d like to respond to:
When you say “I’ve answered the call that you’re making here...”, I don’t know what call you’re referencing.
You say that “there aren’t enough other people chiming in” in reference to “in-depth analysis of all the ways that a bad comment was bad”. I think that’s what I’m doing here (although I don’t endorse it phrased in those terms). I also feel discouraged w.r.t. making comments like these when I read that, although I’m not sure why. Perhaps I don’t like being told I’m on the losing side of a war. Perhaps I don’t like anticipating that this comment is futile.
I don’t want to keep falling prey to this dynamic. But here, since you asked. I don’t have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I’ll at least provide a short pointer.
Like… this is literally black and white thinking?
Fallacy of the grey, ironic in this case. “Black and white thinking” is not always bad or inappropriate; some things are in fact more or less binary and using the label “black and white thinking” to delegitimize something without checking to what degree it’s actually right to be thinking in binaries is disingenuous and sloppy.
And why would a good and sane person ever want
I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem implausible. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—...
This seems like a good critique.
...having the general effect of nudging readers toward a belief that anything they don’t already see must not be real.
That isn’t the effect that her rhetoric had on me, so I disagree with you on the object level.
I also think that normatively people ought to be cautious about reasoning about the consequences that other people’s comments might have on an imagined audience, since it seems like the sort of thing that can be leveraged to disparage many comments that are on net beneficial to the platform.
Maybe your initial desires are improper?
“Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I’m going to do the Carlson/Shapiro thing of ‘just asking a question’ and then not settling it, because I can score points with the implication and then fade into the mists. I don’t have to stick my neck out or put any skin in the game.”
Strawmanning, playing to the crowd.
Just because voting is wrong, here and there… like… so what? Some of my best comments have gotten negative votes and some of the ones I’m most ashamed of go to the top. This means that the voters are sometimes dumb. That’s OK. That’s life. Maybe educate them?
Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with “maybe educate them?” as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?
Failure of empathy. It seems to me that Jennifer’s dismissal of the importance of the relative scoring of a couple of comments stemmed from not seeing it tied to the point that the little things matter. There are 2173 words between the paragraph that begins “Yet I nevertheless feel that I encounter resistance of various forms when attempting to point at small things as if they are important...” and the paragraph in which you identify comments that had bad outcomes as measured by upvotes in your view (which begins “(I set aside a few minutes to go grab some examples...)”). That’s a fair bit of time to track that particular point. Do you expect everyone to track your arguments with that level of fidelity? Do you track others’ arguments that well? I’ll remark that I typically don’t, although I might manage to when it comes to pointing out hypocrisy because it’s something that I have a proclivity for.
I’ll also remark that I read this response as smug and dismissive, although my hypocrisy detector is rather highly tuned right now, and so I’m more likely to read hypocrisy when it isn’t present.
Lesswrong never understood this stuff, and I once thought I could/should teach it but then I just drifted away instead. I feel bad about that. Please don’t make this place worse again by caring about points for reasons other than making comments occur in the right order on the page.
“I failed at this, so I’m going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point.”
Strawmanning of the hypocritical variety.
I take Jennifer to be talking about the fact that the community does not agree with her with respect to voting norms (as measured by the behavior that she observes on LessWrong).
We don’t need to organize a stag hunt to exterminate the weeds. We need to plant good seeds and get them into the sunlight at the top of the trellis, so long as it isn’t too much work to do so. The rest might be mulch, but mulch is good too <3
Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working...
Her statement here seems to follow from her elsewhere stating that the goal of gardening is to grow the desired plants, and that weeding is largely immaterial to that goal. I agree that she has not provided a causal mechanism by which weeding, when brought back to the state of LessWrong comment culture, is immaterial to thriving plant life. However, I don’t recall you making the other argument in your OP. You gestured towards that fact and it rested as a background assumption in much of your post, but it’s not one that I remember you arguing or providing evidence for (beyond the claim that you are better than average at detecting the degree to which such things are problematic). I’m not going to re-re-read your OP to check this, but if you did make this claim I would like to hear it.
… And again, a fraction of the effort required to refute, so offering me the choice of “let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent.”
I did not read her comment as a zinger. Also playing to the audience.
A way you could have engaged with this is by explaining why adversarial attacks on the non-desired weeds would be a good use of resources rather than just… like… living and letting live, and trying to learn from things you initially can’t appreciate?
Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking hey, can we talk about where the burden of proof lies? Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that (“I think you think you’ve made a case here, but I didn’t follow it; can you expand on X?”) just pretending like I hadn’t done the work...
Hmm, it looks like I also missed your argument in favor of the cost effectiveness of adversarial attacks on the weeds. I recall that your previous essay discussed the value of a concentration of force, which is a reason to support such attacks, but is not an argument about their cost effectiveness (you say a valuable use of resources, and I use cost effective. If there’s a material difference there, let me know).
Same thing happening with “I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.”
Strawmanning.
...and I’d like to know what those are, how they can be detected in people or conversations or whatever??
Literally listed in the essay. Literally listed in the essay.
From memory, you listed fallacies that you yourself tended to fall into, but when it came to evidence taken from other commenters it was a list of links without much context. There’s also a difference between having a list of fallacies and having a mechanism by which those fallacies can be detected and corrected. Perhaps you’re referring to the list of ideas that you list as “bad ideas” at the end, but then I’m confused about the degree to which you actually believe they’re bad ideas. If she is saying that the strategy of selecting for weeds against desirable plants is necessary before the call to action (she is saying something probably importantly different, but tracking points of view is getting exhausting), and you have preemptively agreed that you do not have a good mechanism to do this, then I don’t understand why you disagree with her disagreement here.
Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
Again the trap
I feel I’ve talked about this particular phrase enough.
...”just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you’ve already done?”...
Strawmanning
...Framing things such that non-response will seem like I’m being uncooperative and unreasonable, when in fact you’re just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is “we’ve already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards.”
Failure of empathy, and possibly playing to the audience (to the extent that you are accusing her of playing to the audience without outright saying it).
But anyway, I’m doing it (a little) here...
Good!
...For the hundredth time, even though it won’t actually help much and you’ll still be upvoted and I’ll still be downvoted and I’ll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.
Overconfidence.
You don’t wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn’t have a preference between [martial arts actions] and [everything else] and certainly shouldn’t enforce that people limit themselves to [martial arts actions] while participating in the class, that’s black-and-white thinking, just let everyone put their ideas into a free marketplace!
To the extent that you’re accusing Jennifer of sneering about you caring about rationalist discourse norms on LessWrong, this is a failure of empathy.
Well-kept gardens die by pacifism. If you don’t think that a garden being well-kept is a good thing, that’s fine. Go live in a messy garden. Don’t actively undermine someone trying to clean up a garden that’s trying to be neat.
My understanding of Jennifer’s comment is that she believes you will make the garden messier with the arguments you are putting forth in Stag Hunt.
Alternately, “we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?”
I don’t know the extent to which this is a rhetorical question, but to answer it earnestly: I would expect that telling a user to read the Sequences takes several orders of magnitude less effort than actually reading the Sequences. I’m not confident about what the relative orders of magnitude should be between the critique-er and the critique-ee, but 1:2 in orders of magnitude (i.e. roughly 1:10 in raw effort) is where my intuition places the ratio. Reading a comment, deciding that it is unworthy of LessWrong discourse norms, and typing “read the sequences” is probably closer to a 1:5 ratio between the orders of magnitude of effort (i.e. it takes something like 100,000 times as much effort to read the entirety of the Sequences as it does to make such a comment).
I try to mostly make peace, because I believe conflict and “intent to harm” is very very costly.
Except that you’re actively undermining a thing which is either crucial to this site’s goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.
This read to me as Jennifer stating her desire for cooperation, which is a signal that doesn’t come free! It cost her something, at a minimum the effort to type it.
Your response reads to me as throwing that request for cooperation back in her face and using her intent to cooperate as evidence that she is somehow even less cooperative than you expected prior to this statement. It’s possible that you just intended to disagree with her on the material fact that she intends cooperation, or to observe that her actions do not align with her words.
If we really need to start banning the weeds, for sure and for true… because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector… then I might propose that you be banned?
Strawmanning. Strawmanning.
I agree that the beginning of that statement is strawmanning.
The core of that statement, in my eyes, is its final clause: that if she agreed with the argument you put forth in Stag Hunt as she understands it, she would advocate for your banning.
To avoid further illusions of transparency, I’ll analyze how I would act if I based my actions on what I understand you to argue in Stag Hunt: If I were to suspend my own judgment and base my actions solely on my best attempt to interpret what you advocate for in Stag Hunt, I would strong-downvote your comment because I see it as much, much more “weed-like” than the average comment on LessWrong. It is a violation of the point of view you put forth in Stag Hunt because it normalizes bad forms (I suspect it succeeds despite this because it is prefaced with a valuable insight). I believe it normalizes bad forms because I see it as strawmanning, projecting statements and actions into others’ minds, pretending to speak to Jennifer while actually speaking mostly to the LessWrong community at large, and failing to retain skepticism that you might have deceived yourself w.r.t. the extent of Jennifer’s violations of rationalist discourse.
Instead, I weakly upvoted it because the first part of it is very useful, and responded to what I saw as the primary fault with the rest of it: that you engaged with Jennifer’s comment from a very conflict-centric point of view, which led to high heat. As a result of this framing, you misunderstood most of her comment.
But I don’t think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.
Except that you’re actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called “LessWrong,” that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.
The boundaries that Jennifer is referring to here are boundaries on the extent of the conflict. What you advocate for in Stag Hunt is an expanding of those boundaries, and it was not clear to me upon reading it where those boundaries would end.
The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.
“The thing I want to do is strawman what you’re arguing for as ‘proactively harming people for failing to live up to an ideal,’ such that I can gently condescend to you about how it’s costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays.”
While I agree that Jennifer is strawmanning here, this is the second instance of accusing Jennifer of strawmanning while strawmanning.
“I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it” is a bad plan for making the world good.
STRAWMANNING. “You said [A]. Rather than engage with [A], I’m going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief.”
Same as above.
The specific problem: whats the inter-rater reliability like for “decisions to weed”? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people… its a recipe for disaster.
“I bet it is low, but rather than proposing a test, I’m going to just declare it impossible on the scale of this site.”
Strawmanning. I take Jennifer as reiterating one of her central points here: if we take it as true that there are good comments and bad comments, and that we want to do something about the bad comments, then through what policy are we going to identify those bad comments (leaving aside what we then do about those bad comments)?
You offered some ideas for this, which you yourself remarked were very bad ideas. Jennifer’s argument rests on the claim that good methods of this kind are rare, costly, or do not exist (though she does not make that claim explicit).
I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn’t get a grip on “where to begin.”
This seems mean to me. You already don’t quote everything she says; you don’t have to remark on those last two paragraphs.
I’m not sure that going line by line was the most effective way to achieve my goals. It was costly, but I didn’t see another way to get you to internalize the fact that people are regularly taking costly measures to try to improve your model of the world, and I see you as largely ignoring them or accusing them of wrongdoing. Not all critiques of your work can be as comprehensive as mine is here, since, as you pointed out, “it’s easier to produce bullshit than to refute bullshit” (I granted myself this one zinger as motivation for finishing this comment; if others remain in the text, they are not intended).
Meta-question: Is this the sort of thing that’s appropriate to post as a top-level post? It seems fairly specific, but I worked hard on it and I imagine it as encapsulating the virtues that you put forth in Stag Hunt and your hopefully-soon-to-be-posted guidelines for rationalist discourse.
Edited for clarity on the 1:5 point and a few typos.
I’m glad you took the time to respond here, and there is a lot I like about this comment. In particular, I appreciate this comment for:
Being specific without losing sight of the general message of the parent comment.
Sharing how you see your situation at the outset, which puts the tone of the comment in context.
Identifying clear points of disagreement where possible.
There are, however, some points of disagreement I’d like to raise and some possible deleterious consequences I’d like to flag.
I share the concern raised by habryka about the illusion of transparency, which may be increasing your confidence that you are interpreting the intended meaning (and intended consequences) of Jennifer’s words. I’ll go into (possibly too much) detail on one very short example of what you’ve written and how it may involve some misreading of Jennifer’s comment. You quote Jennifer:
Perhaps you could explain “epistemic hygiene” to me in mechanistic detail, and show how I’m messing it up?
and respond:
Again the trap; …
I was also confused about what you meant by epistemic hygiene when finishing the essays. Elsewhere someone asked whether they were one of the ones doing the bad thing you were gesturing towards, which is another question/insecurity I shared (I do not recall how you responded to that question). It is hopefully clear that when I say this here, in this way, it is not a trap for you. It’s a statement of my confusion embedded in a broader point, and I hope you feel no obligation to respond. The point of this exposition isn’t to get clarity on that point; it’s to (hopefully) inspire a shift of perspective. Your comment struck me as very high heat; that heat reflects a particular perspective. I don’t know exactly what that perspective is, but it seems to me that you saw Jennifer’s comments as threats. To the extent that you see a comment as a threat, the individual components of the comment take on more sinister airs. I tend to post in a calm tone, so most people have difficulty maintaining perspectives that see me as a threat. The perspective I’m hoping to encourage in you is one of collaboration. I am hoping to leverage my nonthreatening way of raising the same confusion as Jennifer so that it is more natural to see that question of Jennifer’s in a nonthreatening light. In doing so, I’m hoping to provide a method by which her comment as a whole takes on a less threatening tone. (Again, I expect this characterization of your perspective to be wrong in important ways; you may not see her comment as precisely “threatening”.)
Framing her question as a trap also implies that it was “set”, i.e. that putting you in a weakened position was part of her intent (although you might not have intended to imply this). It’s possible that Jennifer had this intention, but I don’t know and I suspect that you don’t either. Perhaps you meant that it was a trap in the normative sense, i.e. that because Jennifer included that question you are placed (whether Jennifer intends it or not) in a no-win situation; that it’s a statement about you (i.e. you have been trapped even if no one is a hunter setting traps). In the context of your high-heat comment, however, I as a reader expect that you believe Jennifer intended it as a trap.
I mentioned that I was trying to shift your perspective to one of collaboration, but I never gave the motivation for why. What are some of the negative consequences of the high-heat framing? I expect that you will get less of the kind of feedback you want on your posts. I tend to avoid social conflict—particularly social conflict that is high in heat. This neuroticism makes me disinclined to converse with people who adopt high-heat tones, in part because I worry that I will get a high-heat reaction. I do not think I would attempt to convey a broad-scope confusion/disagreement with you of the type that Jennifer did here. I would probably choose to nitpick or simply not respond instead, letting the general confusion remain (in part I do this here; quibbling over tone instead of trying to resolve the major points of confusion with your post. I might try to figure out how to describe my confusion with your post and ask you later). Now, I don’t think you should be optimizing solely to get broad-scope-disagreement/confusion responses from neurotic people like me, but I expect you to want to know how your responses are received. The high heat from this comment, even though it is not directed at me, makes me (very slightly) afraid of you.
This relates back to Elizabeth’s comment elsewhere, where she says
I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.
I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question-the-validity-of-your-thesis variety). Mostly this is a fault of mine, but high heat responses are part of what I fear when I do not respond (there are lots of other things too, so please do not update strongly on times when I do not respond).
It’s likely that this comment should have contained (or simply been entirely composed of) questions, since it instead relied on a fair bit of speculation on my part (although I tried to make most of my statements about my reading of your comment rather than your comment itself). I’m including some of those questions here instead of doing the hard work of rewriting my comment to include them in more natural places (along with some other questions I have). I also don’t think it would be productive to respond to all of these at once, so respond only to the ones that you feel like:
Did you find my response nonthreatening?
Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
Do you find my characterization of your perspective, where Jennifer’s comment is/was a threat, accurate?
Is a more collaborative perspective available to you at this moment?
If it is, do you find it changes your emotional reaction to Jennifer’s comment?
Do you feel that your comment was high heat?
If so, what goals did the high heat accomplish for you?
And, do you believe they were worth the costs?
Did you find my comment welcome?
I share dxu’s perception that you are Feeling Bad and want to extend you some sympathy (my expectation is that you’ll enjoy a parenthetical here—all the more if I go meta and reference dxu’s parenthetical—so here it is with reference and all).
I was also confused about what you meant by epistemic hygiene when finishing the essays.
In part, this is because a major claim of the OP is “LessWrong has a canon; there’s an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts).” I didn’t set out to describe and define epistemic hygiene within the essay, because one of my foundational assumptions is “this work has already been done; we’re just not holding each other to the available existing standards found in all the highly upvoted common memes.”
It is hopefully clear that when I say this here, in this way, it is not a trap for you.
This is evidence I wasn’t sufficiently clear. The “trap” I was referring to was the bulleted dynamic, whereby I either cede the argument or have to put forth infinite effort. I agree that it wasn’t at all likely deliberately set by Jennifer, but also there are ways to avoid accidentally setting such traps, such as not strawmanning your conversational partner.
(Strawmanning being, basically, redefining what they’re saying in the eyes of the audience. Which they then either tacitly accept or have to actively overturn.)
I think that, in the context of an essay specifically highlighting “people on this site often behave in ways that make it harder to think,” doing a bunch of the stuff Jennifer did is reasonably less forgivable than usual. It’s one thing to, I dunno, use coarse and foul language; it’s another thing to use it in response to somebody who’s just asked that we maybe swear a little less. Especially if the locale for the discussion is named LessSwearing (i.e. the person isn’t randomly bidding for the adoption of some out-of-the-blue standard).
Your comment struck me as very high heat; that heat reflects a particular perspective. I don’t know exactly what that perspective is, but it seems to me that you saw Jennifer’s comments as threats.
Yes. I do not think it was a genuine attempt to engage or converge with me (the way that Said, Elizabeth, johnswentsworth, supposedlyfun, and even agrippa were clearly doing or willing to do), so much as an attempt to condescend, lecture, and belittle, and the crowd of upvotes seemed to indicate either general endorsement of those actions, or a belief that it’s fine/doesn’t matter/isn’t a dealbreaker. This impression has not shifted much on rereads, and is reminiscent of exactly the prior experiences on LW that caused me to feel the need to write the OP in the first place.
Did you find my response nonthreatening?
Yes.
Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
Yes.
Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
It was trust-increasing and felt cooperative throughout.
Do you find my characterization of your perspective, where Jennifer’s comment is/was a threat, accurate?
For the most part, yes.
Is a more collaborative perspective available to you at this moment?
I’m not quite sure what you’re asking, here. I can certainly access a desire to collaborate that is zero percent contingent on agreement with my claims.
If it is, do you find it changes your emotional reaction to Jennifer’s comment?
No, or at least not yet. supposedlyfun, for example, seems at least as “hostile” as Jennifer on the level of agreement, but at least bothered to cut out paragraphs they estimated would be likely to be triggering, and mention that fact. That’s a costly signal of “look, I’m really trying to establish a handshake, here,” and it engendered substantial desire to reciprocate. You, too, are making such costly signals. If Jennifer chose to, that would reframe things somewhat, but in Jennifer’s second comment there was a lot of doubling down.
Do you feel that your comment was high heat?
Yes.
If so, what goals did the high heat accomplish for you?
This presupposes that it was … sufficiently strategic, or something?
Goals that were not necessarily well-achieved by the reply:
Putting object-level critique in a public place, so the norm violations didn’t go unnoticed (I’m not confident anyone else would have objected to the objectionable stuff)
Demonstrating that at least one person will in fact push back if someone does the epistemically sloppy bullying thing (I regularly receive messages thanking me for this service)
And, do you believe they were worth the costs?
I don’t actively believe this, no. It seems like it could still go either way. I would be slightly more surprised by it turning out worth it, than by it turning out not worth it.
This is an example of the illusion of transparency issue. Many salient interpretations of what this means (informed by the popular posts on the topic, which are actually not explicitly on this topic) motivate actions that I consider deleterious overall, like punishing half-baked/wild/probably-wrong hypotheses or things that are not obsequiously disclaimed as such, in a way that’s insensitive to the actual level of danger of being misleading. A more salient cost is nonsense hogging attention, but that doesn’t distinguish it from well-reasoned clear points that don’t add insight hogging attention.
The actually serious problem is when this is a symptom of not distinguishing epistemic status of ideas on the part of the author, but then it’s not at all clear that punishing publication of such thoughts helps the author fix the problem. The personal skill of correctly tagging the epistemic status of ideas in one’s own mind is what I think of as epistemic hygiene, but I don’t expect this to be canon, and I’m not sure that there is no serious disagreement on this point with people who have also thought about this. For one, the interpretation I have doesn’t specify community norms, and I don’t know what epistemic-hygiene-the-norm should be.
[Obvious disclaimer: I am not Duncan, my views are not necessarily his views, etc.]
It seems to me that your comment is [doing something like] rounding off Duncan’s position to [something like] conflict theory, and contrasting it to the alternative of a mistake-oriented approach. This impression mostly comes from passages like the following:
You’re sad about the world. I’m sad about it too. I think a major cause is too much poking. You’re saying the cause is too little poking. So I poked you. Now what?
If we really need to start banning the weeds, for sure and for true… because no one can grow, and no one can be taught, and errors in rationality are terrible signs that a person is an intrinsically terrible defector… then I might propose that you be banned?
And obviously this is inimical to your selfish interests. Obviously you would argue against it for this reason if you shared the core frame of “people can’t grow, errors are defection, ban the defectors” because you would also think that you can’t grow, and I can’t grow, and if we’re calling for each other’s banning based on “essentializing pro-conflict social logic” because we both think the other is a “weed”… well… I guess its a fight then?
But I don’t think we have to fight, because I think that the world is big, everyone can learn, and the best kinds of conflicts are small, with pre-established buffering boundaries, and they end quickly, and hopefully lead to peace, mutual understanding, and greater respect afterwards.
To the extent that this impression is accurate, I suspect you and Duncan are (at least somewhat) talking past each other. I don’t want to claim I have a strong model of Duncan’s stance on this topic, but the model I do have predicts that he would not endorse summaries of his positions along the lines of “people can’t grow, errors are defection, ban the defectors”; nor do I think he would endorse a summary of his prescriptions as “more poking”, “more fighting”, or “more conflict”.
Why is this an important clarification, in my view? Well, firstly, on the meta-level I should note that I don’t find the “conflict versus mistake” lens particularly convincing; my feeling is that it fails to carve reality at the joints in at least some important ways, in at least some important situations. This makes me in general suspicious of arguments that [seem to me to] depend on this lens (in the sense of containing steps that route substantially through the lens in question). Of course, this is not necessarily an indictment of that lens’ applicability in any specific case, but I think it’s worth mentioning nonetheless, just to give an idea of the kind of intuitions I’m starting with.
In terms of the argument as it applies to this specific case: I don’t think my model of Duncan particularly cares about the inherent motivations behind [what he would consider] violations of epistemic hygiene. Insofar as he does care about those motivations, I think it is only indirectly, in that he predicts different motivations will cause different reactions to pushback, and perhaps “better” motivations (to use a somewhat value-loaded term) will result in “better” reactions.
Of course, this is all very abstract, so let me be more specific: my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all. My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group’s likelihood of doing this is significantly lower.
Model!Duncan then argues that, if the ambient level of pushback crosses a certain threshold, this will make being a perennial member of the second group unpleasant enough to be psychologically unsustainable; either they will self-modify into a member of the first group, or (more likely) they will simply leave. Model!Duncan’s view is that the departure of such members is not a great loss to LW, and that LW should therefore strive to increase its level of ambient pushback, which (if done in a good way) translates to increasing epistemic standards on a site level.
Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this—which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.
Note also that this model makes no assumption that epistemic violations (“errors”) are in any way equivalent to “defection”, intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent. And, on the flipside of that coin, it is not so strange to imagine that even people who are striving to escape from the default human behavior may still need gentle reminders from time to time.
(And if there are people on this site who do not so strive, and for whom the reminders in question serve no purpose but to annoy and frustrate, to the point of making them leave—well, says model!Duncan, so much the worse for them, and so much the better for LW.)
Finally, note that at no point have I made an attempt to define what, exactly, constitute “epistemic violations”, “epistemic standards”, or “epistemic hygiene”. This is because this is the point where I am least confident in my model of Duncan, and separately where I also think his argument is at its weakest. It seems plausible to me that, even if [something like] Duncan’s vision for LW were to be realized, there would still be substantial remaining disagreement about how to evaluate certain edge cases, and that that lack of consensus could undermine the whole enterprise.
(Though my model of Duncan does interject in response to this, “It’s okay if the edge cases remain slightly blurry; those edge cases are not what matter in the vast majority of cases where I would identify a comment as being epistemically unvirtuous. What matters is that the central territory is firmed up, and right now LW is doing extremely poorly at picking even that low-hanging fruit.”)
((At which point I would step aside and ask the real Duncan what he thinks of that, and whether he thinks the examples he picked out from the Leverage and CFAR/MIRI threads constitute representative samples of what he would consider “central territory”.))
Thank you for this great comment. I feel bad not engaging with Duncan directly, but maybe I can engage with your model of him? :-)
I agree that Duncan wouldn’t agree with my restatement of what he might be saying.
What I attributed to him was a critical part (that I object to) of the entailment of the gestalt of his stance or frame or whatever. My hope was that his giant list of varying attributes of statements and conversational motivations could be condensed into a concept with a clean intensive definition other than a mushy conflation of “badness” and “irrational”. For me these things are very very different and I’ll say much more about this below.
One hope I had was that he would vigorously deny that he was advocating anything like what I mentioned by making clear that, say, he wasn’t going to wander around (or have large groups of people wander around) saying “I don’t like X produced by P and so let’s impose costs (ie sanctions (ie punishments)) on P and on all X-like things, and if we do this search-and-punish move super hard, on literally every instance, then next time maybe we won’t have to hunt rabbits, and we won’t have to cringe and we won’t have to feel angry at everyone else for game-theoretically forcing ‘me and all of us’ to hunt measly rabbits by ourselves because of the presence of a handful of defecting defectors who should… have costs imposed on them… so they evaporate away to somewhere that doesn’t bother me or us”.
However, from what I can tell, he did NOT deny any of it? In a sibling comment he says:
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
But the thing is, the reason I’m not engaging with his hypothesis is that I don’t even know what his hypothesis is, other than trivially obvious things that have always been true, but which it has always been polite to mostly ignore?
Things have never been particularly good; is that really “a hypothesis”? Is there more to it than “things are bad and getting worse”? The hard part isn’t saying “things are imperfect”.
The hard part, as I understand it, is figuring out a cheap and efficient solution that actually works, and works systematically, in ways that anyone can use once they “get the trick”, like how anyone can use arithmetic. He doesn’t propose any specific coherent solution that I can see? It is like he wants to offer an affirmative case, but he’s only listing harms (and boy does he stir people up on the harms), and then he doesn’t have a causal theory of the systematic cause of the harms in the status quo, and he doesn’t have a specific plan to fix them, and he doesn’t demonstrate that the plan mechanistically links to the harms in the status quo. So if you just grant the harms… that leaves him with a blank check to write more detailed plans that are consistent with the gestalt frame that he’s offered? And I think this gestalt frame is poorly grounded, and likely to authorize much that is bad.
Speaking of models, I like this as the beginning of a thoughtful distinction:
my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all.
I’m not sure if Duncan agrees with this, but I agree with it, and relevantly I think that it is likely that neither Duncan nor I consider ourselves in the first category. I think both of us see ourselves as “doctors around these parts” rather than “patients”? Then I take Duncan’s advocacy to move in the direction of a prescription, and his prescription sounds to me like bleeding the patient with leeches. It sounds like a recipe for malpractice.
Maybe he thinks of himself as being around here more as a patient or as a student, but this seems to be his self-reported revealed preference for being here:
What I’m getting out of LessWrong these days is readership. It’s a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn’t have ever come to on my own.
(By contrast I’m still taking the temperature of the place, and thinking about whether it is useful to my larger goals, and trying to be mostly friendly and helpful while I do so. My larger goals are in working out a way to effectively professionalize “algorithmic ethics” (which was my last job title) and get the idea of it to be something that can systematically cause pro-social technology to come about, for small groups of technologists, like lab workers and programmers who are very smart, such that an algorithmic ethicist could help them systematically not cause technological catastrophes before they explode/escape/consume or otherwise “do bad things” to the world, and instead cause things like green revolutions, over and over.)
So I think that neither of us (neither me nor Duncan) really expects to “grow as Rationalists” here because of “the curriculum”? Instead we seem to me to both have theories of what a good curriculum looks like, and… his curriculum leaves me aghast, and so I’m trying to just say so, even if it might cut against his presumptively validly selfish goals for and around this website.
Stepping forward, this feels accurate to me:
My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group’s likelihood of doing this is significantly lower.
So my objection here is simply that I don’t think that “shifting one’s epistemics closer to the ideal” is a universal solvent, nor even a single coherent unique ideal.
The core point is that agency is not simply about beliefs, it is also about values.
Values can be objective: the objective needs for energy, for atoms to put into shapes to make up the body of the agent, for safety from predators and disease, etc. Also, as planning becomes more complex, instrumentally valuable things (like capital investments) are subject to laws of value (related to logistics and option pricing and so on) and if you get your values wrong, that’s another way to be a dysfunctional agent.
VNM rationality (which, if it is not in the canon of rationality right now, then the canon of rationality is bad) isn’t just about probabilities being Bayesian; it is also about expected values being linearly orderable and having no privileged zero, for example.
Most of my professional work over the last 4 years has not hinged on having too little Bayes. Most of it has hinged on having too little mechanism design, and too little appreciation for the depths of Coase’s theorem, and too little appreciation for the sheer joyous magic of humans being good and happy and healthy humans with each other, who value and care about each other FIRST and then USE epistemology to make our attempts at caring work better.
Over in that other sibling comment Duncan is yelling at me for committing logical fallacies, and he is ignoring that I implied he was bad and said that if we’re banning the bad people maybe we should ban him. That was not nice of me at all. I tried to be clear about this sort of thing here:
On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be “sane and good” because your advice seemed to be violating ideas about conflict and economics that seem normative to me.
But he just… ignored it? Why didn’t he ask for an apology? Is he OK? Does he not think of people on this website as people who owe each other decent treatment?
My thesis statement, at the outset, such as it was:
This post makes me kind of uncomfortable and I feel like the locus is in… bad boundaries maybe? Maybe an orientation towards conflict, essentializing, and incentive design?
So like… the lack of an ability to acknowledge his own validly selfish emotional needs… the lack of a request for an apology… these are related parts of what feels weird to me.
I feel like a lot of people’s problems aren’t rationality, as such… like knowing how to do modus tollens or knowing how to model and then subtract out the effects of “nuisance variables”… the main problem is that truth is a gift we give to those we care about, and we often don’t care about each other enough to give this gift.
To return to your comments on moral judgements:
Note also that this model makes no assumption that epistemic violations (“errors”) are in any way equivalent to “defection”, intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent.
I don’t understand why “intent” arises here, except possibly if it is interacting with some folk theory about punishment and concepts like mens rea?
“Defecting” is just “enacting the strategy that causes the net outcome for the participants to be lower than otherwise, for reasons partly explainable by local selfishness”. You look at the rows you control and find the best for you. Then you look at the columns and worry about what’s best for others. Then maybe you change your row in reaction. Robots can do this without intent. Chessbots are automated zero-sum defectors (and the only reason we like them is that the game itself is fun, because it can be fun to practice hating and harming in small local doses (because play is often a safe version of violence)).
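(To make the “no intent required” point concrete, here is a minimal toy sketch in Python; the payoff numbers and names are invented purely for illustration, not anything anyone in this thread has specified:)

```python
# Toy prisoner's dilemma: a "robot" best-responds on its own row, with no
# concept of intent anywhere in the loop.
# Rows/columns: 0 = cooperate, 1 = defect; PAYOFF[row][col] = (my payoff, their payoff).
PAYOFF = [
    [(3, 3), (0, 4)],  # I cooperate
    [(4, 0), (1, 1)],  # I defect
]

def best_row(assumed_col: int) -> int:
    """Pick the row that maximizes my own payoff against an assumed column."""
    return max(range(2), key=lambda row: PAYOFF[row][assumed_col][0])

# Whatever column it assumes, the mechanical best response is to defect (row 1),
# even though mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
print(best_row(0), best_row(1))  # -> 1 1
```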
People don’t have to know that they are doing this to do this. If a person violates quarantine protocols that are selfishly costly, they are probably not intending to spread disease into previously clean areas where mitigation practices could be low cost. They only intend to, like… “get back to their kids who are on the other side of the quarantine barrier” (or whatever). The millions of people whose health in later months they put at risk are probably “incidental” and/or “unintentional” to their violation of quarantine procedures.
People can easily be modeled as “just robots” who “just do things mechanistically” (without imagining alternatives, or doing math, or running an inner simulator, or otherwise trying to take all the likely consequences into account and imagine themselves personally responsible for everything under their causal influence, and so on).
Not having mens rea, in my book, does NOT necessarily mean they should be protected, if their automatic behaviors hurt others.
I think this is really really important, and that “theories about mens rea” are a kind of thoughtless crux that separates me (who has thought about it a lot) from a lot of naive people who have relatively lower quality theories of justice.
The less intent there is, the worse it is from an easy/cheap harms-reduction perspective.
At least with a conscious villain you can bribe them to stop. In many cases I would prefer a clean honest villain. “Things” (fools, robots, animals, whatever) running on pure automatic pilot can’t be negotiated with :-(
...
Also, Duncan seems very very attached to the game-theory “stag hunt” thing? Like over in a cousin comment he says:
In part, this is because a major claim of the OP is “LessWrong has a canon; there’s an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts).”
(I kind of want to drop this, because it involves psychologizing, and even when I privately have detailed psychological theories that make high quality predictions that other people will do bad things, I try not to project them, because maybe I’m wrong, and maybe there’s a chance for them to stop being broken, but:
I think of “stag hunt” as a “Duncan thing” strongly linked to the whole Dragon Army experiment and not “a part of the lesswrong canon”.
Double cruxing is something I’ve been doing for 20 years, but not under that name. I know that CFAR got really into it as a “named technique”, but they never put that on LW in a highly formal way that I managed to see, so it is more part of a “CFAR canon” than a “Lesswrong canon” in my mind?
And so far as I’m aware “strawmanning” isn’t even a rationalist thing… it’s something from old school “critical thinking and debate and rhetoric” content? The rationalist version is to “steelman” one’s opponents, who are assumed to need help making their point, which might actually be good but is so far poorly expressed by one’s interlocutor.
I am consciously lowering my steelmanning of Duncan’s position. My objection is to his frame in this case. Like I think he’s making mistakes, and it would help him to drop some of his current frames, and it would make lesswrong a safer place to think and talk if he didn’t try to impose these frames as a justification for meddling with other people, including potentially me and people I admire.)
...
Pivoting a bit, since he is so into the game theory of stag hunts… my understanding is that in a 2-person Stag Hunt a single member of the team playing rabbit causes both to fail to “get the benefit”, so it becomes essential to get perfect behavior from literally everyone. The key difference from a prisoner’s dilemma is that “non-defection (to get the higher outcome)” is a Nash equilibrium, because mismatched play is worse for each of the two players than matching play.
A group of 5 playing stag hunt, with a history of all playing stag, loves their equilibrium and wants to protect it and each probably has a detailed mental model of all the others to keep it that way, and this is something humans do instinctively, and it is great.
But what about N>5? Suppose you are in a stag hunt where each of N persons has probability P of failing at the hunt, and “accidentally playing rabbit”. Then everyone gets a bad outcome with probability (1-(1-P)^N). So for any realistic P, almost any non-trivial value of N makes group failure nearly certain.
If you see that you’re in a stag hunt with 2000 people: you fucking play rabbit! That’s it. That’s what you do.
Even if the chance of each person succeeding is 99.9% and you have 2000 in a stag hunt… the hunt succeeds with probability 13.52%, and that stag had better be really really really really valuable. Mostly it fails, even with that sort of superhuman success rate.
But there’s practically NOTHING that humans can do with better than maybe a 98% success rate. Once you take a realistic 2% chance of individual human failure into account, with 2000 people in your stag hunt the probability of a successful hunt is about 2.83x10^-18 (roughly 1 in 3.5x10^17).
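(If you want to check the arithmetic, here is a tiny plug-and-chug sketch in Python; the helper name is mine and purely illustrative:)

```python
# Plug-and-chug check of the stag hunt numbers above:
# the hunt succeeds only if all N hunters succeed, each independently
# with probability (1 - P).

def stag_hunt_success(n: int, p_fail: float) -> float:
    """Probability that all n hunters succeed when each fails with probability p_fail."""
    return (1 - p_fail) ** n

print(stag_hunt_success(2000, 0.001))  # ~0.1352, i.e. 13.52%
print(stag_hunt_success(2000, 0.02))   # ~2.83e-18, i.e. roughly 1 in 3.5x10^17
```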
If you are in a stag hunt like this, it is socially and morally and humanistically correct to announce this fact. You don’t play rabbit secretly (because that hurts people who didn’t get the memo).
You tell everyone that you’re playing rabbit, even if they’re going to get angry at you for doing so, because you care about them.
You give them the gift of truth because you care about them, even if it gets you yelled at and causes people with dysfunctional emotional attachments to attack you.
And you teach people rabbit hunting skills, so that they get big rabbits, because you care about them.
And if someone says “we’re in a stag hunt that’s essentially statistically impossible to win and the right answer is to impose costs on everyone hunting rabbit” that is the act of someone who is either evil or dumb.
And I’d rather have a villain, who knows they are engaged in evil, because at least I can bribe the villain to stop being evil.
You mostly can’t bribe idiots, more’s the pity.
Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this—which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.
I think maybe your model of Duncan isn’t doing the math and reacting to it sanely?
Maybe by “stag hunt” your model of Duncan means “the thing in his head that ‘stag hunt’ is a metonym for”, and this phrase does not have a gears-level model with numbers (backed by math that one can plug-and-chug) driving its conclusions in clear ways, the way long division leads clearly to a specific result at the end?
An actual piece of the rationalist canon is “shut up and multiply” and this seems to be something that your model of Duncan is simply not doing about his own conceptual hobby horse?
I might be wrong about the object level math. I might be wrong about what you think Duncan thinks. I might be wrong about Duncan himself. I might be wrong to object to Duncan’s frame.
But I currently don’t think I am wrong, and I care about you and Duncan and me and humans in general, and so it seemed like the morally correct (and also epistemically hygienic) thing to do is to flag my strong hunch (which seems wildly discrepant compared to Duncan’s hunches, as far as I understand them) about how best to make lesswrong a nurturing and safe environment for people to intellectually grow while working on ideas with potentially large pro-social impacts.
Duncan is a special case. I’m not treating him like a student, I’m treating him like an equal who should be able to manage himself and his own emotions and his own valid selfish needs and the maintenance of boundaries for getting these things, and then, to this hoped-for-equal, I’m saying that something he is proposing seems likely to be harmful to a thing that is large and valuable. Because of mens rea, because of Dunbar’s Number, because of “the importance of N to stag hunt predictions”, and so on.
my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist,
Jennifer:
I think that it is likely that neither Duncan nor I consider ourselves in the first category.
Duncan, in the OP, which Jennifer I guess skimmed:
What I really want from LessWrong is to make my own thinking better, moment to moment. To be embedded in a context that evokes clearer thinking, the way being in a library evokes whispers. To be embedded in a context that anti-evokes all those things my brain keeps trying to do, the way being in a church anti-evokes coarse language.
I see that you have, in fact, caught me in a simplification that is not consistent with literally everything you said.
I apologize for over-simplifying, maybe I should have added “primarily” and/or “currently” to make it more literally true.
In my defense, and to potentially advance the conversation, you also did say this, and I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood… maybe looking to score points for unfairness?
What I’m getting out of LessWrong these days is readership. It’s a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn’t have ever come to on my own.
My model here is that this is your self-identified “revealed preference” for actually being here right now.
Also, in my experience, revealed preferences are very very very important signals about the reality of situations and the reality of people.
This plausible self-described revealed preference of yours suggests to me that you see yourself as more of a teacher than a student. More of a producer than a consumer. (This would be OK in my book. I explicitly acknowledge that I see myself as more of a teacher than a student round these parts. I’m not accusing you of something bad here, in my own normative frame, though perhaps you feel it as an attack because you have different values and norms than I do?)
It is fully possible, I guess, (and you would be able to say this much better than I) that you would actually rather be a student than a teacher?
And it might be that you see this as being impossible until or unless LW moves from a rabbit equilibrium to a stag equilibrium?
...
There’s an interesting possible equivocation here.
(1) “Duncan growing as a rationalist as much and as fast as he (can/should/does?) (really?) wants does in fact require a rabbit-to-stag Nash equilibrium shift among all of lesswrong”.
(2) “Duncan growing as a rationalist as much and as fast as he wants seems to him to require a rabbit-to-stag Nash equilibrium shift among all of lesswrong… which might then logically universally require removing literally every rabbit player from the game, either by conversion to playing stag or banning”.
These are very similar. I like having them separate so that I can agree and disagree with you <3
Also, consider then a third idea:
(3) A rabbit-to-stag Nash equilibrium shift among all of lesswrong is wildly infeasible because of new arrivals, and the large number of people in-and-around lesswrong, and the complexity of the normative demands that would be made on all these people, and various other reasons.
I think that you probably think 1 and 2 are true and 3 is false.
I think that 2 is true, and 3 is true.
Because I think 3 is true, I think your implicit(?) proposals would likely be very costly up front while having no particularly large benefits on the backend (despite hopes/promises of late arriving large benefits).
Because I think 2 is true, I think you’re motivated to attempt this wildly infeasible plan and thereby cause harm to something I care about.
In my opinion, if 1 is really true, then you should give up on lesswrong as being able to meet this need, and also give up on any group that is similarly large and lacking in modular sub-communities, and lacking in gates, and lacking in an adequate intake curriculum with post-tests that truly measure mastery, and so on.
If you need growth as a rationalist to be happy, AND its current shape (vis-a-vis stag hunts etc) means this website is a place that can’t meet that need, THEN (maybe?) you need to get those needs met somewhere else.
For what it’s worth, I think that 1 is false for many many people, and probably it is also false for you.
I don’t think you should leave, I just think you should be less interested in a “pro-stag-hunting jihad” and then I think you should get the need (that was prompting your stag hunting call) met in some new way.
I think that lesswrong as it currently exists has a shockingly high discourse level compared to most of the rest of the internet, and I think that this is already sufficient to arm people with the tools they need to read the material, think about it, try it, and start catching really really big rabbits (that is, truly making some new and true and very useful ideas a part of themselves), and then give rabbit hunting reports, and share rabbit hunting techniques, and so on. There’s a virtuous cycle here potentially!
In my opinion, such a “skill building in rabbit hunting techniques” sort of rationality… is all that can be done in an environment like this.
Also I think this kind of teaching environment is less available in many places, and so it isn’t that this place is bad for not offering more, it is more that it is only “better by comparison to many alternatives” while still failing to hit the ideal. (And maybe you just yearn really hard for something more ideal.)
So in my model (where 2 is true) “because 1 is false for many (and maybe even for you)” and 3 is true… therefore your whole stag hunt concept, applied here, suggests to me that you’re “low key seeking to gain social permission” from lesswrong to drive out the rabbit hunters and silence the rabbit hunting teachers and make this place wildly different.
I think it would de facto (even if this is not what you intend) become a more normal (and normally bad) “place on the internet” full of people semi-mindlessly shrieking at each other by default.
If I might offer a new idea that builds on the above material: lesswrong is actually a pretty darn good hub for quite a few smaller but similar subcultures.
These subcultures often enable larger quantities of shared normative material, to be shared with much higher density in that little contextual bubble than is possible in larger and more porous discourse environments.
In my mind, Lesswrong itself has a potential function here as being a place to learn that the other subcultures exist, and/or audition for entry or invitation, and so on. This auditioning/discovery role seems, to me, highly compatible with the “rabbit hunting rationality improvement” function.
In my model, you could have a more valuable-for-others role here on lesswrong if you were more inclined to tolerantly teach without demanding a “level” that was required-at-all to meet your particular educational needs.
To restate: if you have needs that are not being met, perhaps you could treat this website as a staging area and audition space for more specific and more demanding subcultures that take lesswrong’s canon for granted while also tolerating and even encouraging variations… because it certainly isn’t the case that lesswrong is perfect.
(There’s a larger moral thing here: to use lesswrong in a pure way like this might harm lesswrong as all the best people sublimate away to better small communities. I think such people should sometimes return and give back so that lesswrong (in pure “smart person mental elbow grease” and also in memetic diversity) stays over longer periods of time on a trajectory of “getting less wrong over time”… though I don’t know how to get this to happen for sure in a way that makes it a Pareto improvement for returnees and noobs and so on. The institution design challenge here feels like an interesting thing to talk about maybe? Or maybe not <3)
...
So I think that Dragon Army could have been the place that worked the way you wanted it to work, and I can imagine different Everett branches off in the counter-factual distance where Dragon Army started formalizing itself and maybe doing security work for third parties, and so there might be versions of Earth “out there” where Dragon Army is now a mercenary contracting firm with 1000s of employees who are committed to exactly the stag hunting norms that you personally think are correct.
Personally, I would not join that group, but in the spirit of live-and-let-live I wouldn’t complain about it until or unless someone hired that firm to “impose costs” on me… then I would fight back. Also, however, I could imagine sometimes wanting to hire that firm for some things. Violence in service to the maintenance of norms is not always bad… it is just often the “last refuge of the incompetent”.
In the meantime, if some of the officers of that mercenary firm that you could have counter-factually started still sometimes hung out on Lesswrong, and were polite and tolerant and helped people build their rabbit hunting skills (or find subcultures that help them develop whatever other skills might only be possible to develop in groups), then that would be fine with me...
...so long as they don’t damage the “good hubness” of lesswrong itself while doing so (which in my mind is distinct from not damaging lesswrong’s explicitly epistemic norms, because having well ordered values is part of not being wrong, and values are sometimes in conflict, and that is often ok… indeed it might be a critical requirement for positive-sum Pareto-improving cooperation in a world full of conservation laws).
… a staging area and audition space for more specific and more demanding subcultures …
Here is a thing I wrote some years ago (this is a slightly cleaned up chat log, apologies for the roughness of exposition):
There was an analogue to this in WoW as well, where, as I think I’ve mentioned, there often was such a thing as “within this raid guild, there are multiple raid groups, including some that are more ‘elite’/exclusive than the main one”; such groups usually did not use the EPGP or other allocation system of the main group, but had their own thing.
(I should note that such smaller, more elite/exclusive groups, typically skewed closer to “managed communism” than to “regulated capitalism” on the spectrum of loot systems, which I do not think is a coincidence.)
“Higher internal trust” is true, but not where I’d locate the cause. I’d say “higher degree of sublimation of personal interest to group interest”.
[name_redacted]: Ah. … More dedicated?
Yes, and more willing to sacrifice for the good of the raid. Like, if you’re trying to maintain a raiding guild of 100 people, keep it functioning and healthy over the course of months or years, new content, people joining and leaving, schedules and life circumstances changing, different personalities and background, etc., then it’s important to maintain member satisfaction; it’s important to ensure that people feel in control and rewarded and appreciated; that they don’t burn out or develop resentments; that no one feels slighted, and no one feels that anyone is favored; you have to recruit, also...
All of these things are more important than being maximally effective at downing this boss right now and then the next five bosses this week.
If you focus on the latter and ignore the former, your guild will break and explode, and people on WoW-related news websites will place stories about your public meltdowns in the Drama section, and laugh at you.
On the other hand… if you get 10 guys together and you go “ok dudes, we, these particular 10 people, are going to show up every single Sunday for several months, play for 6 hours straight each time, and we will push through absolutely the most challenging content in the game, which only a small handful [or sometimes: none at all] of people in the world have done”… that is a different scenario. There’s no room for “I’m not the tank but I want that piece of tank gear”, because if you do that you will fail.
What a group like that promises (which a larger, more skill-diverse, less elite/exclusive, group cannot promise) is the incredible rush of pushing yourself—your concentration, your skill, your endurance, your coordination, your ingenuity—to the maximum, and succeeding at something really really hard as a result.
That is the intrinsic motivation which takes the place of the extrinsic motivation of getting loot. As a result, the extrinsic motivation is no longer a resource which it is vitally important to allocate.
In that scenario, your needs are the group’s needs; the group’s successes are your successes; there is no separation between you and the group, and consequently the need for equity in loot allocation falls away, and everything is allocated strictly by group-level optimization.
Of course, that sort of thing doesn’t scale, and neither can it last, just as you cannot build a whole country like a kibbutz. But it may be entirely possible, and perfectly healthy, to occasionally cleave off subgroups who follow that model, then to meld back into the overgroup at the completion of a project (and never having really separated from it, their members continuing to participate in the overgroup even as they throw themselves into the subproject).
I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood.
If a person writes “I currently get A but what I really want is B”
...and then you selectively quote “I currently get A” as justification for summarizing them as being unlikely to want B...
...right after they’ve objected to you strawmanning and misrepresenting them left and right, and made it very clear to you that you are nowhere near passing their ITT...
...this is not “simplification.”
Apologizing for “over-simplifying,” under these circumstances, is a cop-out. The thing you are doing is not over-simplification. You are [not talking about simpler versions of me and my claim that abstract away some of the detail]. You are outright misrepresenting me, and in a way that’s reeeaaalll hard to believe is not adversarial, at this point.
It is at best falling so far short of cooperative discourse as to not even qualify as a member of the set, and at worst deliberate disingenuousness.
If a person wholly misses you once, that’s run-of-the-mill miscommunication.
If, after you point out all the ways they missed you, at length, they brush that off and continue confidently arguing with their cardboard cutout of you, that’s a bad sign.
If, after you again note that they’ve misrepresented you in a crucial fashion, they apologize for “over-simplifying,” they’ve demonstrated that there’s no point in trying to engage with them.
I explicitly acknowledge that I see myself as more of a teacher than a student round these parts.
I’m torn about getting into this one, since on one hand it doesn’t seem like you’re really enjoying this conversation or would be excited to continue it, and I don’t like the idea of starting conversations that feel like a drain before they even get started. In addition, other than liking my other comment on this post, you don’t really know me and therefore I don’t really have the respect/trust resources I’d normally lean on for difficult conversations like this (both in the “likely emotionally significant” and also “just large inferential distances with few words” senses).
On the other hand I think there’s something very important here, both on the object level and on a meta level about how this conversation is going so far. And if it does turn out to be a conversation you’re interested in having (either now, or in a month, or whenever), I do expect it to be actually quite productive.
If you’re interested, here’s where I’m starting:
Jennifer has explicitly stated that at this point her goal is to help you. This doesn’t seem to have happened. While it’s important to track possibilities like “Actually, it’s been more helpful than it looks”, it looks more like her attempt(s) so far have failed, and this implies that she’s missing something.
Do you have a model that gives any specific predictions about what it might be? Regardless of whether it’s worth the effort or whether doing so would lead to bad consequences in other ways, do you have a model that gives specific predictions of what it would take to convey to her the thing(s) she’s missing such that the conversation with her would go much more like you think it should, should you decide it to be worthwhile?
Would you be interested in hearing the predictions my models give?
I don’t have a gearsy model, no. All I’ve got is the observations that:
Duncan’s post objects to a cluster of things X, Y, and Z
Jennifer’s response seems to me to state that X, Y, and Z are either not worth objecting to or possibly are actually good
Jennifer’s response exhibits X, Y, and Z in substantial quantity (which, to be fair, is consistent with principled disagreement, i.e. is not a sign of hypocrisy or lack-of-skill or whatever)
Duncan’s objections to X, Y, and Z within Jennifer’s pushback are basically falling on deaf ears, resulting in Jennifer adding more X, Y, and Z in subsequent responses
As is to be expected, given that the whole motivation for the OP was “LessWrong keeps indulging in and upvoting X, Y, and Z,” Jennifer’s being upvoted.
I’m interested in hearing both your model and your predictions. Perhaps a timescale of days-weeks is better than a timescale of hours-days.
There’s a lot here, and I’ve put in a lot of work writing and rewriting. After failing for long enough to put things in a way that is both succinct and clear, I’m going to abandon hopes of the latter and go all in on the former. I’m going to use the minimal handles for the concepts I refer to, in a way similar to using LW jargon like “steelman” without the accompanying essays, in hopes that the terms are descriptive enough on their own. If this ends up being too opaque, I can explicate as needed later.
Here’s an oversimplified model to play with:
Changing minds requires attention, and bigger changes require more attention.
Bidding for bigger attention requires bigger respect, or else no reason to follow.
Bidding for bigger respect requires bigger security, or else not safe enough to risk following.
Bidding for that sense of security requires proof of actual security, or else people react defensively, cooperation isn’t attended to, and good things don’t happen.
GWS took an approach of offering proof of security and making fairly modest bids for both security and respect. As a result, the message was accepted, but it was fairly restrained in what it attempted to communicate. For example, GWS explicitly says “I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question the validity of your thesis variety).”
Jennifer, on the other hand, went full bore, commanding attention to places which demand lots of respect if they are to be followed, while offering little in return*. As a result, accepting this bid also requires a large degree of security, and she offered no proof that her attacks on Duncan’s ideas (it feels weird addressing you in the third person given that I am addressing this primarily to you, but it seems like it’s better looked at from an outside perspective?) would be limited to that which wouldn’t harm Duncan’s social standing here. This makes the whole bid very hard to accept, and so it was not accepted, and Duncan gave high heat responses instead.
Bolder bids like that make for much quicker work when accepted, so there is good reason to be as bold as your credit allows. One complicating factor here is that the audience is mixed, and overbidding for Duncan himself doesn’t necessarily mean the message doesn’t get through to others, so there is a trade off here between “Stay sufficiently non-threatening to maintain an open channel of cooperation with Duncan” and “Credibly convey the serious problems with Duncan’s thesis, as I see them, to all those willing to follow”.
Later, she talks about wanting to help Duncan specifically, and doesn’t seem to have done so. There are a few possible explanations for this.
1) When she said it, there might have been an implied “[I’m only going to put in a certain level of work to make things easy to hear, and beyond that I’m willing to fail]”. In this branch, the conversation between Duncan and Jennifer is going nowhere unless Duncan decides to accept at least the first bid of security. If Duncan responds without heat (and feeling heated but attempting to screen it off doesn’t count), the negotiation can pick up on the topic of whether Jennifer is worthy of that level of respect, or further up if that is granted too.
2) It’s possible that she lacks a good and salient picture of what it looks like to recover from over-bidding, and just doesn’t have a map to follow. In this branch, demonstrating what that might look like would likely result in her doing it and recovering things. In particular, this means pacing Duncan’s objections without (necessarily) agreeing with them until Duncan feels that she has passed his ITT and trusts her intent to cooperate and collaborate rather than to tear him down.
3) It could also be that she’s got her own little hang up on the issue of “respect”, which caused a blind spot here. I put an asterisk there earlier, because she was only showing “little respect” in one sense, while showing a lot in another. If you say to someone “Lol, your ideas are dumb”, it’s not showing a lot of respect for those ideas of theirs. To the extent that they afford those same ideas a lot of respect, it sounds a lot like not respecting them, since you’re also shitting on their idea of how valuable those ideas are and therefore their judgement itself. However, if you say to someone “Lol, your ideas are dumb” because you expect them to be able to handle such overt criticism and either agree or prove you wrong, then it is only tentatively disrespectful of those ideas and exceptionally and unusually respectful of the person themselves.
She explicitly points at this when she says “Duncan is a special case. I’m not treating him like a student, I’m treating him like an equal”, and then hints at a blind spot when she says (emphasis her own) “who should be able to manage himself and his own emotions”—translating to my model, “manage himself and his emotions” means finding security and engaging with the rest of the bids on their own merits unobstructed by defensive heat. “Should” often points at a willful refusal to update one’s map to what “is”, and instead responding to it by flinching at what isn’t as it “should” be. This isn’t necessarily a mistake (in the same way that flinching away from a hot stove isn’t a mistake), and while she does make other related comments elsewhere in the thread, there’s no clear indication of whether this is a mistake or a deliberate decision to limit her level of effort there. If it is a mistake, then it’s likely “I don’t like having to admit that people don’t demonstrate as much security as I think they should, and I don’t wanna admit that it’s a thing that is going to stay real and problematic even when I flinch at it”. Another prediction is that to the extent that it is this, and she reads this comment, this error will go away.
I don’t want to confuse my personal impression with the conditional predictions of the model itself, but I do think it’s worth noting that I personally would grant the bid for respect. Last time I laughed off something that she didn’t agree should be laughed off, it took me about five years to realize that I was wrong. Oops.
The same stuff that’s outlined in the post, both up at the top where I list things my brain tries to do, and down at the bottom where I say “just the basics, consistently done.”
Regenerating the list again:
Engaging in, and tolerating/applauding those who engage in:
Strawmanning (misrepresenting others’ points as weaker or more extreme than they are)
Projection (speaking as if you know what’s going on inside other people’s heads)
Putting little to no effort into distinguishing your observations from your inferences/speaking as if things definitely are what they seem to you to be
Only having or tracking a single hypothesis/giving no signal that there is more than one explanation possible for what you’ve observed
Overstating the strength of your claims
Being much quieter in one’s updates and oopses than one was in one’s bold wrongness
Weaponizing equivocation/doing motte-and-bailey
Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as not playing stag” would mean that all of lesswrong fails at the stag hunt, right?
And it might be the case that a single person’s failure to play stag could consist of them committing even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)
Also, what you’re calling “projection” there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can’t choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(
The practical upshot here, to me, is that if the models you’re advocating here are true, then it seems to me like lesswrong will inevitably fail at “hunting stags”.
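To put rough numbers on that worry (a toy sketch of my own, assuming each of N participants independently avoids every listed sin with probability p, and the “stag” is caught only if literally everyone succeeds):

```python
# Toy illustration (my assumption, not Duncan's stated model): the stag hunt
# succeeds only if all n participants independently "play stag", each with
# probability p.

def p_all_play_stag(p: float, n: int) -> float:
    """Probability that every one of n participants plays stag."""
    return p ** n

for p in (0.99, 0.999):
    for n in (10, 100, 2000):
        print(f"p={p}, n={n}: {p_all_play_stag(p, n):.3g}")

# Even at 99.9% individual reliability, 2000 participants all succeed together
# only about 13.5% of the time; at 99% it is roughly 2e-9.
```

Under that all-or-nothing reading, a site with thousands of imperfect participants essentially never catches the stag, which is the point I’m trying to make.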
...
And yet it also seems like you’re exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then… maybe we will eventually all play stag and thus eventually, as a group, catch a stag?
So under the models that you seem to me to have offered, the (numerous individual) costs won’t buy any (group) benefits? I think?
There will always inevitably be a fly in the ointment… a grain of sand in the chip fab… a student among the masters… and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?
And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!
And that’s (in my book) quite good… even if it means we will always fail at hunting stags.
...
The thing I think that’s good about lesswrong has almost nothing to do with bringing down a stag on this actual website.
Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can “do more good thinking” in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.
I’m (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time… Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).
You’re against “engaging in, and tolerating/applauding” lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.
I am confused by a theme in your comments. You have repeatedly chosen to express that the failure of a single person completely destroys all the value of the website, even going so far as to quote ridiculous numbers (on the order of 1e-18 [1]) in support of this.
The only model I have for your behavior that explains why you would do this, instead of assuming something like Duncan believing something like “The value of C cooperators and D defectors is min(0, C − D²)”, is that you are trying to make the argument look weak. If there is another reason to do this, I’d appreciate an explanation, because this tactic alone is enough to make me view the argument as likely adversarial.
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as not playing stag” would mean that all of lesswrong fails at the stag hunt, right?
No, and if you had stopped there and let me answer rather than going on to write hundreds of words based on your misconception, I would have found it more credible that you actually wanted to engage with me and converge on something, rather than that you just really wanted to keep spamming misrepresentations of my point in the form of questions.
Epistemic status: socially brusque wild speculation. If they’re in the area and it wouldn’t be high effort, I’d like JenniferRM’s feedback on how close I am.
My model of JenniferRM isn’t of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite’s comment below, they say:
It was a purposefully pointed and slightly unfair question. I didn’t predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).
If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.
My model of the model which outputs words like these is that they’re very confident in their own understanding—viewing themself as a “teacher” rather than a student—and are trying to lead someone who they think doesn’t understand by the nose through a conversation which has been plotted out in advance.
Debate is fun for kids. When I taught a debate team, I tried to make sure it stayed fun, and we won a lot, and years later I heard how the private prep schools tried to share research against us, with all this grinding and library time. (I think maybe they didn’t realize that the important part is just a good skeleton of “what an actual good argument looks like” and hitting people at the center of their argument based on prima facie logical/policy problems.) People can be good sports about disagreements and it helps with educational processes, but it is important to tolerate missteps and focus on incremental improvement in an environment of quick clear feedback <3
The thing I want you to learn is that proactively harming people for failing to live up to an ideal (absent bright lines and jurisprudence and a system for regulating the processes of declaring people to have done something worth punishing, and so on) is very costly, in ways that cascade and iterate, and get worse over time.
Proposing to proactively harm people for pre-systematic or post-systematic reasons is bad because unsystematic negative incentive systems don’t scale. “I have a nuanced understanding of evil, and know it when I see it, and when I see it I weed it” is a bad plan for making the world good. That’s a formula for the social equivalent of an autoimmune disorder :-(
The specific problem: what’s the inter-rater reliability like for “decisions to weed”? I bet it is low. It is very very hard to get human inter-rater-reliability numbers above maybe 95%. How do people deal with the inevitable 1 in 20 errors? If you have fewer than 20 people, this could work, but if you have 2000 people… it’s a recipe for disaster.
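As a back-of-the-envelope illustration (my own simplification, assuming independent judgments, which real moderation decisions are not), here is what that 95% figure implies at different scales:

```python
# Rough arithmetic behind the inter-rater worry: at ~95% reliability, about
# 1 in 20 "weeding" decisions would not be endorsed by a second rater.

def expected_contested(n_decisions: int, reliability: float = 0.95) -> float:
    """Expected number of weeding decisions another rater would dispute."""
    return n_decisions * (1 - reliability)

for n in (20, 2000):
    print(f"{n} decisions -> ~{expected_contested(n):.0f} contested weedings")

# 20 decisions -> ~1 contested weeding; 2000 decisions -> ~100 of them.
```

One disputed call among twenty people is survivable; a hundred disputed weedings across a whole site is the disaster I mean.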
You didn’t mention the word “Dunbar”, for example, that I can tell? You don’t seem to have a theory of governance? You don’t seem to have a theory of local normative validity (other than epistemic hygiene)? You didn’t mention “rights” or “elections” or “prices”? You haven’t talked about virtue epistemology or the principle of charity? You don’t seem to be citing studies in organizational psychology? It seems to all route through the “stag hunt” idea (and perhaps an implicit (and as yet largely unrealized in practice) sense that more is possible) and that’s almost all there is? And based on that you seem to be calling for “weeding” and conflict against imperfectly rational people, which… frankly… seems unwise to me.
Do you see how I’m trying to respond to a gestalt posture you’ve adopted here that I think leads to lower utility for individuals in little scuffles where each thinks the other is a white raven (I assume albinism is the unnatural, rare, presumptively deleterious phenotype?) and is trying to “weed them”, and then ultimately (maybe) it could be very bad for the larger community if “conflict-of-interest based fighting (as distinct from epistemic disagreement)” escalates (R0 > 1.0) instead of decaying (R0 < 1.0)?
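To make that R0 analogy concrete (a toy sketch; the branching framing is mine, not something you’ve said), the difference between escalation and decay is just geometric growth versus geometric shrinkage:

```python
# If each scuffle provokes r new scuffles on average, the expected total after
# g "generations" is a geometric series: it explodes for r > 1 and dies out
# for r < 1.

def total_scuffles(r: float, generations: int) -> float:
    return sum(r ** g for g in range(generations + 1))

print(total_scuffles(1.2, 10))  # ~32 and still growing without bound
print(total_scuffles(0.8, 10))  # ~4.6, converging toward 5 and then stopping
```

The same community norms can sit on either side of that threshold, which is why I care so much about which side we end up on.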
I’m having a hard time doing this because your two comments are both full of things that seem to me to be doing exactly the fog-inducing, confusion-increasing thing. But I’m also reasonably confident that my menu of options looks like:
Don’t respond, and the-audience-as-a-whole, i.e. the-culture-of-LessWrong, will largely metabolize this as tacit admission that you were right, and I was unable to muster a defense because I don’t have one that’s grounded in truth
Respond in brief, and the very culture that I’m saying currently isn’t trying to be careful with its thinking and reasoning will round off and strawman and project onto whatever I say. This seems even likelier than usual here in this subthread, given that your first comment does this all over the place and is getting pretty highly upvoted at this point.
Respond at length, here but not elsewhere, and try to put more data and models out there to bridge the inferential gaps (this feels doomy/useless, though, because this is a site already full of essays detailing all of the things wrong with your comments)
Respond at length to all such comments, even though it’s easier to produce bullshit than to refute bullshit, meaning that I’m basically committing to put forth two hours of effort for every one that other people can throw at me, which is a recipe for exhaustion and demoralization and failure, and which is precisely why the OP was written. “People not doing the thing are outgunning people doing the thing, and this causes people doing the thing to give up and LessWrong becomes just a slightly less poisonous corner of a poisonous internet.”
Like, you and another user who pushed back in ways that I think are strongly contra the established virtues of rationality both put forth this unfalsifiable claim that “things just get better and better! Relax and just let the weeds and the plants duke it out, and surely the plants will win!”
Completely ignoring the assertion I made, with substantial effort and detail, that it’s bad right now, and not getting better. Refusing to engage with it at all. Refusing to grant it even the dignity of a hypothesis.
That seems bad.
And it doesn’t matter how many times I do a deep, in-depth analysis of all the ways that a bad comment was bad, because the next person posting a bad comment didn’t read it and doesn’t care, and there aren’t enough other people chiming in. I’ve answered the call that you’re making here half a dozen times, elsewhere. More than once on this very post. But that doesn’t count for anything in your book, and the audience doesn’t see it or care about it. From the audience’s perspective, you made a pretty good comment and I didn’t substantively respond, and that’s not a good look, eh?
I don’t want to keep falling prey to this dynamic. But here, since you asked. I don’t have what it takes to do a thorough analysis of why each of these is bad, or a link to the full-length essay outlining the rule each thing broke (because LessWrong has one in its canon in almost every case), but I’ll at least provide a short pointer.
Fallacy of the grey, ironic in this case. “Black and white thinking” is not always bad or inappropriate; some things are in fact more or less binary and using the label “black and white thinking” to delegitimize something without checking to what degree it’s actually right to be thinking in binaries is disingenuous and sloppy.
I addressed this a little in my largely-downvoted comment above, but: bad rhetoric, trying to make the idea that your opponent is good and sane seem implausible. Trying to win the argument without actually having it. And, as I noted, implicitly conflating your inability to imagine a reason with there not being one—having the general effect of nudging readers toward a belief that anything they don’t already see must not be real.
Abusing the metaphor. Seizing on one of multiple metaphors, which were headlined explicitly as being attempts to clumsily gesture at or triangulate a thing, and importing a bunch of emotion on an irrelevant axis. Trying to tinge the position you’re disagreeing with as genocide. A social “gotcha.” An applause light. At the end, a hypocritical call for humility, right after not having humility yourself about whether or not weeding is good or necessary. Black and white thinking, right after using the label “black and white” as a rhetorical weapon. You later go on to talk about a property of actual weeds but don’t even try to establish any way in which it’s relevantly analogous.
“Maybe your initial desires are improper, but instead of saying in what way they might be improper, or trying to highlight a more proper set of desires and bridge the gap, I’m going to do the Carlson/Shapiro thing of ‘just asking a question’ and then not settling it, because I can score points with the implication and then fade into the mists. I don’t have to stick my neck out or put any skin in the game.”
Completely ignoring an explicit, central assumption of the essay, made at length and defended in detail, about the cumulative effect of the little things. Instead of engaging with my claim that the little stuff matters, and trying to zero in on whether or not it does, and how and why, just dismissing it out of hand with a fraction of the effort put forth in the OP. Also, infuriatingly smug and dismissive with “maybe educate them?” as if I do not spend tremendous time and effort doing exactly that. While actively undermining my literal attempt to do some educating, no less. Like, what do you think this pair of posts is?
“I failed at this, so I’m going to undermine other people trying to do a similar thing, and call it savviness. Also, here, have some strawmanning of your point.”
Assertion with no justification and no detail and no model. Ignoring the entire claim of the OP, which is that the current thing is observably not working. And again, a fraction of the effort required to refute, so offering me the choice of “let the audience absorb how Jennifer just won with all these zingers, or burn two or more hours for every one she spent.”
Isolated demand for rigor. Putting the burden of proof on my position instead of yours, rather than cooperatively asking hey, can we talk about where the burden of proof lies? Also ignoring the fact that I literally just wrote two essays explaining why adversarial attacks on the weeds would be a good use of resources. Instead of noting confusion about that (“I think you think you’ve made a case here, but I didn’t follow it; can you expand on X?”) just pretending like I hadn’t done the work. Same thing happening with “I’m saying that your proposed rules are bad because they request expensive actions for unclear benefits that seem likely to lead to unproductive conflict if implemented… probably… but not certainly.”
Literally listed in the essay. Literally listed in the essay.
Again the trap; “just spend lots and lots of time explaining it to me in particular, even as I gloss over and ignore the concrete bits of explanation you’ve already done?” Framing things such that non-response will seem like I’m being uncooperative and unreasonable, when in fact you’re just refusing to meet me halfway. And again ignoring that a bunch of this work has already been done in the essay, and a bunch of other work has already been done on LessWrong as a whole, and the central claim is “we’ve already done this work, we should stop leaving ourselves in a position to have to shore this up over and over and over again and just actually cohere some standards.”
But anyway, I’m doing it (a little) here. For the hundredth time, even though it won’t actually help much and you’ll still be upvoted and I’ll still be downvoted and I’ll have to do this all over again next time and come on, I just want a place that actually cares about promoting clear thinking.
You don’t wander into a martial arts dojo, interrupt the class, and then sort-of-superciliously sneer that the martial arts dojo shouldn’t have a preference between [martial arts actions] and [everything else] and certainly shouldn’t enforce that people limit themselves to [martial arts actions] while participating in the class, that’s black-and-white thinking, just let everyone put their ideas into a free marketplace!
Well-kept gardens die by pacifism. If you don’t think that a garden being well-kept is a good thing, that’s fine. Go live in a messy garden. Don’t actively undermine someone trying to clean up a garden that’s trying to be neat.
Alternately, “we used to feel comfortable telling users that they needed to just go read the Sequences. Why did that become less fashionable, again?”
Except that you’re actively undermining a thing which is either crucial to this site’s goals, or at least plausibly so (hence my flagging it for debate). The veneer of cooperation is not the same thing as actually not doing damage.
Strawmanning. Strawmanning.
Except that you’re actively undermining my attempt to pre-establish boundaries here. To enshrine, in a place called “LessWrong,” that the principles of reasoning and discourse promoted by LessWrong ought maybe be considered better than their opposites.
“The thing I want to do is strawman what you’re arguing for as ‘proactively harming people for failing to live up to an ideal,’ such that I can gently condescend to you about how it’s costly and cascades and leads to vaguely undefined bad outcomes. This is much easier for me to do than to lay out a model, or detail, or engage with the models and details that you went to great lengths to write up in your essays.”
STRAWMANNING. “You said [A]. Rather than engage with [A], I’m going to pretend that you said [B] and offer up a bunch of objections to [B], skipping over the part where those objections are only relevant if, and to the degree that, [A→B], which I will not bother arguing for or even detailing in brief.”
“I bet it is low, but rather than proposing a test, I’m going to just declare it impossible on the scale of this site.”
I tried to respond to the last two paragraphs above but it was so thoroughly not even bothering to try to reach across the inferential gap or cooperate—was so thoroughly in violation of the spirit you claim to be defending, but in no way exhibit, yourself—that I couldn’t get a grip on “where to begin.”
I am less confident than you are in your points, and I am also of the opinion that both of Jennifer’s comments were posted in good faith. I wanted to say, however, that I strongly appreciate your highlighting of this dynamic, which I myself have observed play out too many times to count. I want to reinforce the norm of pointing out fucky dynamics when they occur, since I think the failure to do this is one of the primary routes through which “not enough concentration of force” can corrode discussion; that alone would have been enough to merit a strong upvote of the parent comment.
(Separately I would also like to offer commiseration, since I perceive that you are Feeling Bad at the moment. It’s not clear to me what the best way is to do this, so I settled for adding this parenthetical note.)
I’d contend that a post can be “in good faith” in the sense of being a sincere attempt to communicate your actual beliefs and your actual reasons for them, while nonetheless containing harmful patterns such as logical fallacies, misleading rhetorical tricks, excessive verbosity, and low effort to understand your conversational partner. Accusing someone of perpetuating harmful dynamics doesn’t necessarily imply bad faith.
In fact, I see this distinction as being central to the OP. Duncan talks about how his brain does bad things on autopilot when his focus slips, and he wants to be called on them so that he can get better at avoiding them.
Calling this subthread part of a fucky dynamic is begging the question a bit, I think.
If I post something that’s wrong, I’ll get a lot of replies pushing back. It’ll be hard for me to write persuasive responses, since I’ll have to work around the holes in my post and won’t be able to engage the strongest counterarguments directly. I’ll face the exact quadrilemma you quoted, and if I don’t admit my mistake, it’ll be unpleasant for me! But, there’s nothing fucky happening: that’s just how it goes when you’re wrong in a place where lots of bored people can see.
When the replies are arrant, bad faith nonsense, it becomes fucky. But the structure is the same either way: if you were reading a thread you knew nothing about on an object level, you wouldn’t be able to tell whether you were looking at a good dynamic or a bad one.
So, calling this “fucky” is calling JenniferRM’s post “bullshit”. Maybe that’s your model of JenniferRM’s post, in which case I guess I just wasted your time, sorry about that. If not, I hope this was a helpful refinement.
(My sense is that dxu is not referring to JenniferRM’s post, so much as the broader dynamic of how disagreement and engagement unfold, and what incentives that creates.)
Endorsed.
Fair enough! My claim is that you zoomed out too far: the quadrilemma you quoted is neither good nor evil, and it occurs in both healthy threads and unhealthy ones.
(Which means that, if you want to have a norm about calling out fucky dynamics, you also need a norm in which people can call each others’ posts “bullshit” without getting too worked up or disrupting the overall social order. I’ve been in communities that worked that way but it seemed to just be a founder effect, I’m not sure how you’d create that norm in a group with a strong existing culture).
It’s often useful to have possibly false things pointed out to keep them in mind as hypotheses or even raw material for new hypotheses. When these things are confidently asserted as obviously correct, or given irredeemably faulty justifications, that doesn’t diminish their value in this respect, it just creates a separate problem.
A healthy framing for this activity is to explain theories without claiming their truth or relevance. Here, judging what’s true acts as a “solution” for the problem, while understanding available theories of what might plausibly be true is the phase of discussing the problem. So when others do propose solutions, do claim what’s true, a useful process is to ignore that aspect at first.
Only once there is saturation, and more claims don’t help new hypotheses to become thinkable, only then does this become counterproductive and possibly mostly manipulation of popular opinion.
This word “fucky” is not native to my idiolect, but I’ve heard it from Berkeley folks in the last year or two. Some of the “fuckiness” of the dynamic might be reduced if tapping out were treated as a respectable move in a conversation.
I’m trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days.
I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of lesswrong that I appreciate, but he might not.
If I get voted down, or not upvoted, I don’t care. My goal is to somehow help Duncan maybe be less confused and not suffer, and I am also not interested in “damaging lesswrong”.
I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?
Maybe he’s trying to respond to every response as a potential “cost of doing the great work” which he is willing to shoulder? But… I would expect him to get a sore shoulder though, eventually :-(
If “the general audience” is the causal locus through which a person’s speech act might accomplish something (rather than really actually wanting primarily to change your direct interlocutor’s mind (who you are speaking to “in front of the audience”)) then tapping out of a conversation might “make the original thesis seem to the audience to have less justification” and then, if the audience’s brains were the thing truly of value to you, you might refuse to tap out?
This is a real stress. It can take lots and lots of minutes to respond to everything.
Sometimes problems are so constrained that the solution set is empty, and in this case it might be that “the minutes being too few” is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like “being in the same room with a whiteboard nearby”. It is hard for me to math very well in the absence of shared scratchspace for diagrams.
Other options (that sometimes work) including PMs, or phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I’m coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C’est la vie <3
If your goal is to somehow help Duncan, you could start by ceasing to relentlessly and overconfidently proceed with wrong models of me.
I liked the effort put into this comment, and found it worth reading, but disagree with it very substantially. I also think I expect it to overall have bad consequences on the discussion, mostly via something like “illusion of transparency” and “trying to force the discussion to happen that you want to happen, and making it hard for people to come in with a different frame”, but am not confident.
I think the first one is sad, and something I expect would be resolved after some more rounds of comments or conversations. I don’t actually really know what to do about the second one, like, on a deeper level. I feel like “people wanting to have a different type of discussion than the OP wants to have” is a common problem on LW that causes people to have bad experiences, and I would like to fix it. I have some guesses for fixes, but none that seem super promising. I am also not totally confident it’s a huge problem and worth focussing on at the margin.
In light of your recent post on trying to establish a set of norms and guidelines for LessWrong (I think you accidentally posted it before it was finished, since some chunks of it were still missing, but it seemed to elaborate on things you put forth in stag hunt), it seems worthwhile to revisit this comment you made about a month ago that I commented on. In my comment I focused on the heat of your comment, and how that heat could lead to misunderstandings. In that context, I was worried that a more incisive critique would be counterproductive. Among other things, it would be increasing the heat in a conversation that I believed to be too heated. The other worries were that I expected you would interpret the critique as an attack that needed defending against, that I intuited you were feeling bad and that taking a very critical lens to your words would worsen your mood, and that this comment was going to take me a bunch of work (Author’s note: I’ve finished writing it. It took about 6 hours to compose, although that includes some breaks). In this comment, I’m going to provide that more incisive critique.
My goal is to engender a greater degree of empathy in you when you engage with commenters that disagree with you. This higher empathy would probably result in lower heat, which would allow you to come closer to the truth, since you would receive higher quality criticism. This is related to what habryka says here, where they say that ”...I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch...”, and Elizabeth says here that “I expect this feeling to be common, and for that lack of feedback to be detrimental to your model building even if you start out far above average.” In order to do this, I’m going to reread your Stag Hunt post, reread the comment chain leading up to your comment, and then do a line-by-line analysis of that comment looking for violations of the guidelines to rationalist discourse that you set in Stag Hunt.
My goal is twofold: to provide evidence that you would be helped by greater empathy (and lower heat) directed towards your critics, and to echo what I see as the meat of Jennifer’s comment; that if I were to adopt the framing I see in Stag Hunt, it would be on net detrimental to the LessWrong community.
Before all that, I want to reiterate: I like the beginning of your comment. Pointing out the rock-and-a-hard-place dilemma that you feel after reading her comment is a valuable insight, but I think that for the most part your comment would be stronger without the heated line-by-line critique of her comment. She gave you that invitation to do this and so the line-by-line focus on flaws in her comment is appropriate, but the heat you brought and your apparent confidence in assessing her mental state seems unwarranted. While you did not give such permission in that comment of yours, in the post itself you said:
I think that Jennifer’s comment was, in part, doing this. I agree that her comment was highly flawed, and many of the critiques in your line-by-line are valid, but I expect that the net effect of your comment is to discourage both comments like hers (which it seems to me you think are a net negative contribution to the discussion), and also comments like this one. I should note here a great irony in the fact that this particular comment of yours has garnered the most analysis of this sort by me compared to any of your others. I think this is simply because I take great joy in pointing out what I see as hypocrisies, and so I would be surprised if it generalized to a similar comment to this one that was made in a different context. The rubric I’ll be using to evaluate your comments is going to be the degree to which the comment falls into the mistakes you outline in Stag Hunt:
I added the numbers because that makes them easier to reference. I am sufficiently confused by 1, 2, and 9 that I don’t think I’d be able to identify them if I saw them, so I’ll ignore those. The rest I’ll summarize in one-or-two word phrases, which will make them easier to reference throughout in a way that is more legible to readers.
3: Overconfidence
4: Motte-and-bailey
5: [blank] (In the process of making this list, I couldn’t figure out a short handle for this that wasn’t just “Overconfidence” or “Strawmanning”, although there does seem to be a difference between this and those. I’m a bit stuck and confused here, presumably I’m lacking some understanding of what this is that would let me compress it.)
6: Failure to track uncertainty. (I’m not sure if this point is intended to be an instance of the broader class of not tracking uncertainty or specific to tracking guilt).
7: Failure of empathy.
8: Playing to the crowd.
You also accuse Jennifer of strawmanning throughout, which I’ll add to the argumentative tactics that you would like pointed out to you. I take strawmanning to mean “The act of presenting a weaker version of someone’s argument to argue against. This is most noticeable when paraphrasing their statement in words they would not endorse, and then putting those words in quotation marks”.
Before any analysis of your comment, I’d like to summarize Jennifer’s comment in my own words (from memory, I read her comment for the second time about 2 hours ago and I’m doing this while about 1⁄4 of the way through analyzing your comment):
This is presumably quite different from what she actually said, but that’s the essence of what I understood her to mean.
Anyways, enough exposition. I’ll be quoting everything you say, line by line, and doing my best to describe the degree to which it lapses into any of the fallacies outlined above. I’ll also provide running commentary to stitch everything together into a cohesive mass. Some lines won’t have any commentary, which I’ll denote with ”.”. If I interrupt a paragraph, I’ll end the quote with ”...” and begin the next quote with ”...”. I’m aiming for either dispassionate or empathetic tone throughout, wish me great skill:
This makes it easier for me to model you and improves my sense of clarity surrounding the disagreement since I read it as a description of how you see yourself and how you see the disagreement between yourself and Jennifer. This is far and away my favorite part of your post.
In my view the individual points take an overly negative view of the outcomes of your potential options. If you didn’t respond, I think you are overestimating the degree to which I and other commenters will think that Jennifer is right (relative to how “right” I think she is now, having read your response several times). If you responded in brief, it’s harder for me to guess how I would view your comment because you did not respond in brief. Had you only included the part quoted above, for instance, I would have flagged Stag Hunt and Jennifer’s comments as likely rooted in an unstated disagreement about something more fundamental than what the two of you are explicitly talking about, but I wouldn’t know what it was (although it’s hard to say how much of that is my current view intruding).
This comment supposes in a parenthetical that there are many things wrong with Jennifer’s comment, but has not yet fortified that claim. From a rhetorical standpoint, I see this as justifying the subsequent line-by-line analysis of Jennifer’s comment. It’s also not clear to me why the existence of essays that describe the issues with Jennifer’s comment make the citation of those essays in refuting her comment sensation-of-doom inducing. I’m guessing it’s because you believe that if an essay exists that describes the problematic outcomes of a rhetorical/argumentative device you are about to use, you should never use that device?
There might be some Overconfidence in here, since I suspect that (had people not read your comment) Jennifer’s comment would score less-than-the-mean in terms of its violation of site norms, although I don’t know how we would measure this (and therefore turn it into a bet, which would let you examine the degree to which your comment engages in Overconfidence for yourself).
I notice that this implies, but does not quite state, that Jennifer’s comment is bullshit.
Strawmanning. Jennifer’s comment seems closer to “while weeds may indeed exist, they are hard to differentiate from the plants the garden is intended to cultivate and may have no negative effects on those plants”.
I took Jennifer’s comment as disagreeing with that state of affairs, proposing that weeds might not be easily differentiable from non-weeds, and challenging the weeding/garden framing entirely. I think that Jennifer’s comment would be stronger if she spoke to the specific instances you highlighted in the parenthetical of commenting/upvotes-gone-awry, although I should note that I found the comments that did that elsewhere somewhat confusing.
This reads to me as a mixture of several things:
A statement about your own mind (i.e. that you feel you are losing a social war), which you are the true authority on.
A statement about the state of LessWrong norms (i.e. that you feel that LessWrong norms are bad, and that your current attempts to improve them have no impact)
A statement about me and others who are reading this exchange between you and Jennifer (that we have not noticed that Jennifer violates some discourse norms in her comment because she is upvoted: a Failure of empathy)
I also have a couple points I’d like to respond to:
When you say “I’ve answered the call that you’re making here...”, I don’t know what call you’re referencing.
You say that “there aren’t enough other people chiming in” in reference to “in-depth analysis of all the ways that a bad comment was bad”. I think that’s what I’m doing here (although I don’t endorse it phrased in those terms). I also feel discouraged w.r.t. making comments like these when I read that, although I’m not sure why. Perhaps I don’t like being told I’m on the losing side of a war. Perhaps I don’t like anticipating that this comment is futile.
This seems like a good critique.
That isn’t the effect that her rhetoric had on me, so I disagree with you on the object level.
I also think that normatively people ought to be cautious about reasoning about the consequences that other people’s comments might have on an imagined audience, since it seems like the sort of thing that can be leveraged to disparage many comments that are on net beneficial to the platform.
Strawmanning, playing to the crowd.
Failure of empathy. It seems to me that Jennifer’s dismissal of the importance of the relative scoring of a couple of comments stemmed from not seeing it tied to the point that the little things matter. There are 2173 words between the paragraph that begins “Yet I nevertheless feel that I encounter resistance of various forms when attempting to point at small things as if they are important...” and the paragraph in which you identify comments that had bad outcomes as measured by upvotes in your view (which begins “(I set aside a few minutes to go grab some examples...)”). That’s a fair bit of time to track that particular point. Do you expect everyone to track your arguments with that level of fidelity? Do you track others’ arguments that well? I’ll remark that I typically don’t, although I might manage to when it comes to pointing out hypocrisy because it’s something that I have a proclivity for.
I’ll also remark that I read this response as smug and dismissive, although my hypocrisy detector is rather highly tuned right now, and so I’m more likely to read hypocrisy when it isn’t present.
Strawmanning of the hypocritical variety.
I take Jennifer to be talking about the fact that the community does not agree with her with respect to voting norms (as measured by the behavior that she observes on LessWrong).
Her statement here seems to follow from her elsewhere stating that the goal of gardening is to grow the desired plants, and that weeding is largely immaterial to that goal. I agree that she has not provided a causal mechanism by which weeding, when brought back to the state of LessWrong comment culture, is immaterial to thriving plant life. However, I don’t recall you making the other argument in your OP. You gestured towards that fact and it rested as a background assumption in much of your post, but it’s not one that I remember you arguing or providing evidence for (beyond the claim that you are better than average at detecting the degree to which such things are problematic). I’m not going to re-re-read your OP to check this, but if you did make this claim I would like to hear it.
I did not read her comment as a zinger. Also playing to the audience.
Hmm, it looks like I also missed your argument in favor of the cost effectiveness of adversarial attacks on the weeds. I recall that your previous essay discussed the value of a concentration of force, which is a reason to support such attacks, but is not an argument about its cost effectiveness (you say a valuable use of resources, and I use cost effective. If there’s a material difference there, let me know).
Strawmanning.
From memory, you listed fallacies that you yourself tended to fall into, but when it came to evidence taken from other commenters it was a list of links without much context. There’s also a difference between having a list of fallacies and having a mechanism by which those fallacies can be detected and corrected. Perhaps you’re referring to the list of ideas that you list as “bad ideas” at the end, but then I’m confused about the degree to which you actually believe they’re bad ideas. If she is saying that the strategy of selecting for weeds against desirable plants is necessary before the call to action (she is saying something probably importantly different, but tracking points of view is getting exhausting), and you have preemptively agreed that you do not have a good mechanism to do this, then I don’t understand why you disagree with her disagreement here.
I feel I’ve talked about this particular phrase enough.
Strawmanning.
Failure of empathy, and possibly playing to the audience (to the extent that you are accusing her of playing to the audience without outright saying it).
Good!
Overconfidence.
To the extent that you’re accusing Jennifer of sneering about you caring about rationalist discourse norms on LessWrong, this is a failure of empathy.
My understanding of Jennifer’s comment is that she believes you will make the garden messier with the arguments you are putting forth in Stag Hunt.
I don’t know the extent to which this is a rhetorical question, but to answer it earnestly I would expect that telling a user to read the sequences is an act that takes several orders of magnitude less effort than actually reading the sequences. I’m not confident about what the relative orders of magnitude should be between the critique-er and the critique-ee, but 1:2 (for a total of 1:10 effort) is where my intuition places the ratio. Reading a comment, deciding that it is unworthy of LessWrong discourse norms, and typing “read the sequences” is probably closer to a 1:5 ratio between the orders of magnitude of effort (i.e. it takes 100,000 times as much effort to read the entirety of the sequences as it does to make such a comment).
This read to me as Jennifer stating her desire for cooperation, which is a signal that doesn’t come free! It cost her something, at a minimum the effort to type it.
Your response reads to me as throwing that request for cooperation back in her face and using her intent to cooperate as evidence that she is somehow even less cooperative than you expected prior to this statement. It’s possible that you just intended to disagree with her on the material fact that she intends cooperation, or to observe that her actions do not align with her words.
I agree that the beginning of that statement is strawmanning.
The core of that statement, in my eyes, is its final claim: that if she agreed with the argument you put forth in Stag Hunt as she understands it, she would advocate for your banning.
To avoid further illusions of transparency, I’ll analyze how I would act if I based my actions on what I understand you to argue in Stag Hunt: If I were to suspend my own judgment and base my actions solely on my best attempt to interpret what you advocate for in Stag Hunt, I would strong downvote your comment because I see it as much, much more “weed-like” than the average comment on LessWrong. It is a violation of the point of view you put forth in Stag Hunt because it normalizes bad forms (I suspect it succeeds despite this because it is prefaced with a valuable insight). I believe it normalizes bad forms because I see it as strawmanning, projecting statements and actions into others’ minds, pretending to speak to Jennifer while actually speaking mostly to the LessWrong community at large, and failing to retain skepticism that you might have deceived yourself w.r.t. the extent of Jennifer’s violations of rationalist discourse.
Instead, I weakly upvoted it because the first part of it is very useful, and responded to what I saw as the primary fault with the rest of it; that you engaged with Jennifer’s comment from a very conflict-centric point of view which led to high heat. As a result of this framing, you misunderstood most of her comment.
The boundaries that Jennifer is referring to here are boundaries on the extent of the conflict. What you advocate for in Stag Hunt is an expanding of those boundaries, and it was not clear to me upon reading it where those boundaries would end.
While I agree that Jennifer is strawmanning here, this is the second instance of accusing Jennifer of strawmanning while strawmanning.
Same as above.
Strawmanning. I take Jennifer as reiterating one of her central points here: if we take it as true that there are good comments and bad comments, and that we want to do something about the bad comments, then through what policy are we going to identify those bad comments (leaving aside what we then do about those bad comments)?
You had what you remarked were very bad ideas. Jennifer’s argument rests on the claim that such methods are rare, costly, or do not exist (though she does not make that claim explicit).
This seems mean to me. You already don’t quote everything she says, you don’t have to remark on those last two paragraphs.
I’m not sure that going line by line was the most effective way to achieve my goals. It was costly, but I didn’t see another way to get you to internalize the fact that people are regularly taking costly measures to try to improve your model of the world, and I see you as largely ignoring them or accusing them of wrongdoing. Not all critiques of your work can be as comprehensive as mine is here, since as you pointed out, “it’s easier to produce bullshit than to refute bullshit” (I granted myself this one zinger as motivation for finishing this comment, if others remain in the text they are not intended).
Meta-question: Is this the sort of thing that’s appropriate to post as a top-level post? It seems fairly specific, but I worked hard on it and I imagine it as encapsulating the virtues that you put forth in Stag Hunt and your hopefully-soon-to-be-posted guidelines for rationalist discourse.
Edited for clarity on the 1:5 point and a few typos.
I’m glad you took the time to respond here, and there is a lot I like about this comment. In particular, I appreciate this comment for:
Being specific without losing sight of the general message of the parent comment.
Sharing how you see your situation at the outset, which puts the tone of the comment in context.
Identifying clear points of disagreement where possible.
There are, however, some points of disagreement I’d like to raise and some possible deleterious consequences I’d like to flag.
I share the concern raised by habryka about the illusion of transparency, which may be increasing your confidence that you are interpreting the intended meaning (and intended consequences) of Jennifer’s words. I’ll go into (possibly too much) detail on one very short example of what you’ve written and how it may involve some misreading of Jennifer’s comment. You quote Jennifer:
and respond:
I was also confused about what you meant by epistemic hygiene when finishing the essays. Elsewhere someone asked whether they were one of the ones doing the bad thing you were gesturing towards, which is another question/insecurity I shared (I do not recall how you responded to that question). It is hopefully clear that when I say this here, in this way, it is not a trap for you. It’s a statement of my confusion embedded in a broader point, and I hope you feel no obligation to respond. The point of this exposition isn’t to get clarity on that point, it’s to (hopefully) inspire a shift of perspective. Your comment struck me as very high heat; that heat reflects a particular perspective. I don’t know exactly what that perspective is, but it seems to me that you saw Jennifer’s comments as threats. To the extent that you see a comment as a threat, the individual components of the comment take on more sinister airs. I tend to post in a calm tone, so most people have difficulty maintaining perspectives that see me as a threat. The perspective I’m hoping to effect in you is one of collaboration. I am hoping to leverage my nonthreatening way of raising the same confusion as Jennifer so that it is more natural to see that question of Jennifer’s in a nonthreatening light. In doing so, I’m hoping to provide a method by which her comment as a whole takes on a less threatening tone. (Again, I expect this characterization of your perspective to be wrong in important ways—you may not see her comment as precisely “threatening”.)
Framing her question as a trap also implies that it was “set”, i.e. that putting you in a weakened position was part of her intent (although you might not have intended to imply this). It’s possible that Jennifer had this intention, but I don’t know and I suspect that you don’t either. Perhaps you meant that it was a trap in the normative sense, i.e. that because Jennifer included that question you are placed (whether Jennifer intends it or not) in a no-win situation; that it’s a statement about you (i.e. you have been trapped even if no one is a hunter setting traps). In the context of your high-heat comment, however, I as a reader expect that you believe Jennifer intended it as a trap.
I mentioned that I was trying to shift your perspective to one of collaboration, but I never gave the motivation for why. What are some of the negative consequences of the high-heat framing? I expect that you will get less of the kind of feedback you want on your posts. I tend to avoid social conflict—particularly social conflict that is high in heat. This neuroticism makes me disinclined to converse with people who adopt high-heat tones, in part because I worry that I will get a high-heat reaction. I do not think I would attempt to convey a broad-scope confusion/disagreement with you of the type that Jennifer did here. I would probably choose to nitpick or simply not respond instead, letting the general confusion remain (in part I do this here; quibbling over tone instead of trying to resolve the major points of confusion with your post. I might try to figure out how to describe my confusion with your post and ask you later). Now, I don’t think you should be optimizing solely to get broad-scope-disagreement/confusion responses from neurotic people like me, but I expect you to want to know how your responses are received. The high heat from this comment, even though it is not directed at me, makes me (very slightly) afraid of you.
This relates back to Elizabeth’s comment elsewhere, where she says
I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question-the-validity-of-your-thesis variety). Mostly this is a fault of mine, but high heat responses are part of what I fear when I do not respond (there are lots of other things too, so please do not update strongly on times when I do not respond).
It’s likely that this comment should have contained (or simply been entirely composed of) questions, since it instead relied on a fair bit of speculation on my part (although I tried to make most of my statements about my reading of your comment rather than your comment itself). I’m including some of those questions here instead of doing the hard work of rewriting my comment to include them in more natural places (along with some other questions I have). I also don’t think it would be productive to respond to all of these at once, so respond only to the ones that you feel like:
Did you find my response nonthreatening?
Do you feel a difference in reaction to my stating confusion at epistemic hygiene and Jennifer stating confusion at that point?
Was my description of how I was trying to change your perspective as I was trying to change your perspective trust-increasing? (I am somewhat concerned that it will be perceived as manipulative)
Do you find my characterization of your perspective, where Jennifer’s comment is/was a threat, accurate?
Is a more collaborative perspective available to you at this moment?
If it is, do you find it changes your emotional reaction to Jennifer’s comment?
Do you feel that your comment was high heat?
If so, what goals did the high heat accomplish for you?
And, do you believe they were worth the costs?
Did you find my comment welcome?
I share dxu’s perception that you are Feeling Bad and want to extend you some sympathy (my expectation is that you’ll enjoy a parenthetical here—all the more if I go meta and reference dxu’s parenthetical—so here it is with reference and all).
EDIT: jessica → Jennifer. Thanks localdeity.
In part, this is because a major claim of the OP is “LessWrong has a canon; there’s an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts).” I didn’t set out to describe and define epistemic hygiene within the essay, because one of my foundational assumptions is “this work has already been done; we’re just not holding each other to the available existing standards found in all the highly upvoted common memes.”
This is evidence I wasn’t sufficiently clear. The “trap” I was referring to was the bulleted dynamic, whereby I either cede the argument or have to put forth infinite effort. I agree that it wasn’t at all likely deliberately set by Jennifer, but also there are ways to avoid accidentally setting such traps, such as not strawmanning your conversational partner.
(Strawmanning being, basically, redefining what they’re saying in the eyes of the audience. Which they then either tacitly accept or have to actively overturn.)
I think that, in the context of an essay specifically highlighting “people on this site often behave in ways that make it harder to think,” doing a bunch of the stuff Jennifer did is reasonably less forgivable than usual. It’s one thing to, I dunno, use coarse and foul language; it’s another thing to use it in response to somebody who’s just asked that we maybe swear a little less. Especially if the locale for the discussion is named LessSwearing (i.e. the person isn’t randomly bidding for the adoption of some out-of-the-blue standard).
Yes. I do not think it was a genuine attempt to engage or converge with me (the way that Said, Elizabeth, johnswentworth, supposedlyfun, and even agrippa were clearly doing or willing to do), so much as an attempt to condescend, lecture, and belittle, and the crowd of upvotes seemed to indicate either general endorsement of those actions, or a belief that it’s fine/doesn’t matter/isn’t a dealbreaker. This impression has not shifted much on rereads, and is reminiscent of exactly the prior experiences on LW that caused me to feel the need to write the OP in the first place.
Yes.
Yes.
It was trust-increasing and felt cooperative throughout.
For the most part, yes.
I’m not quite sure what you’re asking, here. I can certainly access a desire to collaborate that is zero percent contingent on agreement with my claims.
No, or at least not yet. supposedlyfun, for example, seems at least as “hostile” as Jennifer on the level of agreement, but at least bothered to cut out paragraphs they estimated would be likely to be triggering, and mention that fact. That’s a costly signal of “look, I’m really trying to establish a handshake, here,” and it engendered substantial desire to reciprocate. You, too, are making such costly signals. If Jennifer chose to, that would reframe things somewhat, but in Jennifer’s second comment there was a lot of doubling down.
Yes.
This presupposes that it was … sufficiently strategic, or something?
Goals that were not necessarily well-achieved by the reply:
Putting object-level critique in a public place, so the norm violations didn’t go unnoticed (I’m not confident anyone else would have objected to the objectionable stuff)
Demonstrating that at least one person will in fact push back if someone does the epistemically sloppy bullying thing (I regularly receive messages thanking me for this service)
I don’t actively believe this, no. It seems like it could still go either way. I would be slightly more surprised by it turning out worth it, than by it turning out not worth it.
Yes.
This is an example of the illusion of transparency issue. Many salient interpretations of what this means (informed by the popular posts on the topic, that are actually not explicitly on this topic) motivate actions that I consider deleterious overall, like punishing half-baked/wild/probably-wrong hypotheses or things that are not obsequiously disclaimed as such, in a way that’s insensitive to the actual level of danger of being misleading. A more salient cost is nonsense hogging attention, but that doesn’t distinguish it from well-reasoned clear points that don’t add insight hogging attention.
The actually serious problem is when this is a symptom of not distinguishing epistemic status of ideas on the part of the author, but then it’s not at all clear that punishing publication of such thoughts helps the author fix the problem. The personal skill of correctly tagging the epistemic status of ideas in one’s own mind is what I think of as epistemic hygiene, but I don’t expect this to be canon, and I’m not sure there isn’t serious disagreement on this point with other people who have also thought about it. For one, the interpretation I have doesn’t specify community norms, and I don’t know what epistemic-hygiene-the-norm should be.
[Obvious disclaimer: I am not Duncan, my views are not necessarily his views, etc.]
It seems to me that your comment is [doing something like] rounding off Duncan’s position to [something like] conflict theory, and contrasting it to the alternative of a mistake-oriented approach. This impression mostly comes from passages like the following:
To the extent that this impression is accurate, I suspect you and Duncan are (at least somewhat) talking past each other. I don’t want to claim I have a strong model of Duncan’s stance on this topic, but the model I do have predicts that he would not endorse summaries of his positions along the lines of “people can’t grow, errors are defection, ban the defectors”; nor do I think he would endorse a summary of his prescriptions as “more poking”, “more fighting”, or “more conflict”.
Why is this an important clarification, in my view? Well, firstly, on the meta-level I should note that I don’t find the “conflict versus mistake” lens particularly convincing; my feeling is that it fails to carve reality at the joints in at least some important ways, in at least some important situations. This makes me in general suspicious of arguments that [seem to me to] depend on this lens (in the sense of containing steps that route substantially through the lens in question). Of course, this is not necessarily an indictment of that lens’ applicability in any specific case, but I think it’s worth mentioning nonetheless, just to give an idea of the kind of intuitions I’m starting with.
In terms of the argument as it applies to this specific case: I don’t think my model of Duncan particularly cares about the inherent motivations behind [what he would consider] violations of epistemic hygiene. Insofar as he does care about those motivations, I think it is only indirectly, in that he predicts different motivations will cause different reactions to pushback, and perhaps “better” motivations (to use a somewhat value-loaded term) will result in “better” reactions.
Of course, this is all very abstract, so let me be more specific: my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all. My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group’s likelihood of doing this is significantly lower.
Model!Duncan then argues that, if the ambient level of pushback crosses a certain threshold, this will make being a perennial member of the second group unpleasant enough to be psychologically unsustainable; either they will self-modify into a member of the first group, or (more likely) they will simply leave. Model!Duncan’s view is that the departure of such members is not a great loss to LW, and that LW should therefore strive to increase its level of ambient pushback, which (if done in a good way) translates to increasing epistemic standards on a site level.
Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this—which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.
Note also that this model makes no assumption that epistemic violations (“errors”) are in any way equivalent to “defection”, intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent. And, on the flipside of that coin, it is not so strange to imagine that even people who are striving to escape from the default human behavior may still need gentle reminders from time to time.
(And if there are people on this site who do not so strive, and for whom the reminders in question serve no purpose but to annoy and frustrate, to the point of making them leave—well, says model!Duncan, so much the worse for them, and so much the better for LW.)
Finally, note that at no point have I made an attempt to define what, exactly, constitutes “epistemic violations”, “epistemic standards”, or “epistemic hygiene”. This is because this is the point where I am least confident in my model of Duncan, and separately where I also think his argument is at its weakest. It seems plausible to me that, even if [something like] Duncan’s vision for LW were to be realized, there would still be substantial remaining disagreement about how to evaluate certain edge cases, and that that lack of consensus could undermine the whole enterprise.
(Though my model of Duncan does interject in response to this, “It’s okay if the edge cases remain slightly blurry; those edge cases are not what matter in the vast majority of cases where I would identify a comment as being epistemically unvirtuous. What matters is that the central territory is firmed up, and right now LW is doing extremely poorly at picking even that low-hanging fruit.”)
((At which point I would step aside and ask the real Duncan what he thinks of that, and whether he thinks the examples he picked out from the Leverage and CFAR/MIRI threads constitute representative samples of what he would consider “central territory”.))
Thank you for this great comment. I feel bad not engaging with Duncan directly, but maybe I can engage with your model of him? :-)
I agree that Duncan wouldn’t agree with my restatement of what he might be saying.
What I attributed to him was a critical part (that I object to) of the entailment of the gestalt of his stance or frame or whatever. My hope was that his giant list of varying attributes of statements and conversational motivations could be condensed into a concept with a clean intensive definition other than a mushy conflation of “badness” and “irrational”. For me these things are very very different and I’ll say much more about this below.
One hope I had was that he would vigorously deny that he was advocating anything like what I mentioned by making clear that, say, he wasn’t going to wander around (or have large groups of people wander around) saying “I don’t like X produced by P and so let’s impose costs (ie sanctions (ie punishments)) on P and on all X-like things, and if we do this search-and-punish move super hard, on literally every instance, then next time maybe we won’t have to hunt rabbits, and we won’t have to cringe and we won’t have to feel angry at everyone else for game-theoretically forcing ‘me and all of us’ to hunt measly rabbits by ourselves because of the presence of a handful of defecting defectors who should… have costs imposed on them… so they evaporate away to somewhere that doesn’t bother me or us”.
However, from what I can tell, he did NOT deny any of it? In a sibling comment he says:
But the thing is, the reason I’m not engaging with his hypothesis is that I don’t even know what his hypothesis is, other than trivially obvious things that have been true, but which it has always been polite to mostly ignore?
Things have never been particularly good, is that really “a hypothesis”? Is there more to it than “things are bad and getting worse”? The hard part isn’t saying “things are imperfect”.
The hard part, as I understand it, is figuring out a cheap and efficient solution that actually works, and that works systematically, in ways that anyone can use once they “get the trick”, like how anyone can use arithmetic. He doesn’t propose any specific coherent solution that I can see? It is like he wants to offer an affirmative case, but he’s only listing harms (and boy does he stir people up on the harms); he doesn’t have a causal theory of the systematic cause of those harms in the status quo, he doesn’t have a specific plan to fix them, and he doesn’t demonstrate that the plan mechanistically links to the harms in the status quo. So if you just grant the harms… that leaves him with a blank check to write more detailed plans that are consistent with the gestalt frame that he’s offered? And I think this gestalt frame is poorly grounded, and likely to authorize much that is bad.
Speaking of models, I like this as the beginning of a thoughtful distinction:
I’m not sure if Duncan agrees with this, but I agree with it, and relevantly I think it is likely that neither Duncan nor I consider ourselves in the first category. I think both of us see ourselves as “doctors around these parts” rather than “patients”? Then I take Duncan’s advocacy to move in the direction of a prescription, and his prescription sounds to me like bleeding the patient with leeches. It sounds like a recipe for malpractice.
Maybe he thinks of himself as being around here more as a patient or as a student, but, this seems to be his self-reported revealed preference for being here:
(By contrast I’m still taking the temperature of the place, and thinking about whether it is useful to my larger goals, and trying to be mostly friendly and helpful while I do so. My larger goals are in working out a way to effectively professionalize “algorithmic ethics” (which was my last job title) and get the idea of it to be something that can systematically cause pro-social technology to come about, for small groups of technologists, like lab workers and programmers who are very smart, such that an algorithmic ethicist could help them systematically not cause technological catastrophes before they explode/escape/consume or otherwise “do bad things” to the world, and instead cause things like green revolutions, over and over.)
So I think that neither of us (neither me nor Duncan) really expects to “grow as Rationalists” here because of “the curriculum”? Instead we seem to me to both have theories of what a good curriculum looks like, and… his curriculum leaves me aghast, and so I’m trying to just say so, even if it might cut against his presumptively valid selfish goals for and around this website.
Stepping forward, this feels accurate to me:
So my objection here is simply that I don’t think that “shifting one’s epistemics closer to the ideal” is a universal solvent, nor even a single coherent unique ideal.
The core point is that agency is not simply about beliefs, it is also about values.
Values can be objective: the objective needs for energy, for atoms to put into shapes to make up the body of the agent, for safety from predators and disease, etc. Also, as planning becomes more complex, instrumentally valuable things (like capital investments) are subject to laws of value (related to logistics and option pricing and so on) and if you get your values wrong, that’s another way to be a dysfunctional agent.
VNM rationality (which, if it is not in the canon of rationality right now, then the canon of rationality is bad) isn’t just about probabilities being Bayesian; it is also about expected values being linearly orderable and having no privileged zero, for example.
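To make “no privileged zero” concrete, here is the uniqueness clause of the VNM theorem (my gloss of the standard statement, not a quote from anyone in this thread): if a utility function u represents the agent’s preferences over lotteries, then so does every positive affine transformation of it,

$$u'(x) = a\,u(x) + b, \qquad a > 0,\ b \in \mathbb{R},$$

so expected utilities give you an ordering, but no privileged zero point and no privileged scale.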
Most of my professional work over the last 4 years has not hinged on having too little Bayes. Most of it has hinged on having too little mechanism design, and too little appreciation for the depths of Coase’s theorem, and too little appreciation for the sheer joyous magic of humans being good and happy and healthy humans with each other, who value and care about each other FIRST and then USE epistemology to make our attempts at caring work better.
Over in that other sibling comment Duncan is yelling at me for committing logical fallacies, and he is ignoring that I implied he was bad and said that if we’re banning the bad people maybe we should ban him. That was not nice of me at all. I tried to be clear about this sort of thing here:
But he just… ignored it? Why didn’t he ask for an apology? Is he OK? Does he not think of people on this website as people who owe each other decent treatment?
My thesis statement, at the outset, such as it was:
So like… the lack of an ability to acknowledge his own validly selfish emotional needs… the lack of a request for an apology… these are related parts of what feels weird to me.
I feel like a lot of people’s problems aren’t rationality, as such… like knowing how to do modus tollens or knowing how to model and then subtract out the effects of “nuisance variables”… the main problem is that truth is a gift we give to those we care about, and we often don’t care about each other enough to give this gift.
To return to your comments on moral judgements:
I don’t understand why “intent” arises here, except possibly if it is interacting with some folk theory about punishment and concepts like mens rea?
“Defecting” is just “enacting the strategy that causes the net outcome for the participants to be lower than otherwise, for reasons partly explainable by local selfishness”. You look at the rows you control and find the one that is best for you. Then you look at the columns and ask which is best for the other player. Then maybe you change your row in reaction. Robots can do this without intent. Chessbots are automated zero sum defectors (and the only reason we like them is that the game itself is fun, because it can be fun to practice hating and harming in small local doses (because play is often a safe version of violence)).
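To make the “rows and columns” picture concrete, here is a minimal sketch (my own illustration, with conventional prisoner’s dilemma payoffs, not numbers taken from anything in this thread) of “defection without intent”: a rule that only maximizes its own row, given a guess about the column, lands on defection with no model of harm at all.

```python
# Hypothetical payoff numbers for illustration; (row_payoff, col_payoff), higher is better.
PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

def best_row_response(game, col_move):
    """Pick the row move with the highest row payoff against a fixed column move."""
    row_moves = {row for (row, _col) in game}
    return max(row_moves, key=lambda row: game[(row, col_move)][0])

# Whatever the column player does, bare row-maximization lands on "defect".
print(best_row_response(PD, "cooperate"))  # defect
print(best_row_response(PD, "defect"))     # defect
```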
People don’t have to know that they are doing this to do this. If a person violates quarantine protocols that are selfishly costly, they are probably not intending to spread disease into previously clean areas where mitigation practices could be low cost. They only intend to, like… “get back to their kids who are on the other side of the quarantine barrier” (or whatever). The millions of people whose health in later months they put at risk are probably “incidental” and/or “unintentional” to their violation of quarantine procedures.
People can easily be modeled as “just robots” who “just do things mechanistically” (without imagining alternatives, or doing math, or running an inner simulator, or otherwise trying to take all the likely consequences into account and imagining themselves personally responsible for everything under their causal influence, and so on).
Not having mens rea, in my book, does NOT necessarily mean they should be protected, if their automatic behaviors hurt others.
I think this is really really important, and that “theories about mens rea” are a kind of thoughtless crux that separates me (who has thought about it a lot) from a lot of naive people who have relatively lower quality theories of justice.
The less intent there is, the worse it is from an easy/cheap harms-reduction perspective.
At least with a conscious villain you can bribe them to stop. In many cases I would prefer a clean honest villain. “Things” (fools, robots, animals, whatever) running on pure automatic pilot can’t be negotiated with :-(
...
Also, Duncan seems very very attached to the game-theory “stag hunt” thing? Like over in a cousin comment he says:
(I kind of want to drop this, because it involves psychologizing, and even when I privately have detailed psychological theories that make high quality predictions that other people will do bad things, I try not to project them, because maybe I’m wrong, and maybe there’s a chance for them to stop being broken, but:
I think of “stag hunt” as a “Duncan thing” strongly linked to the whole Dragon Army experiment and not “a part of the lesswrong canon”.
Double cruxing is something I’ve been doing for 20 years, but not under that name. I know that CFAR got really into it as a “named technique”, but they never put that on LW in a highly formal way that I managed to see, so it is more part of a “CFAR canon” than a “Lesswrong canon” in my mind?
And so far as I’m aware “strawmanning” isn’t even a rationalist thing… it’s something from old school “critical thinking and debate and rhetoric” content? The rationalist version is to “steelman” one’s opponents, who are assumed to need help making their point, which might actually be good but has so far been poorly expressed by one’s interlocutor.
I am consciously lowering my steelmanning of Duncan’s position. My objection is to his frame in this case. Like I think he’s making mistakes, and it would help him to drop some of his current frames, and it would make lesswrong a safer place to think and talk if he didn’t try to impose these frames as a justification for meddling with other people, including potentially me and people I admire.)
...
Pivoting a bit, since he is so into the game theory of stag hunts… my understanding is that in a 2-person Stag Hunt, a single member of the team playing rabbit causes both to fail to “get the benefit”, so it becomes essential to get perfect behavior from literally everyone. The key difference from a prisoner’s dilemma is that “non-defection (to get the higher outcome)” is a Nash equilibrium: if the other player hunts stag, hunting stag is also your best move (mismatching only hurts you), so mutual cooperation is self-enforcing in a way it isn’t in a prisoner’s dilemma.
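A minimal sketch of that difference (my own toy payoffs, not anything from the OP): in the stag hunt below, both “both hunt stag” and “both hunt rabbit” survive a unilateral-deviation check, which is exactly what makes the cooperative outcome stable in a way it isn’t in a prisoner’s dilemma.

```python
# Hypothetical stag hunt payoffs; (row_payoff, col_payoff) for (row_move, col_move).
STAG_HUNT = {
    ("stag", "stag"): (4, 4),
    ("stag", "rabbit"): (0, 3),
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),
}

def is_nash(game, row, col):
    """True if neither player gains by unilaterally switching their own move."""
    moves = {m for profile in game for m in profile}
    row_ok = all(game[(row, col)][0] >= game[(alt, col)][0] for alt in moves)
    col_ok = all(game[(row, col)][1] >= game[(row, alt)][1] for alt in moves)
    return row_ok and col_ok

for (row, col) in STAG_HUNT:
    print((row, col), is_nash(STAG_HUNT, row, col))
# ('stag', 'stag') and ('rabbit', 'rabbit') print True; the mismatched pairs print False.
```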
A group of 5 playing stag hunt, with a history of all playing stag, loves their equilibrium and wants to protect it and each probably has a detailed mental model of all the others to keep it that way, and this is something humans do instinctively, and it is great.
But what about N>5? Suppose you are in a stag hunt where each of N persons has probability P of failing at the hunt, and “accidentally playing rabbit”. Then everyone gets a bad outcome with probability (1-(1-P)^N). So almost any non-trivial value of N causes group failure.
If you see that you’re in a stag hunt with 2000 people: you fucking play rabbit! That’s it. That’s what you do.
Even if the chance of each person succeeding is 99.9% and you have 2000 in a stag hunt… the hunt succeeds with probability 13.52%, and that stag had better be really really really really valuable. Mostly it fails, even with that sort of superhuman success rate.
But there’s practically NOTHING that humans can do with better than maybe a 98% success rate. Once you take a realistic 2% chance of individual human failure into account, with 2000 people in your stag hunt the probability of a successful hunt is about 2.83x10^-18 (roughly 1 in 3.5x10^17).
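For anyone who wants to check the arithmetic, here is a minimal sketch (mine; it assumes independent, identical per-person success probabilities, which is of course a simplification):

```python
def stag_hunt_success_probability(per_person_success: float, n: int) -> float:
    """P(all n hunters succeed), assuming independent, identical success chances."""
    return per_person_success ** n

print(stag_hunt_success_probability(0.999, 2000))  # ~0.1352, the 13.52% figure above
print(stag_hunt_success_probability(0.98, 2000))   # ~2.83e-18, roughly 1 in 3.5e17
```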
If you are in a stag hunt like this, it is socially and morally and humanistically correct to announce this fact. You don’t play rabbit secretly (because that hurts people who didn’t get the memo).
You tell everyone that you’re playing rabbit, even if they’re going to get angry at you for doing so, because you care about them.
You give them the gift of truth because you care about them, even if it gets you yelled at and causes people with dysfunctional emotional attachments to attack you.
And you teach people rabbit hunting skills, so that they get big rabbits, because you care about them.
And if someone says “we’re in a stag hunt that’s essentially statistically impossible to win and the right answer is to impose costs on everyone hunting rabbit” that is the act of someone who is either evil or dumb.
And I’d rather have a villain, who knows they are engaged in evil, because at least I can bribe the villain to stop being evil.
You mostly can’t bribe idiots, more’s the pity.
I think maybe your model of Duncan isn’t doing the math and reacting to it sanely?
Maybe by “stag hunt” your model of Duncan means “the thing in his head that ‘stag hunt’ is a metonym for”, and this phrase does not have a gears-level model with numbers (backed by math that one can plug-and-chug) driving its conclusions in clear ways, like long division leads clearly to a specific result at the end?
An actual piece of the rationalist canon is “shut up and multiply” and this seems to be something that your model of Duncan is simply not doing about his own conceptual hobby horse?
I might be wrong about the object level math. I might be wrong about what you think Duncan thinks. I might be wrong about Duncan himself. I might be wrong to object to Duncan’s frame.
But I currently don’t think I am wrong, and I care about you and Duncan and me and humans in general, and so it seemed like the morally correct (and also epistemically hygienic) thing was to flag my strong hunch (which seems wildly discrepant from Duncan’s hunches, as far as I understand them) about how best to make lesswrong a nurturing and safe environment for people to intellectually grow while working on ideas with potentially large pro-social impacts.
Duncan is a special case. I’m not treating him like a student, I’m treating him like an equal who should be able to manage himself and his own emotions and his own valid selfish needs and the maintenance of boundaries for getting these things, and then, to this hoped-for-equal, I’m saying that something he is proposing seems likely to be harmful to a thing that is large and valuable. Because of mens rea, because of Dunbar’s Number, because of “the importance of N to stag hunt predictions”, and so on.
dxu:
Jennifer:
Duncan, in the OP, which Jennifer I guess skimmed:
I see that you have, in fact, caught me in a simplification that is not consistent with literally everything you said.
I apologize for over-simplifying, maybe I should have added “primarily” and/or “currently” to make it more literally true.
In my defense, and to potentially advance the conversation, you also did say this, and I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood… maybe looking to score points for unfairness?
My model here is that this is your self-identified “revealed preference” for actually being here right now.
Also, in my experience, revealed preferences are very very very important signals about the reality of situations and the reality of people.
This plausible self-described revealed preference of yours suggests to me that you see yourself as more of a teacher than a student. More of a producer than a consumer. (This would be OK in my book. I explicitly acknowledge that I see myself as more of a teacher than a student round these parts. I’m not accusing you of something bad here, in my own normative frame, though perhaps you feel it as an attack because you have different values and norms than I do?)
It is fully possible, I guess, (and you would be able to say this much better than I) that you would actually rather be a student than a teacher?
And it might be that you see this as being impossible until or unless LW moves from a rabbit equilibrium to a stag equilibrium?
...
There’s an interesting possible equivocation here.
(1) “Duncan growing as a rationalist as much and as fast as he (can/should/does?) (really?) wants does in fact require a rabbit-to-stag Nash equilibrium shift among all of lesswrong”.
(2) “Duncan growing as a rationalist as much and as fast as he wants seems to him to require a rabbit-to-stag Nash equilibrium shift among all of lesswrong… which might then logically require removing literally every rabbit player from the game, either by conversion to playing stag or by banning”.
These are very similar. I like having them separate so that I can agree and disagree with you <3
Also, consider then a third idea:
(3) A rabbit-to-stag nash equilibrium shift among all of lesswrong is wildly infeasible because of new arrivals, and the large number of people in-and-around lesswrong, and the complexity of the normative demands that would be made on all these people, and various other reasons.
I think that you probably think 1 and 2 are true and 3 is false.
I think that 2 is true, and 3 is true.
Because I think 3 is true, I think your implicit(?) proposals would likely be very costly up front while having no particularly large benefits on the backend (despite hopes/promises of late arriving large benefits).
Because I think 2 is true, I think you’re motivated to attempt this wildly infeasible plan and thereby cause harm to something I care about.
In my opinion, if 1 is really true, then you should give up on lesswrong as being able to meet this need, and also give up on any group that is similarly large and lacking in modular sub-communities, and lacking in gates, and lacking in an adequate intake curriculum with post-tests that truly measure mastery, and so on.
If you need growth as a rationalist to be happy, AND its current shape (vis-a-vis stag hunts etc) means this website is a place that can’t meet that need, THEN (maybe?) you need to get those needs met somewhere else.
For what it’s worth, I think that 1 is false for many many people, and probably it is also false for you.
I don’t think you should leave, I just think you should be less interested in a “pro-stag-hunting jihad” and then I think you should get the need (that was prompting your stag hunting call) met in some new way.
I think that lesswrong as it currently exists has a shockingly high discourse level compared to most of the rest of the internet, and I think that this is already sufficient to arm people with the tools they need to read the material, think about it, try it, and start catching really really big rabbits (that is, coming to truly make some new and true and very useful ideas a part of themselves), and then give rabbit hunting reports, and to share rabbit hunting techniques, and so on. There’s a virtuous cycle here potentially!
In my opinion, such a “skill building in rabbit hunting techniques” sort of rationality… is all that can be done in an environment like this.
Also I think this kind of teaching environment is less available in many places, and so it isn’t that this place is bad for not offering more, it is more that it is only “better by comparison to many alternatives” while still failing to hit the ideal. (And maybe you just yearn really hard for something more ideal.)
So in my model (where 2 is true, 1 is false for many (and maybe even for you), and 3 is true)… your whole stag hunt concept, applied here, suggests to me that you’re “low key seeking to gain social permission” from lesswrong to drive out the rabbit hunters and silence the rabbit hunting teachers and make this place wildly different.
I think it would de facto (even if this is not what you intend) become a more normal (and normally bad) “place on the internet” full of people semi-mindlessly shrieking at each other by default.
If I might offer a new idea that builds on the above material: lesswrong is actually a pretty darn good hub for quite a few smaller but similar subcultures.
These subcultures often enable larger quantities of shared normative material, to be shared with much higher density in that little contextual bubble than is possible in larger and more porous discourse environments.
In my mind, Lesswrong itself has a potential function here as being a place to learn that the other subcultures exist, and/or audition for entry or invitation, and so on. This auditioning/discovery role seems highly compatible, to me, with the “rabbit hunting rationality improvement” function.
In my model, you could have a more valuable-for-others role here on lesswrong if you were more inclined to tolerantly teach, without demanding a “level” that is only required at all in order to meet your particular educational needs.
To restate: if you have needs that are not being met, perhaps you could treat this website as a staging area and audition space for more specific and more demanding subcultures that take lesswrong’s canon for granted while also tolerating and even encouraging variations… because it certainly isn’t the case that lesswrong is perfect.
(There’s a larger moral thing here: to use lesswrong in a pure way like this might harm lesswrong as all the best people sublimate away to better small communities. I think such people should sometimes return and give back so that lesswrong (in pure “smart person mental elbow grease” and also in memetic diversity) stays, over longer periods, on a trajectory of “getting less wrong over time”… though I don’t know how to get this to happen for sure in a way that makes it a Pareto improvement for returnees and noobs and so on. The institution design challenge here feels like an interesting thing to talk about maybe? Or maybe not <3)
...
So I think that Dragon Army could have been the place that worked the way you wanted it to work, and I can imagine different Everett branches off in the counter-factual distance where Dragon Army started formalizing itself and maybe doing security work for third parties, and so there might be versions of Earth “out there” where Dragon Army is now a mercenary contracting firm with 1000s of employees who are committed to exactly the stag hunting norms that you personally think are correct.
Personally, I would not join that group, but in the spirit of live-and-let-live I wouldn’t complain about it until or unless someone hired that firm to “impose costs” on me… then I would fight back. Also, however, I could imagine sometimes wanting to hire that firm for some things. Violence in service to the maintenance of norms is not always bad… it is just often the “last refuge of the incompetent”.
In the meantime, if some of the officers of that mercenary firm that you could have counter-factually started still sometimes hung out on Lesswrong, and were polite and tolerant and helped people build their rabbit hunting skills (or find subcultures that help them develop whatever other skills might only be possible to develop in groups), then that would be fine with me...
...so long as they don’t damage the “good hubness” of lesswrong itself while doing so (which in my mind is distinct from not damaging lesswrong’s explicitly epistemic norms because having well ordered values is part of not being wrong, and values are sometimes in conflict, and that is often ok… indeed it might be a critical requirement for positive sum pareto improving cooperation in a world full of conservation laws).
Here is a thing I wrote some years ago (this is a slightly cleaned up chat log, apologies for the roughness of exposition):
Yeah! This is great. This is the kind of detailed grounded cooperative reality that really happens sometimes :-)
If a person writes “I currently get A but what I really want is B”
...and then you selectively quote “I currently get A” as justification for summarizing them as being unlikely to want B...
...right after they’ve objected to you strawmanning and misrepresenting them left and right, and made it very clear to you that you are nowhere near passing their ITT...
...this is not “simplification.”
Apologizing for “over-simplifying,” under these circumstances, is a cop-out. The thing you are doing is not over-simplification. You are [not talking about simpler versions of me and my claim that abstract away some of the detail]. You are outright misrepresenting me, and in a way that’s reeeaaalll hard to believe is not adversarial, at this point.
It is at best falling so far short of cooperative discourse as to not even qualify as a member of the set, and at worst deliberate disingenuousness.
If a person wholly misses you once, that’s run-of-the-mill miscommunication.
If, after you point out all the ways they missed you, at length, they brush that off and continue confidently arguing with their cardboard cutout of you, that’s a bad sign.
If, after you again note that they’ve misrepresented you in a crucial fashion, they apologize for “over-simplifying,” they’ve demonstrated that there’s no point in trying to engage with them.
I find this unpromising, in light of the above.
I’m torn about getting into this one, since on one hand it doesn’t seem like you’re really enjoying this conversation or would be excited to continue it, and I don’t like the idea of starting conversations that feel like a drain before they even get started. In addition, other than liking my other comment on this post, you don’t really know me and therefore I don’t really have the respect/trust resources I’d normally lean on for difficult conversations like this (both in the “likely emotionally significant” and also “just large inferential distances with few words” senses).
On the other hand I think there’s something very important here, both on the object level and on a meta level about how this conversation is going so far. And if it does turn out to be a conversation you’re interested in having (either now, or in a month, or whenever), I do expect it to be actually quite productive.
If you’re interested, here’s where I’m starting:
Jennifer has explicitly stated that at this point her goal is to help you. This doesn’t seem to have happened. While it’s important to track possibilities like “Actually, it’s been more helpful than it looks”, it looks more like her attempt(s) so far have failed, and this implies that she’s missing something.
Do you have a model that gives any specific predictions about what it might be? Regardless of whether it’s worth the effort or whether doing so would lead to bad consequences in other ways, do you have a model that gives specific predictions of what it would take to convey to her the thing(s) she’s missing such that the conversation with her would go much more like you think it should, should you decide it to be worthwhile?
Would you be interested in hearing the predictions my models give?
I don’t have a gearsy model, no. All I’ve got is the observations that:
Duncan’s post objects to a cluster of things X, Y, and Z
Jennifer’s response seems to me to state that X, Y, and Z are either not worth objecting to or possibly are actually good
Jennifer’s response exhibits X, Y, and Z in substantial quantity (which, to be fair, is consistent with principled disagreement, i.e. is not a sign of hypocrisy or lack-of-skill or whatever)
Duncan’s objections to X, Y, and Z within Jennifer’s pushback are basically falling on deaf ears, resulting in Jennifer adding more X, Y, and Z in subsequent responses
As is to be expected, given that the whole motivation for the OP was “LessWrong keeps indulging in and upvoting X, Y, and Z,” Jennifer’s being upvoted.
I’m interested in hearing both your model and your predictions. Perhaps a timescale of days-weeks is better than a timescale of hours-days.
There’s a lot here, and I’ve put in a lot of work writing and rewriting. After failing for long enough to put things in a way that is both succinct and clear, I’m going to abandon hopes of the latter and go all in on the former. I’m going to use the minimal handles for the concepts I refer to, in a way similar to using LW jargon like “steelman” without the accompanying essays, in hopes that the terms are descriptive enough on their own. If this ends up being too opaque, I can explicate as needed later.
Here’s an oversimplified model to play with:
Changing minds requires attention, and bigger changes require more attentions.
Bidding for bigger attention requires bigger respect, or else no reason to follow.
Bidding for bigger respect requires bigger security, or else not safe enough to risk following.
Bidding for that sense of security requires proof of actual security, or else people react defensively, cooperation isn’t attended to, and good things don’t happen
GWS took an approach of offering proof of security and making fairly modest bids for both security and respect. As a result, the message was accepted, but it was fairly restrained in what it attempted to communicate. For example, GWS explicitly says “I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the question-the-validity-of-your-thesis variety).”
Jennifer, on the other hand, went full bore, commanding attention to places which demand lots of respect if they are to be followed, while offering little in return*. As a result, accepting this bid also requires a large degree of security, and she offered no proof that her attacks on Duncan’s ideas (it feels weird addressing you in the third person given that I am addressing this primarily to you, but it seems like it’s better looked at from an outside perspective?) would be limited to that which wouldn’t harm Duncan’s social standing here. This makes the whole bid very hard to accept, and so it was not accepted, and Duncan gave high heat responses instead.
Bolder bids like that make for much quicker work when accepted, so there is good reason to be as bold as your credit allows. One complicating factor here is that the audience is mixed, and overbidding for Duncan himself doesn’t necessarily mean the message doesn’t get through to others, so there is a trade off here between “Stay sufficiently non-threatening to maintain an open channel of cooperation with Duncan” and “Credibly convey the serious problems with Duncan’s thesis, as I see them, to all those willing to follow”.
Later, she talks about wanting to help Duncan specifically, and doesn’t seem to have done so. There are a few possible explanations for this.
1) When she said it, there might have been an implied “[I’m only going to put in a certain level of work to make things easy to hear, and beyond that I’m willing to fail]”. In this branch, the conversation between Duncan and Jennifer is going nowhere unless Duncan decides to accept at least the first bid of security. If Duncan responds without heat (and feeling heated but attempting to screen it off doesn’t count), the negotiation can pick up on the topic of whether Jennifer is worthy of that level of respect, or further up if that is granted too.
2) It’s possible that she lacks a good and salient picture of what it looks like to recover from over-bidding, and just doesn’t have a map to follow. In this branch, demonstrating what that might look like would likely result in her doing it and recovering things. In particular, this means pacing Duncan’s objections without (necessarily) agreeing with them until Duncan feels that she has passed his ITT and trusts her intent to cooperate and collaborate rather than to tear him down.
3) It could also be that she’s got her own little hang up on the issue of “respect”, which caused a blind spot here. I put an asterisk there earlier, because she was only showing “little respect” in one sense, while showing a lot in another. If you say to someone “Lol, your ideas are dumb”, it’s not showing a lot of respect for those ideas of theirs. To the extent that they afford those same ideas a lot of respect, it sounds a lot like not respecting them, since you’re also shitting on their idea of how valuable those ideas are and therefore their judgement itself. However, if you say to someone “Lol, your ideas are dumb” because you expect them to be able to handle such overt criticism and either agree or prove you wrong, then it is only tentatively disrespectful of those ideas and exceptionally and unusually respectful of the person themselves.
She explicitly points at this when she says “Duncan is a special case. I’m not treating him like a student, I’m treating him like an equal”, and then hints at a blind spot when she says (emphasis her own) “who should be able to manage himself and his own emotions”—translating to my model, “manage himself and his emotions” means finding security and engaging with the rest of the bids on their own merits unobstructed by defensive heat. “Should” often points at a willful refusal to update one’s map to what “is”, and instead responding to it by flinching at what isn’t as it “should” be. This isn’t necessarily a mistake (in the same way that flinching away from a hot stove isn’t a mistake), and while she does make other related comments elsewhere in the thread, there’s no clear indication of whether this is a mistake or a deliberate decision to limit her level of effort there. If it is a mistake, then it’s likely “I don’t like having to admit that people don’t demonstrate as much security as I think they should, and I don’t wanna admit that it’s a thing that is going to stay real and problematic even when I flinch at it”. Another prediction is that to the extent that it is this, and she reads this comment, this error will go away.
I don’t want to confuse my personal impression with the conditional predictions of the model itself, but I do think it’s worth noting that I personally would grant the bid for respect. Last time I laughed off something that she didn’t agree should be laughed off, it took me about five years to realize that I was wrong. Oops.
Just checking, what are X, Y and Z?
(I’m interested in a concrete answer but would be happy with a brief vague answer too!)
(Added: Please don’t feel obliged to write a long explanation here just because I asked, I really just wanted to ask a small question.)
The same stuff that’s outlined in the post, both up at the top where I list things my brain tries to do, and down at the bottom where I say “just the basics, consistently done.”
Regenerating the list again:
Engaging in, and tolerating/applauding those who engage in:
Strawmanning (misrepresenting others’ points as weaker or more extreme than they are)
Projection (speaking as if you know what’s going on inside other people’s heads)
Putting little to no effort into distinguishing your observations from your inferences/speaking as if things definitely are what they seem to you to be
Only having or tracking a single hypothesis/giving no signal that there is more than one explanation possible for what you’ve observed
Overstating the strength of your claims
Being much quieter in one’s updates and oopses than one was in one’s bold wrongness
Weaponizing equivocation/doing motte-and-bailey
Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth
This is not an exhaustive list.
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as not playing stag” would mean that all of lesswrong fails at the stag hunt, right?
And it might be the case that a single person failing to play stag could consist of them committing even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)
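To check that I’ve got the mechanism right, here is a toy rendering of the all-or-nothing model I think I’m hearing (the 99% reliability figure and the group size below are numbers I made up purely for illustration, not anything from the post):

```python
# Toy sketch of the all-or-nothing reading of the stag hunt
# (my own illustration; the numbers are invented, not from the post).

def group_catches_stag(plays_stag: list[bool]) -> bool:
    """The group succeeds only if literally everyone plays stag."""
    return all(plays_stag)

# Even very reliable individuals almost never all cooperate at once
# when the group is large: with 1000 people who each avoid every
# listed sin 99% of the time, the chance that *all* of them do is
# 0.99 ** 1000, roughly 4e-5, and it shrinks as the community grows.
p_individual = 0.99
n_people = 1000
print(p_individual ** n_people)  # ~4.3e-05
```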
Also, what you’re calling “projection” there is not the standard model of projection, I think? And my understanding is that the standard model of projection is sort of explicitly something people can’t choose not to do, by default. In the standard model of projection, it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(
(For myself, I try not to assume I even know what’s happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition.)
The practical upshot here, to me, is that if the models you’re advocating here are true, then it seems to me like lesswrong will inevitably fail at “hunting stags”.
...
And yet it also seems like you’re exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then… maybe we will eventually all play stag and thus eventually, as a group, catch a stag?
So under the models that you seem to me to have offered, the (numerous individual) costs won’t buy any (group) benefits? I think?
There will always inevitably be a fly in the ointment… a grain of sand in the chip fab… a student among the masters… and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?
And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!
And that’s (in my book) quite good… even if it means we will always fail at hunting stags.
...
The thing I think that’s good about lesswrong has almost nothing to do with bringing down a stag on this actual website.
Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can “do more good thinking” in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.
I’m (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time… Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).
You’re against “engaging in, and tolerating/applauding” lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.
Am I missing something? What?
I am confused by a theme in your comments. You have repeatedly chosen to express that the failure of a single person completely destroys all the value of the website, even going so far as to quote ridiculous numbers (on the order of 1e−18 [1]) in support of this.
The only model I have for your behavior that explains why you would do this, instead of assuming something like Duncan believing something like “The value of C cooperators and D defectors is min(0, C−D²)”, is that you are trying to make the argument look weak. If there is another reason to do this, I’d appreciate an explanation, because this tactic alone is enough to make me view the argument as likely adversarial.
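Taking that hypothetical formula at face value (this is purely my own illustration of how it behaves, not a claim about what Duncan actually believes), the contrast with the single-point-of-failure reading looks like this:

```python
# Toy comparison of two payoff readings for C cooperators and D defectors.
# The specific numbers are made up for illustration only.

def value_single_point_of_failure(c: int, d: int) -> int:
    """Reading where any defector at all destroys all value."""
    return c if d == 0 else 0

def value_quoted_formula(c: int, d: int) -> int:
    """The hypothetical formula quoted above: min(0, C - D**2)."""
    return min(0, c - d ** 2)

for c, d in [(1000, 0), (1000, 1), (1000, 10), (1000, 40)]:
    print(c, d, value_single_point_of_failure(c, d), value_quoted_formula(c, d))

# Under the quoted formula, one defector among a thousand cooperators
# changes nothing (min(0, 999) == 0); things only go badly negative
# once D**2 outweighs C (e.g. D = 40 gives min(0, -600) == -600).
```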
No, and if you had stopped there and let me answer rather than going on to write hundreds of words based on your misconception, I would have found it more credible that you actually wanted to engage with me and converge on something, rather than that you just really wanted to keep spamming misrepresentations of my point in the form of questions.
Epistemic status: socially brusque wild speculation. If they’re in the area and it wouldn’t be high effort, I’d like JenniferRM’s feedback on how close I am.
My model of JenniferRM isn’t of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite’s comment below, they say:
My model of the model which outputs words like these is that they’re very confident in their own understanding—viewing themself as a “teacher” rather than a student—and are trying to lead someone who they think doesn’t understand by the nose through a conversation which has been plotted out in advance.
Plausible to me. (Thanks.)