I’m feeling demoralized by Ben and Scott’s comments (and Christian’s), which I interpret as being primarily framed as “in opposition to the OP and the worldview that generated it,” and which seem to me to be not at all in opposition to the OP, but rather to something like preexisting schemas that had the misfortune to be triggered by it.

Both Scott’s and Ben’s thoughts ring to me as almost entirely true, and also separately valuable, and I have far, far more agreement with them than disagreement, and they are the sort of thoughts I would usually love to sit down and wrestle with and try to collaborate on. I am strong upvoting them both.

But I feel caught in this unpleasant bind where I am telling myself that I first have to go back and separate out the three conversations—where I have to prove that they’re three separate conversations, rather than it being clear that I said “X” and Ben said “By the way, I have a lot of thoughts about W and Y, which are (obviously) quite close to X” and Scott said “And I have a lot of thoughts about X’ and X″.”

Like, from my perspective it seems that there are a bunch of valid concerns being raised that are not downstream of my assertions and my proposals, and I don’t want to have to defend against them, but feel like if I don’t, they will in fact go down as points against those assertions and proposals. People will take them as unanswered rebuttals, without noticing that approximately everything they’re specifically arguing against, I also agree is bad. Those bad things might very well be downstream of e.g. what would happen, pragmatically speaking, if you tried to adopt the policies suggested, but there’s a difference between “what I assert Policy X will degenerate to, given [a, b, c] about the human condition” and “Policy X.”
(Jim made this distinction, and I appreciated it, and strong upvoted that, too.)

And for some reason, I have a very hard time mustering any enthusiasm at all for both Ben and Scott’s proposed conversations while they seem to me to be masquerading as my conversation. Like, as long as they are registering as direct responses, when they seem to me to be riffs.
I think I would deeply enjoy engaging with them, if it were common knowledge that they are riffs. I reiterate that they seem, to me, to contain large amounts of useful insight.
I think that I would even deeply enjoy engaging with them right here. They’re certainly on topic in a not-even-particularly-broad-sense.
But I am extremely tired of what-feels-to-me like riffs being put on [my idea’s tab], and of the effort involved in separating out the threads. And I do not think it is a result of e.g. a personal failure to be clear in my own claims, such that if I wrote better or differently this would stop happening to me. I keep looking for a context where, if I say A and it makes people think of B and C, we can talk about A and B and C, and not immediately lose track of the distinctions between them.

EDIT: I should be more fair to Scott, who did indeed start his post out with a frame pretty close to the one I’m requesting. I think I would take that more meaningfully if I were less tired to start with. But also it being “a response to Scott’s model of Duncan’s beliefs about how epistemic communities work, and a couple of Duncan’s recent Facebook posts” just kind of bumps the question back one level; I feel fairly confident that the same sort of slippery rounding-off is going on there, too (since, again, I almost entirely agree with his commentary, and yet still wrote this very essay). Our disagreement is not where (I think) Ben and Scott think that it lies.
I don’t know what to do about any of that, so I wrote this comment here. Epistemic status: exhausted.
Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm?
Almost everyone’s response to COVID, including institutions, to the tune of many preventable deaths.
Almost everything produced by the red tribe in 2020, to the tune of significant damage to the social fabric.
Your claims about the ramifications of my policy are straightforwardly false, because you have misunderstood / mischaracterized / strawmanned the policy I am advocating.

You are failing to pass the ITT of the post, and to take seriously its thesis, and thus your responses are aimed at tangents rather than cruxes. The objections you are raising are roughly analogous to “but if you outlaw dueling, then people will get killed in duels when they refuse to shoot back!”
I explicitly request that you actually try to pass the ITT of the post, so that we can be in a place where our disagreement is actually useful. Or, if you’d rather have this other, different conversation (which would be fine), at least acknowledge that you are changing the subject, and riffing rather than directly responding.
(The riff being something like, “instead of discussing the policy Duncan’s actually proposing, I’d like to discuss the ramifications of a likely degeneration of it, because I suspect his proposal would degenerate in practice and what we would see as a result is X.”)
You’re still missing the thesis. Apologies for not having the spoons to try restating it in different words, but I figured I could at least politely let you know.

Edit: a good first place to look might be “what do I think is different for me, Christian, than for people with substantially less discernment and savviness?”
I think you’re underweighting a crucial part of the thesis, which is that it doesn’t matter what the candidate secretly knows or would admit if asked. A substantial portion of the listeners just … get swayed by the strong claim. The existence of savvy listeners who “get it” and “know better” and know where to put the hedges and know which parts are hyperbole doesn’t change that fact. And there is approximately never a reckoning.
locally seem fairly costly
This seems highly variable person-to-person; Nate Soares and Anna Salamon each seem to pay fairly low costs/no costs for many kinds of disgust, and are also notably each doing very different things than each other. I also find that a majority of my experiences of disgust are not costly for me, and instead convert themselves by default into various fuels or resolutions or reinforcement-rewards. There may be discoverable and exportable mental tech re: relating productively to disgust-that-isn’t-particularly-actionable.
One last point for Zack to consider:
I just … don’t see how obfuscating my thoughts through a gentleness filter actually helps anyone?
You could start by thinking “okay, I don’t understand this, but a person I explicitly claim to like and probably have at least a little respect for is telling me to my face that not-doing it makes me uniquely costly, compared to a lot of other people he engages with, so maybe I have a blind spot here? Maybe there’s something real where he’s pointing, even if I don’t see the lines of cause and effect?”
Plus, it’s disingenuous and sneaky to act like what’s being requested here is that you “obfuscate your thoughts through a gentleness filter.” That strawmanning of the actual issue is a rhetorical trick that tries to win the argument preemptively through framing, which is the sort of thing you claim to find offensive, and to fight against.
Hm. For the record, I find this thought to be worth chewing on, so thank you.
maybe the one giving offense should be nicer, but maybe the one taking offense shouldn’t have taken it personally?
So, in the framing of things as “taking offense” and “tone policing,” I sense an attempt to invalidate and delegitimize any possible criticism on the meta level: to have the hypothesis “Actually, Zack’s doing a straightforwardly bad thing on the regular with the adversarial slant of their pushback” start out already halfway to being dismissed.
I’m not “taking offense.” I’m not pointing at “your comment made me sad and therefore it was bad,” or “gosh, why did you use these words instead of these slightly different words which I’m arbitrarily declaring are better.”
I’m pointing at “your comment was exhausting, and could extremely easily have contained 100% of its value and been zero exhausting, and this has been true for many of the times I’ve engaged with you.” You have a habit of choosing an unnecessarily exhaustingly combative method of engagement when you could just as easily make the exact same points and convey the exact same information cooperatively/collaboratively; no substantial emotional or interpretive labor required.
This is not about “tone policing.” This is about the fundamental thrust of the engagement. “You’re wrong, and I’mm’a prove it!” vs. “I don’t think that’s right, can we talk about why?”
Eric Rogstad (who’s my mental exemplar of the virtue I’m pointing to here, though other people like Julia Galef and Benya Fallenstein also regularly exhibit it) could have pushed back every bit as effectively, and on every single detail, without being a dick. Eric Rogstad and Julia Galef and Benya Fallenstein are just as good as you at noticing wrongness that needs to be attacked, and they’re better than you at not alienating the person who produced the mostly-right thought in the first place, and disincentivizing them from bothering to share their thoughts in the future.
(I do not for one second buy your implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you’re being adversarial because you genuinely believe that’s the best way forward. I think that’s what you tell yourself to justify it, but you C L E A R L Y engage in this way with emotional zeal and joie de vivre. I posit that you want to be punchy-attacky, and I hypothesize that you tell yourself that it’s virtuous so that you don’t have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas.)
clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them
… conveniently ignoring, as if I didn’t say it and it doesn’t matter, my point about context being a real thing that exists. Your behavior is indistinguishable from that of someone who really wanted to be performatively incredulous, saw that if they included the obvious context they wouldn’t get to be, and decided to pretend they didn’t see it so they could still have their fun.
Exploring that line of discussion is potentially interesting!
I defy you to say, with a straight face, “a supermajority of rationalists polled would agree that the hypothesis which best explains my first response is that I was curiously and intrinsically motivated to collaborate with you in a conversation about whether we have different priors on human variation.”
I’m more motivated, etc.
It is precisely this mentality which lies behind 20% of why I find LessWrong a toxic and unsafe place, where e.g. literal calls for my suicide go unresponded to, but my objection to the person calling for my suicide results in multiple paragraphs of angry tirades about how I’m immoral and irrational. EDIT: This is unfair as stated; the incidents I am referring to are years in the past and I should not by default assume that present-day LessWrong shares these properties.
The fact that I have high sensitivity on this axis is no fault of yours, but I invite you to consider the ultimate results of a policy which punishes your imperfect allies, while doing nothing at all against the most outrageous offenders. If all someone knows is that one voted for Trump, one’s private dismay and internal reservations do nothing to stop the norm shift. You can’t rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.
And with that, you have fully exhausted the hope-for-finding-LessWrong-better-than-it-used-to-be that I managed to scrape together over the past three months. I guess I’ll try again in the summer.
Agreement with all of the above. I just don’t want to mistake [truth that can be extracted from thinking about a statement] for [what the statement was intended to mean by its author].
If you’re going to apply that much charity to everyone without fail, then I feel that there should be more than sufficient charity to not-object-to my comment, as well.
I do not see how you could be applying charity neutrally/symmetrically, given the above comment.
I’m applying the standard “treat each statement as meaning what it plainly says, in context.” In context, the top comment seems to me to be claiming that everyone without fail sacrifices honor for PR, which is plainly false. In context, my comment says if you’re about to assert that something is true of everyone without fail, you’re something like 1000x more likely to be wrong than to be right (given a pretty natural training set of such assertions uttered by humans in natural conversation, and not adversarially selected for).
Of the actual times that actual humans have made assertions about what’s universally true of all people, I strongly wager that they’ve been wrong 1000x more frequently than they’ve been right. Zack literally tried to produce examples to demonstrate how silly my claim was, and every single example that he produced (to be fair, he probably put all of ten seconds into generating the list, but still) is in support of my assertion, and fails to be a counterexample.
I actually can’t produce an assertion about all human actions that I’m confident is true. Like, I’m confident that I can assert that everything we’d classify as human “has a brain,” and that everything we’d classify as human “breathes air,” but when it comes to stuff people do out of whatever-it-is-that-we-label choice or willpower, I haven’t yet been able to think of something that everyone, without fail, definitely does.
Note that near-universals are ruled out by “everyone without fail.” I am in fact pointing, with my “helpful tip,” at statements beginning with everyone without fail. It is in fact not the case that any of the examples Zack started with are true of everyone without fail—there are humans who do not laugh, humans who do not tell stories, humans who do not shiver when cold, etc.
This point is not the main thrust of my counterobjection to Zack’s comment, which was more about the incentives created by various styles of engagement, but it’s worth noting.
My downvote here is not for TAG holding the hypothesis that the rationalist/LW bubble might be bad in various ways (this is an inoffensive hypothesis to hold, in my culture) but rather for its method of sly insinuation that tries to score a point without sticking its neck out and making a clear and falsifiable claim.
If I can be shown that I’ve misread TAG, I’ll remove the downvote.
I mean the willful misunderstanding of the actual point I was making, which I still maintain is correct, including the bit about many orders of magnitude (once you include the should-be-obvious hidden assumption that has now been made explicit).
The adversarial pretending-that-I-was-saying-something-other-than-what-I-was-clearly-saying (if you assign any weight whatsoever to obvious context) so as to make it more attackable and let you thereby express the performative incredulity you seemed to want to express, and needed more license for than a mainline reading of my words provided you.

I also object to “would be very bad” in the subjunctive … I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of “I apologize IF I offended anybody,” when one clearly did offend. This interaction has certainly taken my barely-sufficient-to-get-me-here motivation to “try LessWrong again” and quartered it. This thread has not fostered a sense of “LessWrong will help you nurture and midwife your thoughts, such that they end up growing better than they would otherwise.”
I would probably feel more willing to believe that your nitpicking was principled if you’d spared any of it for the top commenter, who made an even more ambitious statement than I (it being absolute/infinite).
You’re neglecting the unstated precondition that it’s the type of sentence that would be generated in the first place, by a discussion such as this one. You’ve leapt immediately to an explicitly adversarial interpretation and ruled out meaning that would have come from a cooperative one, rather than taking a prosocial and collaborative approach to contribute the exact same information.
(e.g. by chiming in to say “By the way, it seems to me that Duncan is taking for granted that readers will understand him to be referring to the set of such sentences that people would naturally produce when talking about culture and psychology. I think that assumption should be spelled out rather than left implicit, so that people don’t mistake him for making a (wrong) claim about genuine near-universals like ‘humans shiver when cold’ that are only false when there are e.g. extremely rare outlier medical conditions.” Or by asking something like “hey, when you say ‘a sign’ do you mean to imply that this is ironclad evidence, or did you more mean to claim that it’s a strong hint? Because your wording is compatible with both, but I think one of those is wrong.”)

The adversarial approach you chose, which was not necessary to convey the information you had to offer, tends to make discourse and accurate thinking and communication more difficult, rather than less, because what you’re doing is introducing an extremely high burden on saying anything at all. “If you do not explicitly state every constraining assumption in advance, you will be called out/nitpicked/met with performative incredulity; there is zero assumption of charity and you cannot e.g. trust people to interpret your sentences as having been produced under Grice’s maxims (for instance).”
The result is an overwhelming increase in the cost of discourse, and a substantial reduction in its allure/juiciness/expected reward, which has the predictable chilling effect. I absolutely would not have bothered to make my comment if I’d known your comment was coming, in the style you chose to use, and indeed now somewhat regret trying to take part in the project of having good conversations on LessWrong today.
If [everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A], can you [blame A for championing PR]?
Nope, given that condition. But also the “if” does not hold. You’re incorrect that [everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A], and I note as a helpful tip that if you find yourself typing a sentence about some behavioral trait being universal among humans with that degree of absolute confidence, you can take this as a sign that you are many orders of magnitude more likely to be wrong than right.
This seems true to me but also sort of a Moloch-style dynamic? Like “yep, I agree those are the incentives, and it’s too bad that that’s the case.”
I think another way to gesture at the distinction here is whether your success criterion is process-based or outcome-based.
If you’re “trying to do PR,” then you’re sort of hanging your hopes on a specific outcome—that people will hold you in high regard, say good things about you, etc. This opens you up to Goodharting, and various muggings and extortions, and sort of leaves you at the mercy of the most capricious or unreasonable member of the audience.
Whereas if you’re “trying to be honorable” (or some other similar thing), you’re attempting to engage in methods and processes that are likely to lead to good outcomes, according to your advance predictions, and which tend to produce social standing as a positive side effect. But you’re not optimizing for the social standing, except insofar as you’re contributing to a good and healthy society existing in the first place (and then slotting into it).
I see this (the thing I’m describing, which may or may not be as closely related to the thing Anna’s describing as I think it is) as sort of analogous to whether you do something like follow diplomatic procedures or use NVC (process-based), or do whatever-it-takes to make sure you don’t offend anybody (outcome-based). One of these is sort of capped and finite in a way I think is important, and the other is sort of infinitely vulnerable.
FWIW, I would be willing to cut it, if it makes the cut overall, such that the essay is shorter and primarily about the core concept and includes only enough Duncan-specific stuff to get that core concept across.