Nod. I do agree with that.
The Review Phase is a bit of an evolving process – I’m expecting us to learn over the course of the month what sort of reviews are most helpful.
One explicit update I made since last week is shifting the Review Phase from “write up whether you think this post should be included in the book” to “focus on providing information to other people who are evaluating the post.”
The “judge” mindset seemed to be outputting less useful content than the “provide information to help evaluate” mindset.
I do think including notes about what you think should be included in the book is still valuable, but it makes more sense to do that after you’ve spent some time in “evaluate and add information” mode.
The process isn’t finished yet – it’ll hopefully complete sometime in January of next year.
The latest version of the “offtopic comment” feature that the team had chatted about was a “collapse” feature, where some comments are just forcibly collapsed with a flag. This is a generic tool that admins and some authors have access to. It doesn’t really require anything automatic – when you notice such a thread, you can just close it. (The thread still appears in the comment list, just collapsed as if it had low karma, possibly with a reason displayed.)
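A minimal sketch of how the collapse flag described above might work, assuming a toy comment model. All names and the karma threshold here are invented for illustration – this is not the actual LessWrong data model:

```python
# Hypothetical sketch of a "forcibly collapse" moderation flag.
# A flagged comment still appears in the list, it just renders collapsed,
# the same way a low-karma comment would.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    author: str
    body: str
    karma: int = 0
    collapsed_by_mod: bool = False       # set manually by admins/authors
    collapse_reason: Optional[str] = None  # optionally displayed to readers

LOW_KARMA_THRESHOLD = -5  # invented value

def is_collapsed(c: Comment) -> bool:
    # Collapsed if a moderator flagged it OR its karma is low;
    # either way it stays in the comment list.
    return c.collapsed_by_mod or c.karma < LOW_KARMA_THRESHOLD

c = Comment("alice", "off-topic tangent", karma=3)
c.collapsed_by_mod = True
c.collapse_reason = "Off-topic; see moderation note"
print(is_collapsed(c))  # True: flagged, despite positive karma
```

The point of the design is that nothing is automatic – the flag is just a manual override on the same rendering path that low-karma collapsing already uses.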
The LW team has been trying out the “bolded unread posts” feature for a few days as an admin-only setting. I think pretty much everyone isn’t liking it.
But I personally am liking the fact that most posts aren’t grey, and I’m finding myself wondering whether it’s even that important to highlight unread posts. Obviously there’s some value to it, but:
a) a post being read isn’t actually that much evidence about whether I want to read it again – I find myself clicking on old posts about as often as new posts. (This might be something you could concretely look into with analytics)
b) if I don’t want to read a post, marking it as read is sort of annoying
c) I still really dislike having most of my posts be grey
d) it’s really hard to make an “unread” variant that doesn’t scream out for disproportionate attention.
(I suppose there’s also an option for this to be a user-configurable setting, since most users don’t read so many posts that they all show up grey, and the few who do could maybe just manually turn it off)
Reading this thread in the future, I find myself kinda wishing for ways comment threads like this could be auto-collapsed or resolved or something after reaching their conclusion.
The original motivating example for this post doesn’t actually quite fit into the lens the post oriented around. The post focuses on “disagreements between particular people”. But there’s a different set of issues surrounding “disagreements in a zeitgeist.”
Groups update slower than individuals. So if you write a brilliant essay saying “this core assumption or frame of your movement is wrong”, you may not only have to wait for individual people to go “oh, I see, that makes sense”, but even longer for that to become common knowledge – enough that you won’t reliably see group members acting under the old assumptions.
(Again, this is an observation about the status quo, not about what is necessarily possible if we grabbed all the low hanging fruit. But in the case of groups I’m more pessimistic about things improving as much, unless the group has strong barriers to entry and strong requirements of “actually committing serious practice to disagreement resolution.”)
This certainly seems important (I do think this is a key value the community provides). But it is importantly different from “the rationality content of the community is directly helpful for people-in-general.” If it were just “people who get you”, this wouldn’t obviously be more or differently important than other random subcultures.
Oh, it might have pattern-matched to spam initially. But, I think it exists now?
The OP comment was optimizing for “improving my understanding of the domain” more than for direct advice about how to change the post.
(I’m not necessarily expecting the points and confusions there to resolve within the next month – it’s possible that you’ll reflect on it a bit and then figure out a slightly different orientation to the post, that distills the various concepts into a new form. Another possible outcome is that you leave the post as-is for now, and then in another year or two, after mulling things over, someone writes a new post doing a somewhat different thing, that becomes the new referent. Or, it might just turn out that my current epistemic state wasn’t that useful. Or other things)
I think there’s sort of a two-step process that goes into naming things (which, ironically or appropriately, maps directly onto the post) – first figuring out “okay, what actually is this phenomenon, and what name most accurately describes it?” and then, separately, “okay, what sorts of names are reliably going to make people angry and distract from the original topic if you apply them to people, and are there alternative names that still cleave closely to the truth?”
(my process for generating names that risk offending is something like a multi-step Babble and Prune, where I generate names aiming to satisfice on “a good explanation of the true phenomenon” and “not likely to be unnecessarily distracting”, until I have a name that satisfies both criteria)
I haven’t tried generating a maximally good name for Jumbled yet since I wasn’t sure this was even carving reality the right way.
But, like, it’s not an accident that ‘jumbled’ is more likely to offend people than ‘contextualized’. I do, in fact, think worse of people who have jumbled communication than deliberately contextualized communication. (compare “Virtue Signalling”, which is an important term but is basically an insult except among people who have some kind of principled understanding that “Yup, it turns out some of the things I do had unflattering motives and I’ve come to endorse that, or endorse my current [low] degree of prioritizing changing it.”)
I am a conversation consequentialist and think it’s best to find ways of politely pointing out unflattering things about people in ways that don’t make them defensive. But, it might be that the correct carving of reality includes some unflattering descriptions of people and maybe the best you can do is minimize distraction-damage.
Here’s a review of mine that I think is pretty representative of the sort of review that I, personally, am most excited about.
This post seems to be making a few claims, which I think can be evaluated separately:
1) Decoupling norms exist
2) Contextualizing norms exist
3) Decoupling and contextualizing norms are useful to think of as opposites (either as a dichotomy or a spectrum)
(i.e. there are enough people using those norms that it’s a useful way to carve up the discussion-landscape)
There’s a range of “strong” / “weak” versions of these claims – decoupling and/or contextualization might be principled norms that some people explicitly endorse, or they might just be clusters of tendencies people have sometimes.
In the comments of his response post, Zack Davis noted:
It’s certainly possible that there’s a “general factor” of contextualizing—that people systematically and non-opportunistically vary in how inferentially distant a related claim has to be in order to not create an implicature that needs to be explicitly canceled if false. But I don’t think it’s obvious, and even if it’s true, I don’t think it’s pedagogically wise to use a politically-motivated appeal-to-consequences as the central case of contextualizing.
And, reading that, I think it may actually be the opposite – there is a general factor of “decoupling”, not contextualizing. By default, people use language for a bunch of reasons all jumbled together, and it’s a relatively small set of people who have the deliberate-decoupling tendency, skill, and/or norm of “checking individual statements to see if they make sense.”
Upon reflection, this is actually more in line with the original Nerst article, which used the terms “Low Decoupling” and “High Decoupling”, which less strongly conveys the idea of “contextualizer” being a coherent thing.
On the other hand, Nerst’s original post does make some claims about Klein being the sort of person (a journalist) who is “definitively a contextualizer, as opposed to just ‘not a decoupler’”, here:
While science and engineering disciplines (and analytic philosophy) are populated by people with a knack for decoupling who learn to take this norm for granted, other intellectual disciplines are not. Instead they’re largely composed of what’s opposite the scientist in the gallery of brainy archetypes: the literary or artistic intellectual.
This crowd doesn’t live in a world where decoupling is standard practice. On the contrary, coupling is what makes what they do work. Novelists, poets, artists and other storytellers like journalists, politicians and PR people rely on thick, rich and ambiguous meanings, associations, implications and allusions to evoke feelings, impressions and ideas in their audience. The words “artistic” and “literary” refers to using idea couplings well to subtly and indirectly push the audience’s meaning-buttons.
To a low-decoupler, high-decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a high-decoupler the low-decouplers insistence that this isn’t possible looks like naked bias and an inability to think straight. This is what Harris means when he says Klein is biased.
Although they’re interwoven, I think it might be worth distinguishing some subclaims here (not necessarily made by Nerst or Leong, but I think implied and worth thinking about):
There exists a class of general storytelling contextualists.
There exist PR-people/politicians/activists who wield contextual practice as a tool or weapon.
There exist “principled contextualizers” who try to even-handedly come to good judgments that depend on context.
My Epistemic State
There’s a set of fairly concrete “empirical” questions here, which are basically: “if you did a bunch of factor analysis of discussions, would decoupling and/or contextualization and/or any of the specific contextual subcategories listed above have major predictive power?”
The experiments you’d run for this might be expensive but not very confusing.
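To make the factor-analysis framing concrete, here is a toy simulation – not real data. The feature loadings, the rating features, and the choice of scikit-learn’s FactorAnalysis are all my own assumptions; it only shows what “a general decoupling factor exists” would look like statistically:

```python
# Toy sketch: if you rated a corpus of comments on several conversational
# features, factor analysis would tell you whether a single "decoupling"
# factor explains most of the variance in those ratings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 comments whose style is driven by ONE latent "decoupling" trait.
latent = rng.normal(size=(200, 1))
# Invented loadings for four invented features: the first two mark
# decoupling moves, the last two mark contextualizing/jumbled moves.
loadings = np.array([[1.0, 0.8, -0.9, -0.7]])
ratings = latent @ loadings + 0.3 * rng.normal(size=(200, 4))

# Fit two factors; if a general factor exists, one factor's squared
# loadings should dominate the total.
fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
sq = (fa.components_ ** 2).sum(axis=1)
dominance = sq.max() / sq.sum()
print(f"dominant factor carries {dominance:.0%} of the loading mass")
```

The real version of this experiment would of course need humans (or a classifier) to produce the per-comment ratings, which is the expensive part – the statistics afterward are routine.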
I would currently guess:
“Decoupling factor” definitely exists and is meaningful
Storytelling contextualists exist and are meaningful (though not necessarily especially useful to contrast with decouplers)
PR-ists who wield context as tool/weapon definitely exist (and decoupling is often relevant to their plans, so they have developed tools that allow them to modulate the degree to which decoupling fits into the conversational frame)
I think I could name a few people at least attempting to be “fair, principled contextualists”, at least in some circumstances. I am less confident that this is a real thing, because “secretly they’re just really effective or subtle PR-ists, either intentionally or not” is a pretty viable alternative.
I have a remaining confusion, which is something like “what exactly is a contextualizer?”. I feel like I have a crisp definition of “decoupling”. I don’t have that for contextualizers. Are the three subcategories listed above really ‘relatives’ or are they just three different groups doing different things? Is it meaningful to put these on a spectrum with decouplers on the other side?
“How much you and others are willing to think about the consequences of what is said, separate from its truth value.”
Which sounds like a plausibly good definition, that maybe applies to all three of the subcategories. But I feel like it’s not quite the natural definition for each individual subcategory. (Rather, it’s something a bit downstream of each category definition)
“Jumbled” vs “Contextual”
“High decoupling” and “low decoupling” are still pretty confusing terms, even if you get rid of any notion of “low decoupling” being a cogent thing. It occurred to me, writing this review, that you might replace the word “contextual” with “jumbled”.
Contextual implies some degree of principled norms. Jumbled points more towards “the person is using language for a random mishmash of strategies all thrown together.” (Politicians might sometimes be best described as “jumbled”, and sometimes as “principled” [but, not necessarily good principles, i.e. ‘I will deliberately say whatever causes my party to win’]).
That’s what I’ve got for now.
One of the key ideas here is that I’d like posts to have gotten someone to “look into the dark”. If the post wasn’t as useful as it seemed, how would we know? If 10 years from now you no longer endorsed the post, why might that be?
There should be a post coming up soon that goes into more examples of how to do Reviews. It’s a bit of a tough question because different posts benefit from different types of reviews.
A thing that I think is commonly useful is asking “what are the actual claims this post is making”, and listing them succinctly, and writing up some thoughts about how we could actually empirically check if those claims are true. (Even if we don’t actually run the experiment, I think operationalizing what observations we’d expect in the world is helpful for evaluating when/why/whether the post is valid)
I do still think there’s a lot of legitimately hard stuff here. In the past year, in some debates with Habryka and with Benquo, I found a major component (of my own updating) had to do with giving their perspectives time to mull around in my brain, as well as some kind of aesthetic component. (i.e. if one person says “this UI looks good” and another person says “this UI looks bad”, there’s an aspect of that that doesn’t lend itself well to “debate”. I’ve spent the past 1.5 years thinking a lot about Aesthetic Doublecrux, which much of this sequence was laying the groundwork for)
Somewhat replying to both romeo and bendini elsethread:
Disagreements aren’t always trivial to resolve, but if you’ve been actively debating an issue for a month and zero progress has been made, either the resolution process is broken or someone is doing something besides putting maximum effort into resolving the disagreement.
I propose an alternative model. People don’t resolve disagreements because there are no incentives to resolve them. In fact the incentives often cut the other way.
I definitely have a sense that rationalists by default aren’t that great at disagreeing for all the usual reasons (not incentivized to, don’t actually practice the mental moves necessary to do so productively), and kinda the whole point of this sequence is to go “Yo, guys, it seems like we should actually be able to be good at this?”
And the problem in part is that this requires multiple actors – my sense is that a single person trying their best to listen/learn/update can only get halfway there, or less.
The exact nature of what you can accomplish with only one person trying to productively disagree depends on the situation. It may be that that particular person can arrive at the truest nearby beliefs reasonably well, but if you need agreement, or if the other person is the final decision-maker, “one person coming to correct beliefs” may not solve the problem.
Coming to Correct Beliefs vs Political Debate
I think one of the things going on is that it takes a bit of vulnerability to switch from “adversarial political mode” (a common default) to “actually be open to changing your mind.” There is a risk that if you try earnestly to look at the evidence and change your mind, but your partner is just pushing their agenda, and you don’t have some skills re: “resilience to social pressure”, then you may be sort of just ceding ground in a political fight without even successfully improving truthseeking.
(sometimes, this is a legitimate fear, and sometimes it’s not but it feels like it is, and noticing that in the moment is an important skill)
I’ve been on both sides of this, I think. Sometimes I’ve found myself feeling really frustrated that it feels like my discussion partner isn’t listening or willing to update, and I find myself sort of leaning into an aggressive voice to try and force them to listen to me. And then they’re like, “Dude, you don’t sound like you’re actually willing to listen to me or update,” and then I was sheepishly like… “oh, yeah, you’re right.”
It seems like having some kind of mutually-trustable procedure for mutual “disarmament” would be helpful.
For instance, the concept of “society isn’t made nice for humans” is not new, but having moloch and inadequate equilibria as concepts still pushed forward the discourse
Nod. And in particular, I saw this post as something like “taking the concept of ‘privilege’, and fleshing out the gears of one particular facet of it.” (Privilege also being a concept that’s interwoven with some broader narratives or political maneuvering that I don’t fully endorse, but that I’ve nonetheless found quite useful.)
This is an interesting comment. Some thoughts after reflecting a bit:

A while ago you wrote a comment saying something like “deliberately practice deliberate practice until you get really good at identifying good feedback loops, and working with them.” I found that fairly inspiring at the time.
I didn’t ever really dedicate myself to doing that thoroughly enough to have a clear opinion on whether it’s The Thing. I think I put maybe… 6 hours into deliberate practice, with the intent to pay attention to how deliberate practice generalized. The value I got out of that was, like, commensurate with the “serious practice” noted here (i.e. if I kept that up consistently I’d probably skill up at a rate of 5–10% per year, and at the time that I did it, my output in the domains-in-question was maybe 10–20% higher, but more costly), but it required setting aside time for hard cognitive labor that feels in short supply.
There were at least some domains (a particular videogame I tried to deliberately practice) that seemed very surprisingly hard to improve at.
I do have a general sense (from this past year as well as previous experience) that in many domains, there are some rapid initial gains to be had for the first 20 hours or so.
None of this feels like “things are actually easier than described in the karate kid essay.” I would agree with the claim “the karate kid essay sort of implies you just have to try hard for a long time, and actually many of the gains come from actually having models of how things work and you should be able to tell if you’re improving.” But that doesn’t make things not hard.
It seems plausible that if you gain the generalized Deliberate Practice skill a lot of things become much easier, and that it’s the correct skill to gain early in the skill-tree. But, like, it’s still pretty hard yo.
I also agree that most people aren’t actually even trying to get better at disagreement, and if they were doing that much at all that’d make a pretty big difference. (“years” is what I think the default expectation should be among people that aren’t really trying)
The Lesswrong comment guidelines say, “Aim to explain, not persuade.” Is this a method by which we cut out our own chests?
I’m curious how this question parses for Vaniver
I initially wanted “bold everywhere” because it helped my brain reliably parse things as “this is a bold line” instead of “this is a line with some bold parts but you have to hunt for them”. But, after experimenting a bit, I started feeling that having bold elements semi-randomly distributed across the lines made it a lot busier.