Well if you found out that tea actually didn’t cause cancer, would you be fine with people drinking tea?
In my experience that’s where most attempts to reach a mutual understanding fail. People refuse to entertain a hypothetical that contradicts their deeply held beliefs, period. Not just “those irrational people”, but basically everyone, you and me included. If the belief/alief in question is a core one, there is almost no chance that we will earnestly consider that it might be false, not in a single conversation, anyway.
An example:
“What if you found out that vaccines caused autism?” “They don’t, the only study claiming this was decisively debunked.” “But just imagine if they did” “Are you trolling? We know they don’t!”
An opposite example:
“What if you found out that vaccines didn’t cause autism?” “They do, it’s a conspiracy by the pharma companies and the government, they poison you with mercury.” “Just for the sake of argument, what if they didn’t?” “You are so brainwashed by the media, you need to open your eyes to reality!”
First of all, yep, the kind of map-territory distinction that enables one to even do the crux-checking move at all is reasonably sophisticated. And I suspect that some people, for all practical purposes, just can’t do it.
Second, even for those of us who can, in principle, execute that move, it gets harder, to the point of impossibility, as the conversation becomes more heated or the person becomes more triggered.
Third, when a person is in a public, low-nuance context, or is used to thinking in such contexts, they are likely to resist acknowledging that [x] is a crux for [y], because that can sound like an endorsement of [x] to a casual observer.
So there are some real difficulties here.
However, I think there are strategies that help in light of these difficulties.
In terms of doing this move yourself...
You can just practice this until it becomes habitual. In Double Crux sessions, I sometimes include an exercise that involves just doing crux-checking: taking a bunch of statements, isolating the [A] because [B] structure, and then checking whether [B] is a crux for [A], for you.
And certainly there are people around (me) who will habitually respond to some claim with “that would be a crux for me” / “that wouldn’t be a crux for me.”
In terms of helping your conversational partner do this move...
First of all, it goes a long way to have a spirit of open curiosity, where you are actually trying to understand where they are coming from. If a person expects that you’re going to jump on them and exploit any “misstep” they make, they’re not going to be relaxed enough to consider counterfactual-from-their-view hypotheticals. Sincerely offering your own cruxes often helps as a sign of good faith, but keep in mind that there is no substitute for just actually wanting to understand, instead of trying to persuade.
Furthermore, when a person is resistant to doing the crux-check, it is often because there is some bucket error or conflation happening, and if you step back and help them untangle it, that goes a long way. You should actively go out of your way to help your partner avoid accidentally gaslighting themselves.
For instance, I was having a conversation with someone, this week, about culture war related topics.
A few hours into the discussion I asked,
“Suppose that the leaders of the Black Lives Matter movement (not the “rank and file”) had a seriously flawed impact model, such that all of the energy going into this area didn’t actually resolve any of these terrible problems. In that case, would you have a different feeling about the movement?”
(In fact, I asked a somewhat more pointed question: “If that were the case, would you feel more inclined to push a button to ‘roll back’ the recent flourishing of activity around BLM?”)
I asked this question, and the person said some things in response, and then the conversation drifted away. I brought us back, and asked it again, and again we kind of “slid off.”
So I (gently) pointed this out,
“We’ve asked this question twice now, and both times we’ve sort of drifted away. This suggests to me that maybe there’s some bucket error or false dichotomy in play, and I imagine that some part of you is trying to protect something, or making sure that something doesn’t slip in sideways. How do you feel about trying to focus on and articulate that thing, directly?”
We went into that, and together, we drew out that there were two things at stake, and two (not incompatible) ways that you could view the situation:
On the one hand BLM, and the recent protests, and other things in that space, are a strategic social change movement, which has some goals, and is trying to achieve them.
But also, it is an expression of rage and frustration at the pain that black people in the United States, as a group, have had to endure for decades and decades. And separately from the question of “will these actions result in the social change that they’re aiming for?”, there’s just something bad about telling those people to shut up, and something important about this kind of emotional expression on the societal level.
(Which, to translate a little, is to say “no, the leaders having the wrong impact model, on its own, wouldn’t be a crux, because that is only part of the story.”)
Now, if we hadn’t drawn this out explicitly, my conversational partner might have been in danger of making a bucket error, gaslighting themselves into believing that they think it is correct or morally permissible to tell people or groups that they should repress their pain, or that they shouldn’t be allowed to express it.
And for my part, this was itself a productive exploration, because, while it seems sort of obvious in retrospect (as these things often do), I had only been thinking of “all these things” as strategic societal reform movements, and not mass expressions of frustration. But, actually, that seems like a sort of crucial thing to be tracking if I want to understand what is happening in the world, and/or I want to try and plot a path to actual solutions. For instance, I had already been importing my models of social change and intervention-targeting, but now I’m also importing my models of trauma and emotional healing.
(To be clear, I’m very unsure how my individual-level models of trauma apply at the societal level. I do think it can be dangerous to assume a one-to-one correspondence between people and groups of people. But also, I’ve learned how to do Double Crux from doing IDC, and vice versa, and I think modeling groups of people as individuals writ large is often a very good starting point for analysis.)
So overall we went from a place of “this person seems kind of unwilling to consider the question” to “we found some insights that have changed my sense of the situation.”
Granted, this was with a rationalist-y person, who I already knew pretty well and with whom I had mutual trust, who was familiar with the concept of bucket errors, and had experience with Focusing and introspection in general.
So on the one hand, this was easy mode.
But on the other hand, one takeaway from this is “with sufficient skill between the two people, you can get past this kind of problem.”
I think you’re selling people way short.

I agree there’s a depressingly huge majority out there that acts the way you describe, but, like, the whole point of having a rationalist community is to train ourselves to deal with that fact, and I think the people I interact with regularly basically have the skill of at least making a serious attempt at confronting deeply held beliefs.
(I certainly think many people in LW circles still struggle with it, especially in domains different from the ones they were trained on. But I’ve seen people do at least a half-passable job at it pretty frequently.)
I think shminux may have in mind one or more specific topics of contention that he’s had to hash out with multiple LWers in the past (myself included), usually to no avail.
(Admittedly, the one I’m thinking of is deeply, deeply philosophical, to the point where the question “what if I’m wrong about this?” just gets the intuition generator to spew nonsense. But I would say that this is less about an inability to question one’s most deeply held beliefs, and more about the fact that there are certain aspects of our world-models that are still confused, and querying them directly may not lead to any new insight.)