The claim of CFAR (based on instructors’ experience in practice) is that a single B represents something like 60–90% of the crux-weight underlying A “a surprising amount of the time.”
Like, I imagine that most people would guess there is a single crucial support maybe 5% of the time and the other 95% of the time there are a bunch of things, none of which are an order of magnitude more important than any other.
But it seems like there really is “a crux” maybe … a third of the time?
Here I’m wishing again that we could’ve gotten an Eli Tyre writeup of Finding Cruxes at some point.
Consider this hypothetical scenario:
In the course of you teaching me the double crux technique, we attempt to find a crux for a belief of mine. Suppose that there is no such crux. However, in our mutual eagerness to apply this cool new technique—which you (a person with high social status in my social circle, and possessed of some degree of personal charisma) are presenting to me, as a useful and proven thing, which lots of other high-status people use—we confabulate some “crux” for my belief. This “faux crux” is not really a crux, but with some self-deception I can come to believe it to be such. You declare success for the technique; I am impressed and satisfied. You file this under “we found a crux this time, as in 60–90% of other times”.
Do you think that this scenario is plausible? Implausible?
Plausible. But I accounted for this (or at least attempted to) prior to giving the number.
Like, the number I give matches my own sense, introspecting on my own beliefs, including doing sensible double-checks and cross-checks with other, very different tools such as Focusing or Murphyjitsu, and including e.g. later receiving the data and sometimes discovering that I was wrong and had not updated in the way I thought I would.
I think you might be thinking that the “a surprising amount of the time” claim is heavily biased toward “immediate feedback from people who just tried to learn it, in a context where they are probably prone to various motivated cognitions,” and while it’s not zero biased in that way, it’s based on lots of feedback from not-that-context.
Interesting.
So if there’s a crux 1⁄3 of the time, and if my having a crux and your having a crux are independent (they surely aren’t, but it’s not obvious to me which way the correlation goes), we expect there to be cruxes on both sides about 10% of the time, which means it seems like it would be surprising if there were an available double-crux more than about 5% of the time. Does that seem plausible in view of CFAR experience?
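The back-of-envelope arithmetic here can be sketched as follows (under the independence assumption stated above, which the text itself flags as dubious):

```python
# Rough check of the estimate above: if each party independently
# has a crux about 1/3 of the time, how often do both have one?
p_crux = 1 / 3

# Probability both parties have *some* crux, under independence.
p_both = p_crux * p_crux
print(round(p_both, 2))  # ~0.11, i.e. "about 10%"

# Even when both have cruxes, they must land on the *same* B to
# form a double-crux; the text's guess is that this cuts the rate
# roughly in half, to around 5%.
```

The halving step is the text's own informal guess, not a derived quantity; nothing in the exchange pins down how often two independently found cruxes coincide.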
Of course double-cruxing could be a valuable technique in many cases where there isn’t actually a double-crux to be found: it encourages both participants to understand the structure of their beliefs better, to go out of the way to look for things that might refute or weaken them, to pay attention to one another’s positions … What do you say to the idea that the real value in double-cruxing isn’t so much that sometimes you find double-cruxes, as that even when you don’t it usually helps you both understand the disagreement better and engage productively with one another?
Actually finding a legit double crux (i.e. a B that both parties disagree on, and that is, for both of them, a crux for an A they also disagree on) happening in the neighborhood of 5% of the time sounds about right.
More and more, CFAR leaned toward “the spirit of double crux,” i.e. seek to move toward getting resolution on your own cruxes, look for more concrete and more falsifiable things, assume your partner has reasons for their beliefs, try to do less adversarial obscuring of your belief structure, rather than “literally play the double crux game.”