This presentation of the technique does better than (my possibly faulty recollection of) the last time I saw it explained in one important respect: acknowledging that in any given case there very well might not be a double-crux to find. I’d be happier if it were made more explicit that this is a thing that happens, and that when it does it doesn’t mean “you need to double-crux harder”; it may simply be that the structure of your beliefs, or of the other person’s, or of both together, doesn’t enable your disagreement to be resolved or substantially clarified by this particular technique.
(If it empirically turns out that double-crux usually works in some class of circumstances, then I think this is a surprising and interesting finding, and any presentation of the technique should mention it and maybe offer explanations for why it might be so. My guess is that (super-handwavily) maybe half the time a given top-level belief’s support is more “disjunctive” than “conjunctive”, so that no single one of the things supporting it would definitely kill it if removed; and that typically if my support for belief X is more “conjunctive”, then your support for belief not-X is likely to be more “disjunctive”. And you only get a double-crux when both our beliefs have “conjunctive” support, and when, furthermore, there are bits in the two supports that match in the right way.)
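The handwavy model above can be made concrete with a toy simulation. All the numbers here (the 0.5 conjunctive rate, the 0.5 matching rate) are illustrative placeholders of my own, not claims about real disagreements:

```python
import random

random.seed(0)

# Toy model of the guess above: a belief's support is "conjunctive"
# (so a single crux exists) with some probability, and a double-crux
# requires conjunctive support on BOTH sides, plus the cruxes matching.
P_CONJUNCTIVE = 0.5   # illustrative placeholder
P_MATCH = 0.5         # illustrative placeholder

trials = 100_000
hits = 0
for _ in range(trials):
    mine_conjunctive = random.random() < P_CONJUNCTIVE
    yours_conjunctive = random.random() < P_CONJUNCTIVE
    cruxes_match = random.random() < P_MATCH
    if mine_conjunctive and yours_conjunctive and cruxes_match:
        hits += 1

print(hits / trials)  # ~0.125: a double-crux exists in roughly 1/8 of pairs
```

Under these made-up numbers a usable double-crux exists only about an eighth of the time, which is the shape of the claim: even modest per-side failure rates compound quickly once both sides have to cooperate structurally.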
Nitpicky remark: Of course in a rather degenerate sense any two people with a disagreement have a double-crux: if I think A and you think not-A, then A/not-A themselves are a double-crux: I would change my mind about A if I were shown to be wrong about A, and you would change your mind about not-A if you were shown to be wrong about not-A :-). Obviously making this observation is not going to do anything to help us resolve or clarify our disagreement. So I take it the aim is to find genuinely lower-level beliefs, in some sense, that have the double-crux relationship.
Further nitpicky remark: even if someone’s belief that A is founded on hundreds of things B1, B2, …, none of which comes close to being cruxy, you can likely synthesize a crux by e.g. conjoining most of the Bs. This feels only slightly less cheaty than taking A itself and calling it a crux. Again, finding things of this sort probably isn’t going to help us resolve or clarify our disagreement. So I take it the aim is to find reasonably simple, genuinely lower-level beliefs that have the double-crux relationship. And that is what I would expect often cannot be done, even in principle.
(I have a feeling I may have had a discussion like this with someone on LW before, but after a bit of searching I haven’t found it. If it turns out that I have and my comments above have already been refuted by someone else, my apologies for the waste of time :-).)
The claim of CFAR (based on instructors’ experience in practice) is that a single B represents something like 60–90% of the crux-weight underlying A “a surprising amount of the time.”
Like, I imagine that most people would guess there is a single crucial support maybe 5% of the time, and that the other 95% of the time there are a bunch of things, none of which is an order of magnitude more important than any other.
But it seems like there really is “a crux” maybe … a third of the time?
Here I’m wishing again that we could’ve gotten an Eli Tyre writeup of Finding Cruxes at some point.
Consider this hypothetical scenario:

In the course of your teaching me the double crux technique, we attempt to find a crux for a belief of mine. Suppose that there is no such crux. However, in our mutual eagerness to apply this cool new technique—which you (a person with high social status in my social circle, and possessed of some degree of personal charisma) are presenting to me as a useful and proven thing, which lots of other high-status people use—we confabulate some “crux” for my belief. This “faux crux” is not really a crux, but with some self-deception I can come to believe it to be one. You declare success for the technique; I am impressed and satisfied. You file this under “we found a crux this time, as in 60–90% of other times”.
Do you think that this scenario is plausible? Implausible?
Plausible. But I (at least attempted to) account for this prior to giving the number.
Like, the number I give matches my own sense from introspecting on my own beliefs, including doing sensible double-checks and cross-checks with other, very different tools such as Focusing or Murphyjitsu, and including e.g. later receiving the data and sometimes discovering that I was wrong and did not update in the way I thought I would.
I think you might be thinking that the “a surprising amount of the time” claim is heavily biased toward “immediate feedback from people who just tried to learn it, in a context where they are probably prone to various motivated cognitions,” and while it’s not zero biased in that way, it’s based on lots of feedback from not-that-context.
Interesting.

So if there’s a crux 1/3 of the time, and if my having a crux and your having a crux are independent (they surely aren’t, but it’s not obvious to me which way the correlation goes), we’d expect cruxes on both sides about 10% of the time. And since the two cruxes also have to match up in the right way, it seems like it would be surprising if there were an available double-crux more than about 5% of the time. Does that seem plausible in view of CFAR experience?
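As a quick sanity check on the arithmetic above (taking, as the comment does, a 1/3 per-side crux rate and independence as working assumptions, plus a made-up 1/2 matching rate to get from "both have cruxes" to "the cruxes line up"):

```python
from fractions import Fraction

# Working assumption from the comment (not CFAR data): each side
# independently has a crux for the top-level belief about 1/3 of the time.
p_crux = Fraction(1, 3)

# Probability that both sides have *some* crux, under independence:
p_both = p_crux * p_crux
print(p_both, "=", float(p_both))  # 1/9 ≈ 0.111, i.e. "about 10%"

# An actual double-crux also needs the two cruxes to line up; if they
# match, say, half the time (a made-up number), we land near 5%:
p_match = Fraction(1, 2)
print(float(p_both * p_match))  # ≈ 0.056
```

So the "about 5%" figure falls out of the stated assumptions only if one further assumes the cruxes match around half the time that both exist; a different matching rate moves the estimate proportionally.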
Of course double-cruxing could be a valuable technique in many cases where there isn’t actually a double-crux to be found: it encourages both participants to understand the structure of their beliefs better, to go out of their way to look for things that might refute or weaken them, to pay attention to one another’s positions … What do you say to the idea that the real value in double-cruxing isn’t so much that sometimes you find double-cruxes, as that even when you don’t it usually helps you both understand the disagreement better and engage productively with one another?
Actually finding a legit double crux (i.e. a B that both parties disagree on, which is a crux for an A that both parties disagree on) somewhere in the neighborhood of 5% of the time sounds about right.
More and more, CFAR leaned toward “the spirit of double crux” (seek resolution on your own cruxes, look for more concrete and more falsifiable things, assume your partner has reasons for their beliefs, try to do less adversarial obscuring of your belief structure) rather than “literally play the double crux game.”