Well, I’m not willing to take (and have never taken) the position that such problems never happen. As for your offer, it is appreciated, but I was hoping first to look at an existing example (or three), before trying it myself; else I would surely do it wrong, and the attempt would prove nothing…
But maybe, as a sort of prelude, we could start with you giving some examples of real-life situations that would be solved by the Double Crux?
Yeah. (Also thanks for being willing to spend time on this—when I imagine myself thinking a thing is Useless, then I imagine it feeling costly to give it extra chances to prove itself.)
The “counting up vs. counting down” post that I wrote yesterday to near-zero acclaim is one of them—often people are sort of talking past each other even while both seem to be fighting for good and coherent goals, and Double Crux motions (why do you believe what you believe; what would cause me to change my own mind) help uncover those dynamics faster than default motions. “Ohhhhh, wait, hang on—I think I would agree with what you’re saying if I thought that we couldn’t expect to do this perfectly, and should be happy with any results above zero, and happy proportional to how far above zero we get.”
Another is the issue of burden of proof, which I think I’ve seen cited in Double Crux explanations specifically somewhere, maybe on Facebook. The thing I’m remembering is something like: if both sides disagree about where the burden of proof lies, then both sides will end up “declaring victory” prematurely and saying that the other side has failed to justify itself. So if Bob thinks corporal punishment is how it’s always been done, and it’s on the bleeding hearts to prove that one should never spank kids, and Joe thinks nonviolence and sovereignty are the obvious priors, and it’s on the backwards troglodytes to prove that spanking is net beneficial, the debate won’t ever really move forward productively. Double Crux solves this in theory because each person, if constantly scanning their own belief structure and asking what would cause them to change their own mind, will notice what burden of proof they’re already expecting of their own beliefs, and can make that known to the other person.
Some other situations, off the top of my head:
You and I are in a car in traffic, and I honk the horn at someone and wave a middle finger at them, and you’re really uncomfortable and criticize my road rage, and we’re trying to converge on whether it was actually right that I did what I did. Double Crux seems like a good tool for each of us to get to the bottom of our implicit models and make them available to the other person.
You and I are living together in a house, and we have some sort of agreement about the cleanliness of the common spaces, and we keep clashing over it such that I feel judged and you feel defected on, and to some extent (given that each of us has our own frame) we’re both right. Double Crux (or at least the generators that caused Double Crux to be invented) seems like a useful tool for helping us keep the argument on track à la “under what circumstances would you agree my mess was permissible/under what circumstances would I agree I’d been too cavalier” (such that we can feel confident things will be different in the future because our models now converge), versus having it spiral off into “you’re a dick/you’re a slob,” which isn’t crucial to our disagreement in the same way.
You and I are trying to decide how to divide a chunk of value (e.g. $10,000 we were given in a grant, or our work hours over the next month), and we strongly disagree to the point that there’s sort of a zero-sum game (e.g. I need all of my hours and some of yours to accomplish my plan, and the same is true in reverse for your plan). We could resolve this through rank, or we could resolve it in a social pressure game, or we could just fight and sink everything, but through Double Crux or something like it, it seems likely that we can come closer to understanding why the other person is so confident that their use of resources is better, and once we both have identical overlapping models of both sides, it seems likely that we can act strategically in a coordinated fashion to choose the best tradeoff.
Hmm… I appreciate the effort that went into your reply, but I think I may’ve been unclear about what I asked: I was hoping to see actual examples—not hypothetical examples, nor categories (into which some unspecified examples are alleged to fall)!
That said, your hypothetical examples are relatively informative, so, thank you! They do much to increase the certainty of my previously-somewhat-tentative view that Double Crux is not a terribly useful technique in most circumstances (such as most of the ones you listed).
This, clearly, is the opposite reaction to the one you were (presumably) hoping for; perhaps I still have some fundamental misunderstanding. Real-life examples would, I think, really be quite helpful here.
Hmmm. Maybe there’s something in here about the difference between “Double-Crux-like” and “formal Double Crux”? On reflection, after you said you’re more certain Double Crux is low-utility, I was maybe imagining that this was because you saw the formal Double Crux framework as brittle or overly constraining, whereas you might agree that somebody adhering to the “spirit” of Double Crux (which could also be fairly labeled the spirit of inquiry or the spirit of cooperative disagreement or the spirit of impartial investigation and truth-seeking, because it’s the thing that generated Double Crux and not something that’s owned by the named technique) would be more likely to make progress than someone not adhering to said spirit.
Hello, I’m the person who said Double Crux seems like an attempt to solve a problem that almost never happens. More specifically, the disagreements I see happening between reasonable people are almost always either too easy or too hard for Double Crux to be useful.
On questions like “what is the longitude of Tokyo” or “who starred in the original Star Wars,” two people could agree that looking up the answer on Wikipedia would convince both of them, which would technically fulfill the formal rules of Double Crux, but that hardly seems like a special “rationality technique” or something CFAR can take credit for inventing.
On the other hand, on a question that hinges on value differences like your examples, I can see one of three things happening: either the disputants compromise their honesty by agreeing on a crux which appears relevant but isn’t actually connected to the real motivations behind their disagreement (“if spanking is statistically correlated with a decrease in lifetime earnings, p<0.05, then it is bad, otherwise it is good”), or they maintain their honesty but commit themselves to solving longstanding open problems in metaethics and/or changing genetically mediated personality differences through verbal argument, or they end up using other negotiation techniques and falsely calling it Double Crux.
Double Crux does seem applicable to questions where the answer can’t simply be looked up, where the disagreement is strictly confined to the empirical level and doesn’t touch on value differences or epistemological questions in any way, yet also where the evidence is ambiguous enough to allow for reasonable disagreement. But those are rare in my experience.
I note there’s something in here that I’m reading as a pseudofallacy—it’s the same reason why MythBusters is terrible, and it goes like “I can only think of these three outcomes, and therefore those are the most likely outcomes.”
This thread and the original Double Crux thread on LessWrong (plus the ~1000 CFAR alumni) are full of people saying that Double Crux does indeed work to solve discourse problems that crop up a lot.
That absolutely does not erase your personal experience of a) not seeing those problems and b) not seeing Double Crux solve them. Your personal experience is valid and real and definitely counts as data.
But there’s a particular sort of … audacity? … in taking one’s own personal experience, and using it to trump the experiences of others, and concluding with fairly strong confidence “this thing that a lot of smart people say is useful just isn’t.”
In your shoes, I’d say something like what I said in my Focusing post, which is “this thing that is useful for a lot of people isn’t useful for me or the people around me.” That seems more solidly justified and epistemically sound, and enriches an onlooker’s understanding of the situation rather than creating crosswise narratives.
In particular, as I tried to do with Focusing, I’d make a genuine attempt to learn Double Crux (from the people who know what they’re talking about and can point out your mistakes and scaffold your understanding) before writing it off. I weakly predict that you haven’t done A + B + C where A is attend a CFAR workshop or one of their Double Crux instruction sessions at e.g. EA Global, B is talk directly to somebody who’s skilled in Double Crux and ask them to help you overcome the standard failure modes, and C is go out and really actually try to follow the real actual steps for five very different sorts of disagreements with real actual humans.
(By the way, it’s completely fine to have not done A + B + C. People have higher priorities. But I personally think that in a rationalist community like Less Wrong, we have a responsibility to not claim things are false or useless or stupid until we’ve actually attempted to falsify them, not just scanned through our own experiences for confirming evidence. If I were in your shoes and I didn’t think Double Crux was useful and I also didn’t intend to do A + B + C, I’d caveat my suspicions of its relative uselessness heavily by pointing out that I was using Stereotypes rather than Rigor, and I want people on Less Wrong to call for and socially reinforce that sort of standard.)
Will probably add that to my list of posts to write this month.
Also, am willing to do the thing that’s been suggested over and over in this thread, and do a Double Crux with you on the usefulness/uselessness of Double Crux, including doing the motions unilaterally while you do whatever you feel like. I could use more practice with Double Cruxing in a not-fully-cooperative environment, since it seems like a plurality of the important debates happen with people who aren’t willing to enter the Double Crux frame anyway.
You accuse me of using Stereotypes rather than Rigor, but I in turn accuse you of using Social Proof rather than Rigor, which I consider far more dangerous, because it leads to self-reinforcing information cascades. By reflexively characterizing all skepticism as hostile, you further reinforce this dynamic by creating a with-us-or-against-us atmosphere.
Yes, I don’t actually believe that ~1000 CFAR alumni self-reports represent enough evidence to overturn my initial opinion. There are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy, but I wonder if you would as forcefully reject a similar Stereotype-based dismissal of that. I’d be very happy to see some real rigor, but I’m not aware of any such rigor from CFAR that I would actually trust to bring back a negative result if the same procedure were used on homeopathy enthusiasts. (And by the way, in 2014 Anna Salamon said CFAR was “supposed to be doing better science later,” meaning better than self-reports and personal impressions. How much later is later?)
I never gave any indication that my comment represented anything but my own personal impression, or that it somehow trumps the experiences of others. But I’m going to keep pointing out that I see the emperor wearing fewer clothes than he claims for as long as I continue to see it that way, and I consider this to be an explicitly prosocial act. I don’t gain anything personally by this, and these contentious posts are actually fairly stressful for me to write, but I consider it worth it to try to push back against your open advocacy of credulousness and protect a rationalist community like Less Wrong from evaporative cooling.
I have not in fact attended a CFAR workshop and don’t intend to, for reasons that might get me in trouble with the “Sunshine Regiment” if I were to explain, but I have read the posts explaining Double Crux and have even found it useful once or twice. I’m happy to try it with you if you’d like.
I disagree with your claim that I “reflexively characterized all skepticism as hostile.” I have reread my own comment and I do not think that’s a fair or accurate synopsis.
I believe you are overstating your claim that “there are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy” and disagree with the attempt to draw an equivalency there (I both do not think the situations are analogous and don’t think you could actually find thousands of people in the intersection of “smart” and “endorses homeopathy”).
My main point is that it looks to me like you are skeptical of everything but your own impressions, and that Less Wrong should be the sort of place where people actually take heuristics and biases literature seriously, and take the Sequences seriously, and are aware of how fallible their own thinking and impression-making mechanisms are, and how likely it is that they’re being influenced by metacognitive blindspots, and take deliberate and visible steps to compensate for all of that by practicing calibration, using reference class forecasting, taking the outside view, making concrete predictions, seeking falsification rather than confirmation, etc. etc. etc.
In short, I wasn’t asking you to be less skeptical, I was asking you to add one more person to your list of people you’re skeptical of—yourself.
I’m attempting to point out that your claim “Double Crux seems like an attempt to solve a problem that almost never happens” seems to have been outright falsified—even if your homeopathy analogy holds, homeopaths aren’t necessarily hypochondriacs, and I would trust the reports of homeopaths who are saying “I am experiencing this-or-that physiological distress which requires some form of treatment” or “I am having this-or-that medical problem which is lowering my quality of life” without reference to their thoughts on what would fix it. It does not seem that you are updating away from “the problems that Double Crux purports to solve are rare” and toward “those problems are rare in my experience but reliably common for large numbers of people.”
I’m attempting to point out that your statement “I can see one of three things happening” was made in such a way as to imply that there are no other likely things that might happen, and that you’re considering your ability to generate hypotheses or scenarios or predictions likely to be sufficient and near-complete. It’s like when the MythBusters say “Well, we failed to recreate claim X, and therefore claim X is impossible!” That whole paragraph was setting up strawmen and false dichotomies and ignoring giant swaths of possibility.
I didn’t feel like you really addressed the thrust of my previous reply, which was something like “If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?”
It does not look, based on your comments thus far, like you’re sincerely asking that question. Again, that’s fine—it could simply be that it’s not worth your time. Or it could be that you’re asking that question and I just haven’t noticed yet, and that’s fine because it’s in no way your job to appease some rando on the internet, and my endorsement is not your goal.
But the issue I have, at least, has nothing to do with your opinion on Double Crux. It has to do with the public impression you’re leaving, of how you’re forming and informing it. You’re laying claim to explicitly prosocial behavior on the basis of continuing skepticism, and I simply don’t believe you’re living up to the ideals you think you are. I think Less Wrong has (or ought to have) a higher standard than the one you’re visibly meeting. The difference between solving the Emperor’s Clothes problem and just being a contrarian is evidence and sound argument.
Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However, it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.
“If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?”
Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would list the specific concerns I have about Double Crux, and hope for constructive counterarguments (as clone of saturn did). Seeing that neither of these approaches generated any evidence, I would deduce that my impressions were right.
What makes you think describing why you personally won’t go to a workshop would get you in trouble?
I suspect I’m already being more confrontational than you’d prefer, and I don’t want to further wear out my welcome, or take the risk of causing unnecessary friction, by bringing up any other potentially negative points not directly related to CFAR’s rationality content or Double Crux. Should I take it that I was being unnecessarily cautious?