You accuse me of using Stereotypes rather than Rigor, but I in turn accuse you of using Social Proof rather than Rigor, which I consider far more dangerous, because it leads to self-reinforcing information cascades. By reflexively characterizing all skepticism as hostile, you further reinforce this dynamic by creating a with-us-or-against-us atmosphere.
Yes, I don’t actually believe that ~1000 or so CFAR alumni self-reports represent enough evidence to overturn my initial opinion. There are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy, but I wonder if you would as forcefully reject a similar Stereotype-based dismissal of that. I’d be very happy to see some real rigor, but I’m not aware of any such rigor from CFAR that I would actually trust to bring back a negative result if the same procedure were used on homeopathy enthusiasts. (And by the way, in 2014 Anna Salamon said CFAR was “supposed to be doing better science later,” meaning better than self-reports and personal impressions. How much later is later?)
I never gave any indication that my comment represented anything but my own personal impression, or that it somehow trumps the experiences of others. But I’m going to keep pointing out that I see the emperor wearing fewer clothes than he claims for as long as I continue to see it that way, and I consider this to be an explicitly prosocial act. I don’t gain anything personally by this, and these contentious posts are actually fairly stressful for me to write, but I consider it worth it to try to push back against your open advocacy of credulousness and protect a rationalist community like Less Wrong from evaporative cooling.
I have not in fact attended a CFAR workshop and don’t intend to, for reasons that might get me in trouble with the “Sunshine Regiment” if I were to explain, but I have read the posts explaining Double Crux and have even found it useful once or twice. I’m happy to try it with you if you’d like.
Hello, I’m the person who said Double Crux seems like an attempt to solve a problem that almost never happens. More specifically, the disagreements I see happening between reasonable people are almost always either too easy or too hard for Double Crux to be useful.
On questions like “what is the longitude of Tokyo” or “who starred in the original Star Wars,” two people could agree that looking up the answer on Wikipedia would convince both of them, which would technically fulfill the formal rules of Double Crux, but that hardly seems like a special “rationality technique” or something CFAR can take credit for inventing.
On the other hand, on a question that hinges on value differences like your examples, I can see one of three things happening: the disputants compromise their honesty by agreeing on a crux that appears relevant but isn’t actually connected to the real motivations behind their disagreement (“if spanking is statistically correlated with a decrease in lifetime earnings, p<0.05, then it is bad, otherwise it is good”); or they maintain their honesty but commit themselves to solving longstanding open problems in metaethics and/or changing genetically mediated personality differences through verbal argument; or they end up using other negotiation techniques and falsely calling the result Double Crux.
Double Crux does seem applicable to questions where the answer can’t simply be looked up, where the disagreement is strictly confined to the empirical level and doesn’t touch on value differences or epistemological questions in any way, and yet where the evidence is ambiguous enough to allow for reasonable disagreement. But such questions are rare in my experience.