I am p>99.999 confident that what I propose is right. I’d like that rigorously tested. Break me, crush me. Release me from the frustration of knowing (with every fibre in my body) that I’m right ; )
If you’re that confident in your position=pain theory, why would you need DAMN-IT? Why would your assessment of a patient do anything other than figure out which of your Big 5 muscles are involved in the pain? If the answer is, “Strengthen the glutes and your pain will stop,” then how is any pain ever properly characterized as degenerative?
Alternatively, if your theory is actually “position=pain/Big 5 unless some other pathology is involved,” then doesn’t your theory only say, “I’m 99.999 percent confident that pain properly diagnosed as idiopathic by someone who doesn’t subscribe to my theory is explained by my theory”?
At what point are you describing an invisible dragon?
Here’s the thing. You say you came to LW to get your theory disproven. Fine. But you are so confident in it that you expect to be wrong about one in one hundred thousand beliefs that you hold with that level of confidence. Beliefs I hold to that level of confidence include 9 * 7 = 63, because it’s possible I am misremembering my multiplication tables.
Now. Imagine trying to convince me that 9 * 7 = something else, just you and me in an empty room with no calculators.
This is why your entire sequence went by with minimal engagement and mild upvoting. The amount of work involved in “breaking you” is tremendous, especially over the Internet, especially when your model takes eight disorganized posts and has many irrelevant images in it, and you seemingly haven’t absorbed some basic lessons of The Sequences (TM). If I’m going to spend a bunch of time engaging with your theory and finding cruxes, I want to know in advance that you’ll play by the rules of good reasoning.
I’m not unwilling, but can you first provide three substantive answers to the following question:
What evidence would falsify your theory?
[APPRENTICE] My first is due in November. I’ve had a very hard time finding evidence-based parenting resources on the Internet that aren’t for extremely bad situations like poverty or abuse. I feel a burning need to be able to roughly model this kid’s subjective experience on a rolling basis because I suspect that’s what will make me the most emotionally effective AND let me impart the most rationality-adjacent thought habits. But the books I’ve come across have been either 1) “it’s all Piaget!” which seems somewhat outdated or 2) “Piaget is a good framework but outdated, and I’ve read some studies, but I’m terrible at synthesis!”.
Even just a reading list would be super great. Or a list of 10 heuristics for making parenting decisions. I feel like I need some kind of systematic approach.
I saw in one of your parenting posts that you cited parentingscience.com, which I’d come across in my searches and looked promising, but I couldn’t get enough clues from the site itself to figure out if it was a good foundation.
My not-a-Democrat grandmother had this exact experience when meeting him. They spoke for a few minutes, and she felt like he thought she was the most interesting person in the room. It left a permanent impression.
This is a Humble Bundle with a bunch of AI-related publications by Morgan & Claypool. $18 for 15 books. I’m a layperson re the material, but I’m pretty confident it’s worth $18 just to have all of these papers collected in one place and formatted nicely. NB: increasing my payment from $18 to $25 would have raised the amount donated to the charity from $0.90 to $1.25—I guess the balance of the $7 goes directly to Humble.
Which classic amp sound does her sonorus model? Is it like a Line6 head but it can read the player’s mind? Or is there a Vox AC30 in a pocket dimension? What’s the mic setup if there are multiple amp speakers? Who handles the mixing? I have so many questions!
Re trade vs conquest—If smart people are in charge of a smart populace, I agree. But China’s South China Sea colonialism + attitude toward Taiwan suggest that they aren’t viewing things solely in those terms. They act like a people who find terminal value in throwing their weight around and in taking Taiwan, or at least in reducing the influence of the U.S.-Japan alliance in the area by doing those things.
Re your example of Bretton Woods—in an analogous situation, the U.S./world order would be ready to give China great trade terms, but China would not even perceive such terms to be possible—wouldn’t that give China an incentive to conquer instead of trade, as the Axis powers did? I am probably misinterpreting your point here. (Does China want more access to U.S./world markets than it already has?)
This all seems pretty sensible.
The United States and China aren’t expansionary powers
How long do you think it would take for China to go from its current level of expansionism to a level that would make war with the US plausibly worthwhile? Could it happen in a generation, and what might precipitate it? I’m thinking about Weimar Germany to Nazi Germany, or (the reverse) Imperial Japan to Solid-State Electronics Japan.
The Uighur ethnic cleansing is Han (versus “Chinese” more generally, since the Uighurs are citizens of PRC) expansionism, right? Might that become more widespread and aggressive?
(Contra, there’s not much worth owning in southeast Asia or the Stan countries, and Russia would oppose outside influence in former Soviet states, based on past and current behavior.)
What about taking over the Korean peninsula? Wouldn’t be the first time. If China controlled DPRK’s territory, which I assume they could at will, they could much more easily get troops into ROK than the U.S. could, especially if your view on missile-based ocean-area denial is correct. The 30,000 U.S. troops in ROK would have no realistic hope of reinforcement so long as neither side had air or sea superiority. Does POTUS order them to fight to the last soldier, hoping that 30,000 dead or captured would motivate the country to fight back, or negotiate a peaceful retreat and withdrawal from ROK? I guess it depends on who’s POTUS.
I bet the modern PRC could stop another Operation Chromite literally dead in the water. If nothing else, spotting an incoming sea assault is so much easier than it was in 1950.
These same issues would apply if China attacked Japan.
Do I detect an homage to Ann Leckie?
I eventually got tired of not knowing where the karma increments were coming from, so I changed it to cache once a week. I just got my first weekly cache, and the information I got from seeing what was voted on outweighed the encouragement of any Internet Points Neurosis I may have.
This is good world-modeling.
I also mentioned Clever Hans, and you made a good point in response. Rather than sound like I am motte-and-baileying you, I will say that I was using “Clever Hans” with irresponsible imprecision, as a stand-in for more issues than were present in the actual Clever Hans case.
I’ve updated in the direction of “I’ll eventually need to reconsider my relationship with my dog” but still expect a lot of these research threads to come apart through a combination of
Subconscious cues from trainers—true Clever Hans effects (dogs are super clued in to us thanks to selection pressure, in ways we don’t naturally detect)
Experiment design that has obvious holes in it (at first)
Experiment design that has subtle holes in it (once the easy problems are dealt with)
Alternative explanations, of experimentally established hole-free results, from professional scientists (once the field becomes large enough to attract widespread academic attention). Like, yes, you unambiguously showed experimental result x, which you attributed to p, which would indeed explain x, but q is an equally plausible explanation that your experiment cannot rule out.
This is based on a model of lay science that tends to show these patterns, because lay science tends to be a “labor of love” that makes it harder to detect one’s own biases.
Specifically on the volunteer-based projects, I expect additional issues with:
Selection effects in the experimentees (only unusually smart/perceptive/responsive/whatever dogs will make it past the first round of training; the others will have owners who get bored from lack of results and quit)
Selection effects in the experimenters (only certain types of people will even be aware of this research, and only exceptionally talented dog trainers will stick with the program, because training even intelligent dogs takes so much f-ing patience, to say nothing of dumber dogs)
There may be lines of research that conclusively establish some surprising things about dog intelligence, and I look forward to any such surprisal. But I’m going to wait until the dust settles more—and until there are more published papers because I have to work a lot harder to understand technical information conveyed by video—before engaging with the research.
I have a dog and was aware of these people. My lack of reaction was due to a default assumption that this will turn out to be Clever Hansian once science brings its customary rigor to bear.
If not, I wonder if I will conclude that it’s unethical not to teach my dog how to communicate.
*The placebo effect is an effect.*
Yes, I guess I’m just wrestling with how it pings both instrumental and epistemic rationality.
I don’t know what “self-heal” means in your comment. Does that include conditions that go away on their own (episode of acute back pain, say)? In which case, wouldn’t it make more sense to call those temporary conditions, rather than conditions which require the intervention of some self-healing mechanism?
The only thing the open-label placebo effect tends to prove, to me, is that the placebo effect is operating at a mind-level much deeper than our rationality efforts can hope to reach.
I took an LW break for a few days and read the abstract of that Cochrane review. I’m going to go paragraph by paragraph in responding, which sometimes looks aggressive on the Internet but is just me crux-hunting.
From the Bayesian perspective, you have a model of the world according to which different treatments have different likelihoods of having effects. Then you pay attention to reality, and if reality doesn’t behave the way your model predicts, your model has to be updated. That’s the core of what epistemic rationality is about: being ready to update when your beliefs don’t pay rent.
If you want to go for the maximum of epistemic rationality, write down your credence for the effects of a given treatment and then check afterwards how good your predictions were. That’s the way to get a world model that’s aligned with empirical reality.
Agreed. I would do this now in advance of another treatment I suspected was woo.
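The credence-tracking exercise described above can be sketched in a few lines of Python. The Brier score is one standard way to grade such predictions; the function and the example log entries here are my own illustration, not anything from the original exchange:

```python
# Sketch of the credence-tracking idea: record a probability that each
# treatment will work, record the outcome (1 = helped, 0 = didn't), then
# score calibration with the Brier score. 0.0 is perfect; 0.25 is what
# you'd get from always saying 50%.

def brier_score(predictions):
    """predictions: list of (credence, outcome) pairs, outcome in {0, 1}."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical log: credence that each treatment would help, and whether it did.
log = [
    (0.9, 1),   # physical therapy: expected to help, did
    (0.2, 0),   # suspected woo: expected nothing, got nothing
    (0.2, 1),   # surprise: a low-credence treatment that worked
]
print(round(brier_score(log), 2))  # → 0.23
```

A lower score over many logged treatments would be evidence that your model of which interventions work is tracking reality.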
While doing this, it’s worth keeping in mind what you care about. One alternative-medicine treatment, for example, is colon cleansing. People who do colon cleansing usually observe that after taking the colon-cleansing substance their shit has a particular surprising form. If you were previously skeptical that the treatment did anything, you shouldn’t take the fact that your shit now has a surprising form you didn’t expect as evidence that the treatment provides the medical benefits it’s claimed to.
“Laughing deliriously” is not a result I would have expected from getting my leg tugged on exactly once, but I understand you (above) to be claiming that the unexpected result is evidence that it was not merely a placebo. I’m not a physiologist, but I can’t even begin to think of a reason for leg-tug->delirious-laugh other than “placebo.”
There are a bunch of alternative medical interventions that follow this pattern: they provide surprising effects which then convince people that the intervention is great, while not providing the hoped-for benefits.
This is my understanding as well, and it acts as a global “less likely” coefficient any time I hear any claim made by the alternative medicine community.
When thinking about the issue of chiropractic interventions, there’s also the question of whether the treated “pelvic misalignment” is the root cause.
The DO made no claims about what caused the pelvic misalignment (although she speculated that it was because I drive a manual transmission!), only that pelvic misalignment was the cause of the pain.
In the scenario where the “pelvic misalignment” is due to one leg being shorter than the other, it’s plausible that the “pelvic misalignment” is going to recur in the future even after it gets fixed.
Yes, and grossly/radiographically visible pelvic misalignment is a thing that happens due to legs of different lengths, but you are being too charitable. This DO did not say my legs were different lengths or that the misalignment was grossly visible, and in fact, she claimed that many such misalignments were invisible to x-ray.
In that case, if you were to go every month to get your “pelvic misalignment” fixed by the chiropractor, that’s likely better than painkillers, but it’s still not a perfect intervention.
In which case I think a “doctor” or practitioner of any stripe has an obligation to dig deeper for an actual root cause. Who stops at, “Very gently tugging on this guy’s leg once a month provides some relief for his back pain that was bad enough that he went to the ER”? I’m not prepared to excuse that level of incuriousness; it causes me to down-update my trust in everything the practitioner says.
When it comes to working with the body, there are a few strains of experts who develop their expertise through trained perception and who mostly work outside of academia, given that trained expert perception is subjective in nature. I don’t think it’s necessary that those experts be able to translate what they are doing into concepts that break down along lines that can be objectively observed (like x-rays) rather than subjectively accessed.
I completely disagree. I would expect what you’re calling subjective trained expert perception to be constantly subject to all of the following cognitive biases (reading the Wikipedia list) and others I haven’t thought of:
Anchoring bias (you learn bullshit in your bullshit school about what causes unilateral back pain, and that’s your frame for all unilateral back pain now)
Availability bias (these other biases cause you to remember confirming data, which then...causes you to remember confirming data)
Confirmation bias (you remember everyone who you helped and forget the people you didn’t help)
Backfire effect (when someone says you didn’t help, you find a way to use that as evidence that your underlying theory is true)
Hindsight bias (when someone has a good result, you believe it was predictable at the time you treated them, which makes you look great in your own mind)
Illusory truth effect/availability cascade (everyone in your professional community says unilateral back pain is caused by pelvic misalignment)
Sunk-cost thinking (you spent a lot of money and professional time learning to tug on people’s legs, so you tend to ignore evidence that your entire field is woo, or decide that “Western medicine” is the real villain)
Pareidolia (you perceive important patterns in random noise—cf. study where practitioners can’t agree on where a supposed trigger point is)
Salience bias (a patient says you cured their back pain vs. a patient who you never see again and posts on the Internet dragging your entire field)
Summing over all these biases, I have basically no faith at all in subjective trained expert perception of people performing chiropractic/bodywork. Crux: Whether there are reliable studies tending to show that such practitioners have any ability to diagnose/recognize illness conditions better than chance, or that their diagnosis-specific chiropractic manipulations do better than the replacement-level intervention we would expect from “Western medicine” treating the same symptom. No partial credit for “You have a C4-C5 subluxation that won’t show up on a CT scan; I prescribe [the same physical therapy you’d get from an MD].”
Just as a good musician might not be able to give you an objective model of his expertise, that doesn’t mean he doesn’t have expertise.
Bodywork-expertise claims to have both objective effects on humans and objective models behind those effects. I don’t think musicians make similarly specific claims (beyond very general things like “these three notes sound weird because the third one is out of key” or “songs in a minor key tend to feel sad relative to songs in the major key”). Musicians and bodyworkers may both claim “expertise through trained perception,” but a musician claiming that is making dramatically weaker factual claims at a dramatically weaker epistemic standard than the bodyworker.
In general, it makes sense to go first for the treatments you think are most likely to succeed and then, if they don’t work, to move down the list (depending on how desperate you are) to treatments you believe have a lower chance of success. In practice it also makes sense to factor in what success in a given treatment would mean, the possible risks of the treatment, and the costs.
This is a sensible response, probably the ideally correct one, and I appreciate it.
My counter question is: as a limited agent, at what point, if ever, am I justified in writing off (that is, assuming it was a placebo) a treatment with no plausible mechanism of action? I’ve done mainstream treatments for this spasm as well, without the zany effect, and without the equivalent reduction in magnitude.
Jeffrey Epstein didn’t commit suicide. Two cameras malfunctioned, the normal procedures weren’t followed, and it’s silly to think he didn’t have compromising information on important people. And he was an incredibly high-profile prisoner.
“Attorney General William Barr described Epstein’s death as ‘a perfect storm of screw-ups’.” Yet several guards were indicted on charges of conspiracy and record falsification.
This belief is so obvious to me that I felt like I was being gaslighted by news outlets and even academics who later called the belief a conspiracy theory in the same class as QAnon and UFOs, including a guest on a FiveThirtyEight podcast about conspiracy theories (I’m a huge FiveThirtyEight fan; they laid the groundwork for me to appreciate this community, which in turn mostly increased my appreciation for FiveThirtyEight).
A majority of Americans seem to agree with me, although who knows why, so maybe it’s not a “weird” belief except when compared against the mass media/“elite” narrative.
You could de-convince me with statistics about how often those and similar cameras malfunctioned and how often guards disregarded normal procedures with other prisoners, low profile and high profile.
This post addresses none of the valid criticisms in comments on your recent posts, especially challenges to the accuracy of your assessment of counterparty risk. These were all different ways of saying, “Your posts do not contain enough information to allow me to determine the likelihood that you are a genius versus a crank.”
You can’t just keep invoking Scott Alexander’s Bitcoin regrets. Those opportunities are gone.
This post, in which you instead conclude that “haters gonna hate,” does not advance your cause. I am just irritated that you keep posting without seeming to absorb the community’s epistemic values.
I like the model you’re developing here as an intuitively plausible explanation of akrasia. However, I think the comparison of BUD/S to, say, ritual scarification or bullet-ant gloves isn’t strong enough to support your theory.
Like we might hope, it endows its survivors (because some die as a direct result of it) with focus, decisiveness, and basically all the conscientiousness they need to seriously “kick ass” — that is, underperform their cognitive potential much less than most do.
This point about BUD/S isn’t obviously correct to me. Something like 75% of candidates drop out without completing the course. This is strong evidence to me that BUD/S primarily selects for whatever it’s designed to optimize for (whether intentionally or unintentionally) rather than endowing it.
At least as of 1981, a major part of the weeding-out was occurring fairly early in the course:
Thirty-five percent of the attrites dropped during the indoctrination period; 27 percent, during the first 2 weeks of training; 15 percent, during Hell Week; and 23 percent, during the remainder of the training period.
(page v). An average 20% of the attrites who passed the screen quit during Hell Week (page v), three weeks into the actual course, and as high as 36% did in two of the classes studied (page 18). If BUD/S cultivated traits rather than selecting for them, I would expect the dropouts to be more evenly distributed. You could observe that many of the attrites during the indoctrination period failed the physical screening test, but then we have to determine how well conscientiousness correlates with passing the screen...
Admittedly, those statistics don’t differentiate between medical attrition and voluntary attrition, which were each about 40% of total attrition.
This study by the Navy doesn’t seem to support your claim that BUD/S makes people conscientious to the extent you suggest. SEALs seem to be somewhat, but not hugely, above civilian average on the conscientiousness scale (page 10). Somewhat contra my arguments, this observational study admits that it could not rule out the possibility that BUD/S increases conscientiousness (page 11).
This study by RAND indicates (page 11) that the Air Force Research Laboratory concluded that higher-than-average conscientiousness was predictive of success in the Combat Controller course (a component of the Air Force’s special operations side). Combat Controllers work alongside other branches’ special forces people, so presumably they need some of the same special sauce in order to succeed. CCT school is much shorter than BUD/S, I admit, but it’s some evidence that conscientiousness is a cause, not an effect, of success in Special Forces.
I think the most favorable claim you could make based on BUD/S is “To the extent that high conscientiousness is required for a BUD/S candidate’s success in the course and as a SEAL, only 25% of the candidates either have the requisite conscientiousness at the start of the course or develop it during the course before the course selects against their then-current level of conscientiousness.”
[edit: changed “SEALS” to “BUD/S” in the first graf]