Why did I believe Oliver Sacks?
So, it’s recently come out that Oliver Sacks made up a lot of the stuff he wrote.
I read parts of The Man Who Mistook His Wife for a Hat a few years ago, and read Musicophilia and Hallucinations earlier this year. I think I’m generally a skeptical person, one who is not afraid to say “I don’t believe this thing that is being presented to me as true.” Indeed, I find myself saying that sentence somewhat regularly when presented with incredible information. But for some reason, when reading Oliver Sacks, I didn’t ask myself if what I was reading was true. Why was this?
The main reason I can think of is that Sacks’s particular domain, which I’d call neurology or the behavior of brain-damaged patients, is one in which I had the prior belief that A. incredible stuff really does happen and B. we don’t really understand it. In particular, we have things like the behavior of split-brain patients and people like Phineas Gage. So my prior is that incredible things really do happen, and nothing Sacks said was any more unbelievable than these phenomena.
Also, for Musicophilia, the “domain” could additionally be said to be music, or humans’ reactions to music, which again is something I think is pretty incredible and that we don’t understand. Like, music is really powerful: why do we have such strong reactions to it? Why does it exist at all? Let me put it this way: music is so weird that if I hadn’t experienced its effects firsthand, I’d be inclined to think that the entire thing is “made up” and humanity is under some sort of mass delusion, confusion, or fraud.
The second reason I can think of is that something… the approach or voice or worldview or something else… about Oliver Sacks made me trust him; made me think he was generally sane and truthseeking and honest. I’m not entirely sure why this is. I’ll be thinking about this more.
If you were like me and you were insufficiently skeptical of Oliver Sacks’s claims, it’s worth asking: why did I make this mistake? Certainly this is relevant to the general rationalist project, to the goal of being less wrong. Or maybe you weren’t like me, and you didn’t believe Sacks. Well, why not? Don’t just say “This isn’t actually hard,” because this is actually hard. Epistemics is hard! Under what principles or knowledge of the world did you not believe Sacks while also believing that split-brain patients were a thing?
One relevant point here: if you read the OP, it makes the case that Hat was Sacks’s nadir of honesty, and that guilt (from Hat being a wild success) and a changed methodology (his fame brought a lot of spontaneous contacts, so he didn’t need to make anything up to have something to write about, nor did he have any concern about getting published or about sales) mean that the later books were probably far more trustworthy (but also more boring, as Sacks himself complains in the diary entries it describes). The fabrications seem to have stopped. And Hat was published in 1985, but your other two entries came 22+ years later: Musicophilia in 2007 and Hallucinations in 2012. So they are much later books and, if you believe the investigation, probably legitimate.
What claims were fabricated, specifically? It seems like mostly minor stuff. As in, a man with visual agnosia probably did confuse very different objects, like his wife and his hat, though maybe Sacks invented the specific scene where he mistook his wife for his hat just for dramatic effect. It’s shitty that he would do that, but I still feel that whatever I believed after reading The Man Who Mistook His Wife for a Hat, I was probably right to believe, because the major details are probably true?
I think that the case of the twins who generated prime numbers is a serious one. It leads us to overestimate the capabilities of the human brain. I used to be skeptical about it and was criticized for not believing it.
Yeah that seems to be the most serious one, and the only one I could see that I had a real issue with.
I think the key reason why many people believed Oliver Sacks is that he had a good reputation within the scientific community and people want to “believe the science”. People don’t like to believe that scientists produce fraudulent data. It’s the same reason why people believe Matthew Walker.
I did have one bioinformatics professor who made a point of saying, in every lecture of the semester, that we should not believe the literature. Many people who think of themselves as skeptics are not skeptical when it comes to claims made by people who have a good reputation in the scientific community.
I never actually read Oliver Sacks. I believed, without thinking much about it, that he was probably credible because he was well-respected and I wasn’t aware of any major debunkings. And, well, brain malfunctions can get weird and tragic quickly.
The other reason I assumed that he was probably correct (without bothering to dig in and decide for myself) was that nothing I believed about Oliver Sacks was particularly load-bearing for me? I think the hardest I ever leaned on his work was when I told people, “Some of these LLMs have bizarre cognitive deficits. They’re like an Oliver Sacks patient or something.”
So he was more or less scientifically respectable, nobody had debunked him at the time, and my beliefs about his work weren’t load-bearing.
I’m not sure if I can really fix the generalized issue here: I believe a lot of basically unimportant things about the world solely because groups of credible-seeming people made a claim that sounded vaguely plausible. I believe a reasonable majority of those things are actually true, but wow, would it take forever to check them all. I do have a policy of discarding all further claims from known liars, because unskewing dishonest data is a fool’s errand.
I guess the best I can do is to (1) check things that are load-bearing in my life more carefully, (2) randomly sample and investigate enough items to maintain my awareness that some of my beliefs are in error, and (3) accept that some of the things I “know” just ain’t so.
There’s also the policy of being generally more skeptical, both of claims that something is true and of claims that something is false, and of more often saying “I don’t know.”
The problem with broad skepticism is that I would reject enormous numbers of true conclusions, including very basic facts about the world.
For example, I haven’t personally verified the heliocentric model of the solar system from observation. I think I’ve “verified” gravitational acceleration maybe once, poorly. I have “verified” vaccine reliability from the fact that I don’t know anyone with polio, but my parents (generally reliable witnesses) actually did remember when people got polio. Also, I once met an EMT who walked through old New England graveyards looking for “tiny tombstones” where multiple children under 10 in a family all died within a year or so, with causes of death and death dates that were consistent with known diseases. (But can I trust him? I mean, he seemed fairly trustworthy, but I never walked through those graveyards.) For that matter, my only verification that the Roman empire existed is what you can perform as a tourist in Italy. I believe one of my old cars had a broken overdrive system because a good mechanic said that it did, and because he fixed the problem within 10 seconds of opening the hood by yanking out a cable, and told me “No charge.” I didn’t take out my Haynes teardown manual and study the engine to verify his claim, though I easily could have. The car was, in fact, fixed, and that was good enough past 200,000 miles.
An over-broad skepticism of experts risks turning people into the kind of credulous fools who try to heal themselves with the powers of quartz crystals.
A more subtle balance is required here, I think, and accepting broad categories of information as “probably true because experts said so” is almost certainly a decent rule of thumb. Especially if you apply some common sense, if you keep track of which experts appear to be full of it (e.g., the replication crisis), and if you remain aware that you almost certainly have some false beliefs but don’t know which ones.
There’s a reason why I spoke about generally being skeptical. The person who easily accepts claims about the healing powers of quartz crystals is not broadly skeptical. They are not the person who often says “I don’t know”.
The replication crisis is about the community of psychology getting much better at getting rid of bullshit. Before the crisis, you could have listened to Feynman’s cargo cult science speech, where he explains why rat psychology is cargo cult science, and observed that the same criticisms apply to most of psychology.
Fields of science that behave like what Feynman describes as cargo cult science, but that haven’t had their replication crisis, are less trustworthy than post-replication-crisis psychology. Post-replication-crisis psychology still isn’t perfect, but it’s a step up.
There are many cases where systematically increased transparency that reveals problems in an expert community should get you to trust that community more, because it shows they have found ways to reduce those problems.
If you ask “What do I do if I don’t know?”, the answer is to make sure that you have decent feedback systems that allow you to change course if what you are doing isn’t working.
I thought the cases in The Man Who Mistook His Wife for a Hat were obviously as fictionalized as an episode of House: the condition described is real and based on an actual case, but the details were made up to make the story engaging. But I didn’t read it in 1985 when it was published. Did people back then take statements like “based on a true story” more seriously?
I read it long after it was published, and took it as less fictionalized than House; in that show the audience can expect events to take the occasional turn towards wild implausibility for the sake of drama. I expected MWMHWfaH to fudge personally identifying details, sure, but to hew as closely to medical reality as possible. The stories in the book aren’t dramas; he’s not trying to give his patients satisfying “character arcs” or inject moments of tension and uncertainty. I don’t care if the personal details are made up, but if the clinical details are wrong—as in the story of the twins generating prime numbers, mentioned in another comment—that seems like a real divergence from the truth. I had assumed, from reading the book, that this had literally happened, not that it was a cute story meant to illustrate the power of the human mind.
I was under the impression that Oliver Sacks was well regarded among his professional colleagues, so he wouldn’t just make up a bunch of important stuff out of whole cloth.
I have read about people who were skeptical of the substance of the Phineas Gage story too (i.e., that he had this big involuntary personality shift after his injury).
I took his claims at face value for many years, although there was always a small undercurrent of skepticism. But when I heard many years ago that he claimed that he himself suffered from prosopagnosia, the inability to recognize people’s faces, my skepticism came on fully. How could someone who couldn’t recognize people by their faces have the truly deep insight into personality that is presented in his books? It didn’t make sense to me. Either he wasn’t completely truthful about that affliction or the stories weren’t quite as they appeared. It all seemed too much to believe, and so from that moment I didn’t really believe any of it.
I think that we may be tempted to justify our adherence to Sacks’s narrative with nice arguments like “reading him feels honest and convincing.” However, this is plausibly a rationalization that avoids acknowledging much more common and boring reasons: we have a strong prior because (1) it’s a book, (2) it’s a best seller, (3) the author is a physician, (4) the patients were supposedly known to other physicians, nurses, etc., and (5) yes, as you also pointed out, we already know that neurology is about crazy things. So overall the prior is high that the book tells the truth even before we open it. That said, I really love Oliver Sacks’s books.
That is a tricky problem, isn’t it? There are much weirder things that are well-known to be true, like the ability to cure phantom limb pain with some well-placed mirrors, or the fact that you can give someone back the ability to recognize faces by disabling the part of the brain meant to ‘shortcut’ facial recognition. We build a probabilistic model of how ‘weird’ we expect a given domain to be, and, when a story’s weirdness is well within one or two standard deviations of what we expect, we don’t see any reason to be doubtful. It’s comparable to your neighbor lying about having gone to the grocery store yesterday, or having seen a rare breed of dog at the local park. The effort needed to investigate and uncover unsurprising lies would be intractable.
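To put that in rough Bayesian terms (a toy formalization on my part, nothing rigorous):

$$\frac{P(\text{honest}\mid\text{story})}{P(\text{fabricated}\mid\text{story})} = \frac{P(\text{story}\mid\text{honest})}{P(\text{story}\mid\text{fabricated})} \times \frac{P(\text{honest})}{P(\text{fabricated})}$$

In a domain we already expect to be weird, a weird story is nearly as likely under “honest” as under “fabricated,” so the likelihood ratio sits near 1 and we barely move off our prior trust in the author. The update only becomes large when a story is far weirder than the domain’s baseline.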
In the general case, this is a pretty harmless thing. It only allows lies to go unrecognized when the updates they’d produce would be small. The more dangerous case is when malicious actors abuse this same phenomenon: preempting a true-but-surprising story, one that would produce large updates once readers investigate and find it to be true, by circulating a similarly surprising false story that, after being discredited, causes readers to forgo the effort needed to investigate the original story.
Sadly, I think Sacks’s actions may, unintentionally, have the same effect as the explicitly malicious example above. “Cool neuroscience thing turns out to be made up for book sales” is now in the public psyche, and ordinary people who do not spend time reading neuroscience papers may default to dismissing interesting discoveries that could improve their lives or motivate them to learn more.