Bayes for Schizophrenics: Reasoning in Delusional Disorders

Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran’s 1996 theory explaining anosognosia through an “apologist” and a “revolutionary”.

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter’s arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient’s left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts back to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an “apologist”, trying to justify existing theories, and the right brain is a “revolutionary” which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient’s arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the almost twenty years since Ramachandran’s theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.


INTRODUCTION TO DELUSIONS

Strange as anosognosia is, it’s only one of several types of delusions, which are broadly categorized into polythematic and monothematic. Patients with polythematic delusions have multiple unconnected odd ideas: for example, the famous schizophrenic game theorist John Nash believed that he was defending the Earth from alien attack, that he was the Emperor of Antarctica, and that he was the left foot of God. A patient with a monothematic delusion, on the other hand, usually only has one odd idea. Monothematic delusions vary less than polythematic ones: there are a few that are relatively common across multiple patients. For example:

In the Capgras delusion, the patient, usually a victim of brain injury but sometimes a schizophrenic, believes that one or more people close to her has been replaced by an identical imposter. For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife’s appearance and mannerisms. This delusion sounds harmlessly hilarious, but it can get very ugly: in at least one case, a patient got so upset with the deceit that he murdered the hypothesized imposter—actually his wife.

The Fregoli delusion is the opposite: here the patient thinks that random strangers she meets are actually her friends and family members in disguise. Sometimes everyone may be the same person, who must be as masterful at quickly changing costumes as the famous Italian actor Fregoli (inspiring the condition’s name).

In the Cotard delusion, the patient believes she is dead. Cotard patients will neglect personal hygiene, social relationships, and planning for the future—as the dead have no need to worry about such things. Occasionally they will be able to describe in detail the “decomposition” they believe they are undergoing.

Patients with all these types of delusions1, as well as anosognosiacs, share a common feature: they usually have damage to the right frontal lobe of the brain (including in schizophrenia, where the brain damage is of unknown origin and usually generalized, but where it is still possible to analyze which areas are the most abnormal). It would be nice if a theory of anosognosia also offered us a place to start explaining these other conditions, but this is something Ramachandran’s idea fails to do. He posits a problem with belief shift: going from the originally correct but now obsolete “my arm is healthy” to the updated “my arm is paralyzed”. But these other delusions cannot be explained by simple failure to update: delusions like “the person who appears to be my wife is an identical imposter” never made sense. We will have to look harder.

ABNORMAL PERCEPTION: THE FIRST FACTOR

Coltheart, Langdon, and McKay posit what they call the “two-factor theory” of delusion. In the two-factor theory, one problem causes an abnormal perception, and a second problem causes the brain to come up with a bizarre instead of a reasonable explanation.

Abnormal perception has been best studied in the Capgras delusion. A series of experiments, including some by Ramachandran himself, demonstrate that Capgras patients lack a skin conductance response (usually used as a proxy of emotional reaction) to familiar faces. This meshes nicely with the brain damage pattern in Capgras, which seems to involve the connection between the face recognition areas in the temporal lobe and the emotional areas in the limbic system. So although the patient can recognize faces, and can feel emotions, the patient cannot feel emotions related to recognizing faces.

The older “one-factor” theories of delusion stopped here. The patient, they said, knows that his wife looks like his wife, but he doesn’t feel any emotional reaction to her. If it was really his wife, he would feel something—love, irritation, whatever—but he feels only the same blankness that would accompany seeing a stranger. Therefore (the one-factor theory says) his brain gropes for an explanation and decides that she really is a stranger. Why does this stranger look like his wife? Well, she must be wearing a very good disguise.

One-factor theories also do a pretty good job of explaining many of the remaining monothematic delusions. A 1998 experiment shows that Cotard delusion sufferers have a globally decreased autonomic response: that is, nothing really makes them feel much of anything—a state consistent with being dead. And anosognosiacs have lost not only the nerve connections that would allow them to move their limbs, but the nerve connections that would send distress signals and even the connections that would send back “error messages” if the limb failed to move correctly—so the brain gets data that everything is fine.

The basic principle behind the first factor is “Assume that reality is such that my mental states are justified”, a sort of Super Mind Projection Fallacy.

Although I have yet to find an official paper that says so, I think this same principle also explains many of the more typical schizophrenic delusions, of which two of the most common are delusions of grandeur and delusions of persecution. Delusions of grandeur are the belief that one is extremely important. In pop culture, they are typified by the psychiatric patient who believes he is Jesus or Napoleon—I’ve never met any Napoleons, but I know several Jesuses and recently worked with a man who thought he was Jesus and John Lennon at the same time. Here the first factor is probably an elevated mood (working through a miscalibrated sociometer). “Wow, I feel like I’m really awesome. In what case would I be justified in thinking so highly of myself? Only if I were Jesus and John Lennon at the same time!” A similar mechanism explains delusions of persecution, the classic “the CIA is after me” form of the disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: “In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA.”

But despite the explanatory power of the Super Mind Projection Fallacy, the one-factor model isn’t enough.

ABNORMAL BELIEF EVALUATION: THE SECOND FACTOR

The one-factor model requires people to be really stupid. Many Capgras patients were normal intelligent people before their injuries. Surely they wouldn’t leap straight from “I don’t feel affection when I see my wife’s face” to “And therefore this is a stranger who has managed to look exactly like my wife, sounds exactly like my wife, owns my wife’s clothes and wedding ring and so on, and knows enough of my wife’s secrets to answer any question I put to her exactly like my wife would.” The lack of affection vaguely supports the stranger hypothesis, but the prior for the stranger hypothesis is so low that it should never even enter consideration (remember this phrasing: it will become important later.) Likewise, we’ve all felt really awesome at one point or another, but it’s never occurred to most of us that maybe we are simultaneously Jesus and John Lennon.

Further, most psychiatric patients with the deficits involved don’t develop delusions. People with damage to the ventromedial area suffer the same disconnection between face recognition and emotional processing as Capgras patients, but they don’t draw any unreasonable conclusions from it. Most people who get paralyzed don’t come down with anosognosia, and most people with mania or anxiety don’t think they’re Jesus or persecuted by the CIA. What’s the difference between these people and the delusional patients?

The difference is the right dorsolateral prefrontal cortex (RDPC), an area of the brain strongly associated with delusions. If the brain damage that broke your emotional reactions to faces, or paralyzed you, spared the RDPC, you are unlikely to develop delusions. If your brain damage also hit this area, you are correspondingly more likely to come up with a weird explanation.

In his first papers on the subject, Coltheart vaguely refers to the RDPC as a “belief evaluation” center. Later, he gets more specific and talks about its role in Bayesian updating. In his chronology, a person damages the connection between face recognition and emotion, and “rationally” concludes the Capgras hypothesis. In his model, even if there’s only a 1% prior of your spouse being an imposter, if there’s a 1000 times greater likelihood of you not feeling anything toward an imposter than to your real spouse, you can “rationally” come to believe in the delusion. In normal people, this rational belief then gets worn away by updating based on evidence: the imposter seems to know your spouse’s personal details, her secrets, her email passwords. In most patients, this is sufficient to have them update back to the idea that it is really their spouse. In Capgras patients, the damage to the RDPC prevents updating on “exogenous evidence” (for some reason, the endogenous evidence of the lack of emotion itself still gets through) and so they maintain their delusion.
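Coltheart’s arithmetic can be checked directly. Using the illustrative numbers from the text (a 1% prior on the imposter hypothesis and a 1000:1 likelihood ratio for the absent emotional response), a standard odds-form Bayesian update does indeed yield a confident belief in the imposter—a minimal sketch, with all values taken as the hypotheticals above:

```python
def posterior(prior, likelihood_ratio):
    """Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# 1% prior on "imposter", evidence 1000x likelier under "imposter" than "spouse"
p = posterior(0.01, 1000)
print(round(p, 2))  # 0.91 — a confident "imposter" belief despite the low prior
```

This is why, on Coltheart’s account, the first factor alone suffices to *form* the delusion; what the damaged RDPC then removes is the ability to update back down as contrary evidence accumulates.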

This theory has some trouble explaining why patients are still able to update about other situations, but Coltheart speculates that maybe the belief evaluation system is weakened but not totally broken, and can deal with anything except the ceaseless stream of contradictory endogenous information.

EXPLANATORY ADEQUACY BIAS

McKay makes an excellent critique of several questionable assumptions of this theory.

First, is the Capgras hypothesis ever plausible? Coltheart et al pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps that one out of every hundred people has a fake spouse), either of which is preposterous. No reasonable person could entertain the Capgras hypothesis even for a second, let alone for long enough that it becomes their working hypothesis and develops immunity to further updating from the broken RDPC.

Second, there’s no evidence that the ventromedial patients—the ones who lose face-related emotions but don’t develop the Capgras delusion—once had the Capgras delusion but then successfully updated their way out of it. They just never develop the delusion to begin with.

McKay keeps the Bayesian model, but for him the second factor is not a deficit in updating in general, but a deficit in the use of priors. He lists two important criteria for reasonable belief: “explanatory adequacy” (what standard Bayesians call the likelihood ratio; the new data must be more likely if the new belief is true than if it is false) and “doxastic conservatism” (what standard Bayesians call the prior; the new belief must be reasonably likely to begin with given everything else the patient knows about the world).

Delusional patients with damage to their RDPC lose their ability to work with priors and so abandon all doxastic conservatism, essentially falling into what we might term the Super Base Rate Fallacy. For them the only important criterion for a belief is explanatory adequacy. So when they notice their spouse’s face no longer elicits any emotion, they decide that their spouse is not really their spouse at all. This does a great job of explaining the observed data—maybe the best job it’s possible for an explanation to do. Its only minor problem is that it has a stupendously low prior, and this doesn’t matter because they are no longer able to take priors into account.

This also explains why the delusional belief is impervious to new evidence. Suppose the patient’s spouse tells personal details of their honeymoon that no one else could possibly know. There are several possible explanations: the patient’s spouse really is the patient’s spouse, or (says the left-brain Apologist) the patient’s spouse is an alien who was able to telepathically extract the relevant details from the patient’s mind. The telepathic alien imposter hypothesis has great explanatory adequacy: it explains why the person looks like the spouse (the alien is a very good imposter), why the spouse produces no emotional response (it’s not the spouse at all) and why the spouse knows the details of the honeymoon (the alien is telepathic). The “it’s really your spouse” explanation only explains the first and the third observations. Of course, we as sane people know that the telepathic alien hypothesis has a very low base rate plausibility because of its high complexity and violation of Occam’s Razor, but these are exactly the factors that the RDPC-damaged2 patient can’t take into account. Therefore, the seemingly convincing new evidence of the spouse’s apparent memories only suffices to help the delusional patient infer that the imposter is telepathic.
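McKay’s point can be made mechanical: whether a hypothesis wins depends entirely on whether the prior enters the score. A minimal sketch, with made-up prior and likelihood numbers chosen only to reflect the qualitative story above (the alien hypothesis explains the evidence almost perfectly but is astronomically unlikely a priori):

```python
# Hypothetical numbers: score = prior * likelihood (or likelihood alone if
# the scorer, like the RDPC-damaged patient, cannot use priors).
hypotheses = {
    # name: (prior, likelihood of the evidence under the hypothesis)
    "really my spouse":          (0.999, 0.001),  # poor fit: why no emotion?
    "telepathic alien imposter": (1e-12, 0.9),    # explains everything, absurd prior
}

def best(use_prior):
    score = lambda h: (hypotheses[h][0] if use_prior else 1.0) * hypotheses[h][1]
    return max(hypotheses, key=score)

print(best(use_prior=True))   # really my spouse
print(best(use_prior=False))  # telepathic alien imposter
```

With the prior in play the spouse hypothesis wins by nine orders of magnitude; drop the prior and the best-fitting explanation, however baroque, wins every time—which is exactly the pattern of elaborating rather than abandoning the delusion.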

The Super Base Rate Fallacy can explain the other delusional states as well. I recently met a patient who was, indeed, convinced the CIA were after her; of note she also had extreme anxiety to the point where her arms were constantly shaking and she was hiding under the covers of her bed. CIA pursuit is probably the best possible reason to be anxious; the only reason we don’t use it more often is how few people are really pursued by the CIA (well, as far as we know). My mentor warned me not to try to argue with the patient or convince her that the CIA wasn’t really after her, as (she said from long experience) it would just make her think I was in on the conspiracy. This makes sense. “The CIA is after you and your doctor is in on it” explains both anxiety and the doctor’s denial of the CIA very well; “The CIA is not after you” explains only the doctor’s denial of the CIA. For anyone with a pathological inability to handle Occam’s Razor, the best solution to a challenge to your hypothesis is always to make your hypothesis more elaborate.

OPEN QUESTIONS


Although I think McKay’s model is a serious improvement over its predecessors, there are a few loose ends that continue to bother me.

“You have brain damage” is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I’ve never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I’m going to assume it doesn’t work. Why not?

Likewise, how come delusions are so specific? It’s impossible to convince someone who thinks he is Napoleon that he’s really just a random non-famous mental patient, but it’s also impossible to convince him he’s Alexander the Great (at least I think so; I don’t know if it’s ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it’s the CIA who’s after you, and not the KGB or Bavarian Illuminati?

Why is the failure so often limited to failed inference from mental states? That is, if a Capgras patient sees it is raining outside, the same process of base rate avoidance that made her fall for the Capgras delusion ought to make her think she’s been transported to the rainforest or something. This happens in polythematic delusion patients, where anything at all can generate a new delusion, but not in those with monothematic delusions like Capgras. There must be some fundamental difference between how one draws inferences from mental states versus everything else.

This work also raises the question of whether one can consciously use System II Bayesian reasoning to argue oneself out of a delusion. It seems improbable, but I recently heard about an n=1 personal experiment of a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don’t have their permission to post their story here, but I hope they’ll appear in the comments.

FOOTNOTES


1: I left out discussion of the Alien Hand Syndrome, even though it was in my sources, because I believe it’s more complicated than a simple delusion. There’s some evidence that the alien hand actually does move independently; for example it will sometimes attempt to thwart tasks that the patient performs voluntarily with their good hand. Some sort of “split brain” issues seem like a better explanation than simple Mind Projection.

2: The right dorsolateral prefrontal cortex also shows up in dream research, where it tends to be one of the parts of the brain shut down during dreaming. This provides a reasonable explanation of why we don’t notice our dreams’ implausibility while we’re dreaming them—and Eliezer specifically mentions he can’t use priors correctly in his dreams. It also highlights some interesting parallels between dreams and the monothematic delusions. For example, the typical “And then I saw my mother, but she was also somehow my fourth grade teacher at the same time” effect seems sort of like Capgras and Fregoli. Even more interestingly, the RDPC gets switched on during lucid dreaming, providing an explanation of why lucid dreamers are able to reason normally in dreams. Because lucid dreaming also involves a sudden “switching on” of “awareness”, this makes the RDPC a good target area for consciousness research.