Bayes for Schizophrenics: Reasoning in Delusional Disorders

Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran’s 1996 theory explaining anosognosia through an “apologist” and a “revolutionary”.

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter’s arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient’s left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an “apologist”, trying to justify existing theories, and the right brain is a “revolutionary” which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient’s arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the almost twenty years since Ramachandran’s theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.


Strange as anosognosia is, it’s only one of several types of delusions, which are broadly categorized into polythematic and monothematic. Patients with polythematic delusions have multiple unconnected odd ideas: for example, the famous schizophrenic game theorist John Nash believed that he was defending the Earth from alien attack, that he was the Emperor of Antarctica, and that he was the left foot of God. A patient with a monothematic delusion, on the other hand, usually has only one odd idea. Monothematic delusions vary less than polythematic ones: there are a few that are relatively common across multiple patients. For example:

In the Capgras delusion, the patient, usually a victim of brain injury but sometimes a schizophrenic, believes that one or more people close to her have been replaced by identical imposters. For example, one male patient expressed the worry that his wife was actually someone else, who had somehow contrived to exactly copy his wife’s appearance and mannerisms. This delusion sounds harmlessly hilarious, but it can get very ugly: in at least one case, a patient got so upset with the deceit that he murdered the hypothesized imposter—actually his wife.

The Fregoli delusion is the opposite: here the patient thinks that random strangers she meets are actually her friends and family members in disguise. Sometimes everyone may be the same person, who must be as masterful at quickly changing costumes as the famous Italian actor Fregoli (inspiring the condition’s name).

In the Cotard delusion, the patient believes she is dead. Cotard patients will neglect personal hygiene, social relationships, and planning for the future—as the dead have no need to worry about such things. Occasionally they will be able to describe in detail the “decomposition” they believe they are undergoing.

Patients with all these types of delusions1—as well as anosognosiacs—share a common feature: they usually have damage to the right frontal lobe of the brain (including in schizophrenia, where the brain damage is of unknown origin and usually generalized, but where it is still possible to analyze which areas are the most abnormal). It would be nice if a theory of anosognosia also offered us a place to start explaining these other conditions, but this Ramachandran’s idea fails to do. He posits a problem with belief shift: going from the originally correct but now obsolete “my arm is healthy” to the updated “my arm is paralyzed”. But these other delusions cannot be explained by simple failure to update: delusions like “the person who appears to be my wife is an identical imposter” never made sense. We will have to look harder.


Coltheart, Langdon, and McKay posit what they call the “two-factor theory” of delusion. In the two-factor theory, one problem causes an abnormal perception, and a second problem causes the brain to come up with a bizarre instead of a reasonable explanation.

Abnormal perception has been best studied in the Capgras delusion. A series of experiments, including some by Ramachandran himself, demonstrates that Capgras patients lack a skin conductance response (usually used as a proxy for emotional reaction) to familiar faces. This meshes nicely with the brain damage pattern in Capgras, which seems to involve the connection between the face recognition areas in the temporal lobe and the emotional areas in the limbic system. So although the patient can recognize faces, and can feel emotions, the patient cannot feel emotions related to recognizing faces.

The older “one-factor” theories of delusion stopped here. The patient, they said, knows that his wife looks like his wife, but he doesn’t feel any emotional reaction to her. If it were really his wife, he would feel something—love, irritation, whatever—but he feels only the same blankness that would accompany seeing a stranger. Therefore (the one-factor theory says) his brain gropes for an explanation and decides that she really is a stranger. Why does this stranger look like his wife? Well, she must be wearing a very good disguise.

One-factor theories also do a pretty good job of explaining many of the remaining monothematic delusions. A 1998 experiment shows that Cotard delusion sufferers have a globally decreased autonomic response: that is, nothing really makes them feel much of anything—a state consistent with being dead. And anosognosiacs have lost not only the nerve connections that would allow them to move their limbs, but the nerve connections that would send distress signals and even the connections that would send back “error messages” if the limb failed to move correctly—so the brain gets data that everything is fine.

The basic principle behind the first factor is “Assume that reality is such that my mental states are justified”, a sort of Super Mind Projection Fallacy.

Although I have yet to find an official paper that says so, I think this same principle also explains many of the more typical schizophrenic delusions, of which two of the most common are delusions of grandeur and delusions of persecution. Delusions of grandeur are the belief that one is extremely important. In pop culture, they are typified by the psychiatric patient who believes he is Jesus or Napoleon—I’ve never met any Napoleons, but I know several Jesuses and recently worked with a man who thought he was Jesus and John Lennon at the same time. Here the first factor is probably an elevated mood (working through a miscalibrated sociometer). “Wow, I feel like I’m really awesome. In what case would I be justified in thinking so highly of myself? Only if I were Jesus and John Lennon at the same time!” A similar mechanism explains delusions of persecution, the classic “the CIA is after me” form of disease. We apply the Super Mind Projection Fallacy to a garden-variety anxiety disorder: “In what case would I be justified in feeling this anxious? Only if people were constantly watching me and plotting to kill me. Who could do that? The CIA.”

But despite the explanatory power of the Super Mind Projection Fallacy, the one-factor model isn’t enough.


The one-factor model requires people to be really stupid. Many Capgras patients were normal intelligent people before their injuries. Surely they wouldn’t leap straight from “I don’t feel affection when I see my wife’s face” to “And therefore this is a stranger who has managed to look exactly like my wife, sounds exactly like my wife, owns my wife’s clothes and wedding ring and so on, and knows enough of my wife’s secrets to answer any question I put to her exactly like my wife would.” The lack of affection vaguely supports the stranger hypothesis, but the prior for the stranger hypothesis is so low that it should never even enter consideration (remember this phrasing: it will become important later). Likewise, we’ve all felt really awesome at one point or another, but it’s never occurred to most of us that maybe we are simultaneously Jesus and John Lennon.

Further, most psychiatric patients with the deficits involved don’t develop delusions. People with damage to the ventromedial area suffer the same disconnection between face recognition and emotional processing as Capgras patients, but they don’t draw any unreasonable conclusions from it. Most people who get paralyzed don’t come down with anosognosia, and most people with mania or anxiety don’t think they’re Jesus or persecuted by the CIA. What’s the difference between these people and the delusional patients?

The difference is the right dorsolateral prefrontal cortex, an area of the brain strongly associated with delusions. If whatever brain damage broke your emotional reactions to faces or paralyzed you or whatever spared the RDPC, you are unlikely to develop delusions. If your brain damage also damaged this area, you are correspondingly more likely to come up with a weird explanation.

In his first papers on the subject, Coltheart vaguely refers to the RDPC as a “belief evaluation” center. Later, he gets more specific and talks about its role in Bayesian updating. In his chronology, a person damages the connection between face recognition and emotion, and “rationally” concludes the Capgras hypothesis. In his model, even if there’s only a 1% prior of your spouse being an imposter, if the absence of feeling is 1000 times more likely given an imposter than given your real spouse, you can “rationally” come to believe in the delusion. In normal people, this rational belief then gets worn away by updating based on evidence: the imposter seems to know your spouse’s personal details, her secrets, her email passwords. In most patients, this is sufficient to have them update back to the idea that it is really their spouse. In Capgras patients, the damage to the RDPC prevents updating on “exogenous evidence” (for some reason, the endogenous evidence of the lack of emotion itself still gets through) and so they maintain their delusion.
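Coltheart’s numbers can be worked through directly with the odds form of Bayes’ rule. A minimal sketch (the 1% prior and 1000:1 likelihood ratio are the illustrative figures above, not clinical estimates):

```python
def posterior_prob(prior, likelihood_ratio):
    """Update a prior probability by a likelihood ratio (odds form of Bayes)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.01   # 1% prior that the spouse is an imposter (illustrative)
lr = 1000      # absent emotion is 1000x likelier given an imposter

p = posterior_prob(prior, lr)
print(f"P(imposter | no emotion) = {p:.3f}")  # about 0.910
```

So with those (generous) numbers, even an intact reasoner ends up better than 90% confident in the imposter hypothesis after the first observation, which is why Coltheart can call the initial inference “rational”.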

This theory has some trouble explaining why patients are still able to update about other situations, but Coltheart speculates that maybe the belief evaluation system is weakened but not totally broken, and can deal with anything except the ceaseless stream of contradictory endogenous information.


McKay makes an excellent critique of several questionable assumptions of this theory.

First, is the Capgras hypothesis ever plausible? Coltheart et al pretend that the prior is 1/100, but this implies that there is a base rate of your spouse being an imposter one out of every hundred times you see her (or perhaps one out of every hundred people has a fake spouse), either of which is preposterous. No reasonable person could entertain the Capgras hypothesis even for a second, let alone for long enough that it becomes their working hypothesis and develops immunity to further updating from the broken RDPC.
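McKay’s point is quantitative: with any remotely sane prior, the same 1000:1 likelihood ratio gets nowhere near belief. A sketch, where the 10⁻⁹ prior is an arbitrary stand-in for “preposterously unlikely”, not a measured base rate:

```python
def posterior_prob(prior, likelihood_ratio):
    """Update a prior probability by a likelihood ratio (odds form of Bayes)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A sane prior for "my spouse has been replaced by an identical imposter"
# is vanishingly small; 1e-9 is an illustrative stand-in.
sane_prior = 1e-9
lr = 1000  # same likelihood ratio as before

p = posterior_prob(sane_prior, lr)
print(f"posterior with a sane prior: {p:.2e}")  # on the order of 1e-6
```

Even after the strong evidence of the missing emotional response, the hypothesis stays around one-in-a-million: nowhere near “working hypothesis” territory, which is McKay’s objection to the 1/100 figure.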

Second, there’s no evidence that the ventromedial patients—the ones who lose face-related emotions but don’t develop the Capgras delusion—once had the Capgras delusion but then successfully updated their way out of it. They just never develop the delusion to begin with.

McKay keeps the Bayesian model, but for him the second factor is not a deficit in updating in general, but a deficit in the use of priors. He lists two important criteria for reasonable belief: “explanatory adequacy” (what standard Bayesians call the likelihood ratio; the new data must be more likely if the new belief is true than if it is false) and “doxastic conservatism” (what standard Bayesians call the prior; the new belief must be reasonably likely to begin with given everything else the patient knows about the world).

Delusional patients with damage to their RDPC lose their ability to work with priors and so abandon all doxastic conservatism, essentially falling into what we might term the Super Base Rate Fallacy. For them the only important criterion for a belief is explanatory adequacy. So when they notice their spouse’s face no longer elicits any emotion, they decide that their spouse is not really their spouse at all. This does a great job of explaining the observed data—maybe the best job it’s possible for an explanation to do. Its only minor problem is that it has a stupendously low prior, and this doesn’t matter because they are no longer able to take priors into account.

This also explains why the delusional belief is impervious to new evidence. Suppose the patient’s spouse tells personal details of their honeymoon that no one else could possibly know. There are several possible explanations: the patient’s spouse really is the patient’s spouse, or (says the left-brain Apologist) the patient’s spouse is an alien who was able to telepathically extract the relevant details from the patient’s mind. The telepathic alien imposter hypothesis has great explanatory adequacy: it explains why the person looks like the spouse (the alien is a very good imposter), why the spouse produces no emotional response (it’s not the spouse at all) and why the spouse knows the details of the honeymoon (the alien is telepathic). The “it’s really your spouse” explanation only explains the first and the third observations. Of course, we as sane people know that the telepathic alien hypothesis has a very low base rate plausibility because of its high complexity and violation of Occam’s Razor, but these are exactly the factors that the RDPC-damaged2 patient can’t take into account. Therefore, the seemingly convincing new evidence of the spouse’s apparent memories only suffices to help the delusional patient infer that the imposter is telepathic.

The Super Base Rate Fallacy can explain the other delusional states as well. I recently met a patient who was, indeed, convinced the CIA were after her; of note she also had extreme anxiety to the point where her arms were constantly shaking and she was hiding under the covers of her bed. CIA pursuit is probably the best possible reason to be anxious; the only reason we don’t use it more often is how few people are really pursued by the CIA (well, as far as we know). My mentor warned me not to try to argue with the patient or convince her that the CIA wasn’t really after her, as (she said from long experience) it would just make her think I was in on the conspiracy. This makes sense. “The CIA is after you and your doctor is in on it” explains both anxiety and the doctor’s denial of the CIA very well; “The CIA is not after you” explains only the doctor’s denial of the CIA. For anyone with a pathological inability to handle Occam’s Razor, the best solution to a challenge to your hypothesis is always to make your hypothesis more elaborate.


Although I think McKay’s model is a serious improvement over its predecessors, there are a few loose ends that continue to bother me.

“You have brain damage” is also a theory with perfect explanatory adequacy. If one were to explain the Capgras delusion to Capgras patients, it would provide just as good an explanation for their odd reactions as the imposter hypothesis. Although the patient might not be able to appreciate its decreased complexity, they should at least remain indifferent between the two hypotheses. I’ve never read of any formal study of this, but given that someone must have tried explaining the Capgras delusion to Capgras patients I’m going to assume it doesn’t work. Why not?

Likewise, how come delusions are so specific? It’s impossible to convince someone who thinks he is Napoleon that he’s really just a random non-famous mental patient, but it’s also impossible to convince him he’s Alexander the Great (at least I think so; I don’t know if it’s ever been tried). But him being Alexander the Great is also consistent with his observed data and his deranged inference abilities. Why decide it’s the CIA who’s after you, and not the KGB or Bavarian Illuminati?

Why is the failure so often limited to failed inference from mental states? That is, if a Capgras patient sees it is raining outside, the same process of base rate avoidance that made her fall for the Capgras delusion ought to make her think she’s been transported to the rainforest or something. This happens in polythematic delusion patients, where anything at all can generate a new delusion, but not in those with monothematic delusions like Capgras. There must be some fundamental difference between how one draws inferences from mental states versus everything else.

This work also raises the question of whether one can consciously use System II Bayesian reasoning to argue oneself out of a delusion. It seems improbable, but I recently heard about an n=1 personal experiment of a rationalist with schizophrenia who successfully used Bayes to convince themselves that a delusion (or possibly hallucination; the story was unclear) was false. I don’t have their permission to post their story here, but I hope they’ll appear in the comments.


1: I left out discussion of the Alien Hand Syndrome, even though it was in my sources, because I believe it’s more complicated than a simple delusion. There’s some evidence that the alien hand actually does move independently; for example it will sometimes attempt to thwart tasks that the patient performs voluntarily with their good hand. Some sort of “split brain” issues seem like a better explanation than simple Mind Projection.

2: The right dorsolateral prefrontal cortex also shows up in dream research, where it tends to be one of the parts of the brain shut down during dreaming. This provides a reasonable explanation of why we don’t notice our dreams’ implausibility while we’re dreaming them—and Eliezer specifically mentions he can’t use priors correctly in his dreams. It also highlights some interesting parallels between dreams and the monothematic delusions. For example, the typical “And then I saw my mother, but she was also somehow my fourth grade teacher at the same time” effect seems sort of like Capgras and Fregoli. Even more interestingly, the RDPC gets switched on during lucid dreaming, providing an explanation of why lucid dreamers are able to reason normally in dreams. Because lucid dreaming also involves a sudden “switching on” of “awareness”, this makes the RDPC a good target area for consciousness research.