Craving, suffering, and predictive processing (three characteristics series)

This is the third post of the “a non-mystical explanation of insight meditation and the three characteristics of existence” series. I originally intended this post to more closely connect no-self and unsatisfactoriness, but then decided to focus on unsatisfactoriness in this post and relate it to no-self in the next one.

Unsatisfactoriness

In the previous post, I discussed some of the ways that the mind seems to construct a notion of a self. In this post, I will talk about a specific form of motivation, which Buddhism commonly refers to as craving (taṇhā in the original Pali). Some discussions distinguish between craving (in the sense of wanting positive things) and aversion (wanting to avoid negative things); this article uses the definition where both desire and aversion are considered subtypes of craving.

My model is that craving is generated by a particular set of motivational subsystems within the brain. Craving is not the only form of motivation that a person has, but it normally tends to be the loudest and most dominant. As a form of motivation, craving has some advantages:

  • People tend to experience a strong craving to pursue positive states and avoid negative states. If they had less craving, they might not do this with equal zeal.

  • Craving tends to be automatic and visceral. A strong craving to eat when hungry may cause a person to get food when they need it, even if they do not intellectually understand the need to eat.

At the same time, craving also has a number of disadvantages:

  • Craving superficially looks like it cares about outcomes. However, it actually cares about positive or negative feelings (valence). This can lead to behaviors that are akin to wireheading in that they suppress the unpleasant feeling while doing nothing about the problem. If thinking about death makes you feel unpleasant and going to the doctor reminds you of your mortality, you may avoid doctors—even if this actually increases your risk of dying.

  • Craving narrows your perception, making you only pay attention to things which seem immediately relevant for your craving. For example, if you have a craving for sex and go to a party with the goal of finding someone to sleep with, you may see everyone only in terms of “will sleep with me” or “will not sleep with me”. This may not be the best possible way of classifying everyone you meet.

  • Strong craving may cause premature exploitation. If you have a strong craving to achieve a particular goal, you may not want to do anything that looks like moving away from it, even if that would actually help you achieve it better. For example, if you intensely crave a feeling of accomplishment, you may get stuck playing video games that make you feel like you are accomplishing something, even if there is something else you could do that would be more fulfilling in the long term.

  • Multiple conflicting cravings may cause you to thrash around in an unsuccessful attempt to fulfill all of them. If you have a craving to get your toothache fixed, but also a craving to avoid dentists, you may put off the dentist visit even as you continue to suffer from your toothache.

  • Craving seems to act in part by creating self-fulfilling prophecies: making you strongly believe that you are going to achieve something, so as to cause you to do it. The stronger the craving, the stronger the false beliefs injected into your consciousness. This may warp your reasoning in all kinds of ways: updating to believe an unpleasant fact may subjectively feel like you are allowing that fact to become true by believing in it, incentivizing you to come up with ways to avoid believing in it.

  • Finally, although craving is often motivated by a desire to avoid unsatisfactory experiences, it is actually the very thing that causes dissatisfaction in the first place. Craving assumes that negative feelings are intrinsically unpleasant, when in reality they only become unpleasant when craving resists them.

Given all of these disadvantages, it may be a good idea to try to shift one’s motivation to be more driven by subsystems that are not motivated by craving. It seems to me that everything that can be accomplished via craving can in principle be accomplished by non-craving-based motivation as well.

Fortunately, there are several ways of achieving this. For one, a craving for some outcome X tends to implicitly involve at least two assumptions:

  1. achieving X is necessary for being happy or avoiding suffering

  2. one cannot achieve X except by having a craving for it

Both of these assumptions are false, but subsystems associated with craving have a built-in bias to selectively sample evidence which supports these assumptions, making them frequently feel compelling. Still, it is possible to give the brain evidence which lets it know that these assumptions are wrong: that it is possible to achieve X without having craving for it, and that one can feel good regardless of achieving X.

Predictive processing and binocular rivalry

I find that a promising way of looking at unsatisfactoriness and craving and their impact on decision-making comes from the predictive processing (PP) model of the brain. My claim is not that craving works exactly like this, but something roughly like this seems like a promising analogy.

Good introductions to PP include this book review as well as the actual book in question… but for the purposes of this discussion, you really only need to know two things:

  • According to PP, the brain is constantly attempting to find a model of the world (or hypothesis) that would both explain and predict the incoming sensory data. For example, if I upset you, my brain might predict that you are going to yell at me next. If the next thing that I hear is you yelling at me, then the prediction and the data match, and my brain considers its hypothesis validated. If you do not yell at me, then the predicted and experienced sense data conflict, sending off an error signal to force a revision to the model.

  • Besides changing the model, another way in which the brain can react to reality not matching the prediction is by changing reality. For example, my brain might predict that I am going to type a particular sentence, and then fulfill that prediction by moving my fingers so as to write that sentence. PP goes so far as to claim that this is the mechanism behind all of our actions: a part of your brain predicts that you are going to do something, and then you do it so as to fulfill the prediction.
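
These two ideas can be illustrated with a toy sketch. This is not a model of the brain, just a minimal restatement of the two bullets above; all function names and numbers are invented for illustration.

```python
def update_model(predicted, observed, belief, learning_rate=0.5):
    """Perceptual inference: revise the belief when prediction and data mismatch."""
    error = observed - predicted
    return belief + learning_rate * error  # the error signal forces a revision


def act_to_fulfill(predicted, observed, move):
    """Active inference: change reality (via action) until it matches the prediction."""
    while observed != predicted:
        observed = move(observed)  # act on the world instead of revising the model
    return observed


# Perception: I predicted you would yell (1.0) but you did not (0.0), so I revise my belief.
belief = update_model(predicted=1.0, observed=0.0, belief=1.0)

# Action: my brain predicts my hand will end up at position 3, so I move it there.
position = act_to_fulfill(predicted=3, observed=0, move=lambda x: x + 1)
```

The point of the contrast: the same mismatch between prediction and reality can be resolved either by changing the model or by changing the world.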

Next I am going to say a few words about a phenomenon called binocular rivalry and how it is interpreted within the PP paradigm. I promise that this is going to be relevant for the topic of craving and suffering in a bit, so please stay with me.

Binocular rivalry, first discovered in 1593 and extensively studied since then, is what happens when your left eye is shown one picture (e.g. an image of Isaac Newton), and your right eye is shown another (e.g. an image of a house). People report that their experience keeps alternating between seeing Isaac Newton and seeing a house. They might also see a brief mashup of the two, but such Newton-houses are short-lived and quickly fall apart before settling to a stable image of either Newton or a house.

Image credit: Schwartz et al. (2012), Multistability in perception: binding sensory modalities, an overview. Philosophical Transactions of the Royal Society B, 367, 896-905.

Predictive processing explains what’s happening as follows. The brain is trying to form a stable hypothesis of what exactly the image data that the eyes are sending represents: is it seeing Newton, or is it seeing a house? Sometimes the brain briefly considers the hybrid hypothesis of a Newton-house mashup, but this is quickly rejected: faces and houses do not exist as occupying the same place at the same scale at the same time, so this idea is clearly nonsensical. (At least, nonsensical outside highly unnatural and contrived experimental setups that psychologists subject people to.)

Your conscious experience alternating between the two images reflects the brain switching between the hypotheses of “this is Isaac Newton” and “this is a house”; the currently-winning hypothesis is simply what you experience reality as.

Suppose that the brain ends up settling on the hypothesis of “I am seeing Isaac Newton”; this matches the input from the Newton-seeing eye. As a result, there is no error signal that would arise from a mismatch between the hypothesis and the Newton-seeing eye’s input. For a moment, the brain is satisfied that it has found a workable answer.

However, if one really was seeing Isaac Newton, then the other eye should not keep sending an image of a house. The hypothesis and the house-seeing eye’s input do have a mismatch, kicking off a strong error signal which lowers the brain’s confidence in the hypothesis of “I am seeing Isaac Newton”.

The brain goes looking for a hypothesis which would better satisfy the strong error signal… and then finds that the hypothesis of “I am seeing a house” serves to entirely quiet the error signal from the house-seeing eye. Success?

But even as the brain settles on the hypothesis of “I am seeing a house”, this then contradicts the input coming from the Newton-seeing eye.

The brain is again only momentarily satisfied, before the incoming error signal from the hypothesis/Newton-eye mismatch drives down the probability of the “I am seeing a house” hypothesis, causing the brain to eventually go back to the “I am seeing Isaac Newton” hypothesis… and then back to seeing a house, and then to seeing Newton, and...

One way of phrasing this is that there are two subsystems, each of which is transmitting a particular set of constraints (about seeing Newton and a house). The brain is then trying and failing to find a hypothesis which would fulfill both sets of constraints, while also respecting everything else that it knows about the world.
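
The alternation dynamic described above can be caricatured as a few lines of code. This is only a toy simulation of the verbal story, not a claim about neural implementation; the decay rate and confidence values are arbitrary illustrative numbers.

```python
def rivalry(steps, decay=0.3, start_confidence=1.0):
    """Simulate rivalry: the winning hypothesis explains one eye's input,
    accumulates error from the other eye, and is abandoned when its
    confidence is exhausted."""
    hypothesis = "Newton"          # the currently-winning hypothesis
    confidence = start_confidence
    percepts = []
    for _ in range(steps):
        percepts.append(hypothesis)  # conscious experience = winning hypothesis
        # The unexplained eye keeps sending mismatching data: an error signal
        # that drives down confidence in the current hypothesis.
        confidence -= decay
        if confidence <= 0:
            # Switch to the hypothesis that silences the other eye's error.
            hypothesis = "house" if hypothesis == "Newton" else "Newton"
            confidence = start_confidence
    return percepts


percept_sequence = rivalry(8)  # alternates: a run of "Newton", then a run of "house"
```

Neither hypothesis can quiet both error signals at once, so the system oscillates forever instead of converging.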

As I will explain next, my feeling is that something similar is going on with unsatisfactoriness. Craving creates constraints about what the world should be like, and the brain tries to find an action which would fulfill all of the constraints, while also taking into account everything else that it knows about the world. Suffering/unsatisfactoriness emerges when all of the constraints are impossible to fulfill, either because achieving them takes time, or because the brain is unable to find any scenario that could fulfill all of them even in theory.

Predictive processing and psychological suffering

There are two broad categories of suffering: mental and physical discomfort. Let’s start with the case of psychological suffering, as it seems most directly analogous to what we just covered.

Let’s suppose that I have broken an important promise that I have made to a friend. I feel guilty about this, and want to confess what I have done. We might say that I have a craving to avoid the feeling of guilt, and the associated craving subsystem sends a prediction to my consciousness: I will stop feeling guilty.

In the previous discussion, an inference mechanism in the brain was looking for a hypothesis that would satisfy the constraints imposed by the sensory data. In this case, the same thing is happening, but

  • the hypothesis that it is looking for is a possible action that I could take, that would lead to the constraint being fulfilled

  • the sensory data is not actually coming from the senses, but is internally generated by the craving and represents the outcome that the craving subsystem would like to see realized

My brain searches for a possible world that would fulfill the provided constraints, and comes up with the idea of just admitting the truth of what I have done. It predicts that if I were to do this, I would stop feeling guilty over not admitting my broken promise. This satisfies the constraint of not feeling guilty.

However, as my brain further predicts what it expects to happen as a consequence, it notes that my friend will probably get quite angry. This triggers another kind of craving: to not experience the feeling of getting yelled at. This generates its own goal/prediction: that nobody will be angry with me. This acts as a further constraint for the plan that the brain needs to find.

As the constraint of “nobody will be angry at me” seems incompatible with the plan of “I will admit the truth”, this generates an error signal, driving down the probability of this plan. My brain abandons this plan, and then considers the alternative plan of “I will just stay quiet and not say anything”. This matches the constraint of “nobody will be angry at me” quite well, driving down the error signal from that particular plan/constraint mismatch… but then, if I don’t say anything, I will continue feeling guilty.

The mismatch with the constraint of “I will stop feeling guilty” drives up the error signal, causing the “I will just stay quiet” plan to be abandoned. At worst, my mind may find it impossible to find any plan which would fulfill both constraints, keeping me in an endless loop of alternating between two unviable scenarios.
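
The deadlock has the same shape as the rivalry loop, and can be sketched the same way. The plans, their predicted effects, and the constraints below are just the hypothetical ones from the confession example.

```python
# Each plan is mapped to whether it fulfills each craving-projected constraint.
plans = {
    "admit the truth": {"I stop feeling guilty": True, "nobody is angry at me": False},
    "stay quiet":      {"I stop feeling guilty": False, "nobody is angry at me": True},
}

constraints = ["I stop feeling guilty", "nobody is angry at me"]


def viable(plan_effects, constraints):
    # A plan survives only if it generates no error signal for any constraint.
    return all(plan_effects[c] for c in constraints)


survivors = [plan for plan, effects in plans.items() if viable(effects, constraints)]
# No plan fulfills both constraints, so the search keeps cycling between
# two unviable scenarios, just as described above.
```

Each plan silences one constraint's error signal while triggering the other's, which is exactly the structure that kept the rivalry simulation oscillating.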

There are some interesting aspects about the phenomenology of such a situation, which feel like they fit the PP model quite well. In particular, it may feel like if I just focus on a particular craving enough, thinking about my desired outcome hard enough will make it true.

Recall that under the PP framework, goals happen because a part of the brain assumes that they will happen, after which it changes reality to make that belief true. So focusing really hard on a craving for X makes it feel like X will become true, because the craving is literally rewriting an aspect of my subjective reality to make me think that X will become true.

When I focus hard on the craving, I am temporarily guiding my attention away from the parts of my mind which are pointing out the obstacles in the way of X coming true. That is, those parts have less of a chance to incorporate their constraints into the plan that my brain is trying to develop. This momentarily reduces the pressure pushing away from this plan, making it seem more plausible that the desired outcome will in fact become real.

Conversely, letting go of this craving may feel like it is literally making the undesired outcome more real, rather than like I am coming more to terms with reality. This is most obvious in cases where one has a craving for an outcome that is impossible for certain, such as in the case of grieving about a friend’s death. Even after it is certain that someone is dead, there may still be persistent thoughts of if only I had done X, with an implicit additional flavor of if I just want to have done X really hard, things will change, and I can’t stop focusing on this possibility because my friend needs to be alive.

In this form, craving may lead to all kinds of rationalization and biased reasoning: a part of your mind is literally making you believe that X is true, because it wants you to find a strategy where X is true. This hallucinated belief may constrain all of your plans and models about the world in the same sense as getting direct sensory evidence about X being true would constrain your brain’s models. For example, if I have a very strong urge to believe that someone is interested in me, then this may cause me to interpret any of their words and expressions in a way compatible with this belief, regardless of how implausible and far-reaching a distortion this requires.

The case of physical pain

Similar principles apply to the case of physical pain.

We should first note that pain does not necessarily need to be aversive: for example, people may enjoy the pain of exercise, hot spices or sexual masochism. Morphine may also have an effect where people report that they still experience the pain but no longer mind it.

And, relevant for our topic, people practicing meditation find that shifting their attention towards pain can make it less aversive. The meditation teacher Shinzen Young writes that

… pain is one thing, and resistance to the pain is something else, and when the two come together you have an experience of suffering, that is to say, ‘suffering equals pain multiplied by resistance.’ You’ll be able to see that’s true not only for physical pain, but also for emotional pain and it’s true not only for little pains but also for big pains. It’s true for every kind of pain no matter how big, how small, or what causes it. Whenever there is resistance there is suffering. As soon as you can see that, you gain an insight into the nature of “pain as a problem” and as soon as you gain that insight, you’ll begin to have some freedom. You come to realize that as long as we are alive we can’t avoid pain. It’s built into our nervous system. But we can certainly learn to experience pain without it being a problem. (Young, 1994)
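
Young's formulation is literally multiplicative, so it can be written down directly. The numeric scales here are arbitrary; the only claim being illustrated is that zero resistance means zero suffering, no matter how large the pain.

```python
def suffering(pain, resistance):
    # Shinzen Young's formulation: suffering equals pain multiplied by resistance.
    return pain * resistance


# The same pain produces very different suffering depending on resistance:
high = suffering(8, 1.0)  # strong resistance: high suffering
none = suffering(8, 0.0)  # full equanimity: pain without suffering
```

On this picture, equanimity does not reduce the pain term at all; it works entirely on the resistance term.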

What does it mean to say that resisting pain creates suffering?

In the discussion about binocular rivalry, we might have said that when the mind settled on a hypothesis of seeing Isaac Newton, this hypothesis was resisted by the sensory data coming from the house-seeing eye. The mind would have settled on the hypothesis of “I am seeing Isaac Newton”, if not for that resistance. Likewise, in the preceding discussion, the decision to admit the truth was resisted by the desire to not get yelled at.

Suppose that you have a sore muscle, which hurts whenever you put weight on it. Like sensory data coming from your eyes, this constrains the possible interpretations of what you might be experiencing: your brain might settle on the hypothesis of “I am feeling pain”.

But the experience of this hypothesis then triggers a resistance to that pain: a craving subsystem wired to detect pain and resist it by projecting a form of internally-generated sense data, effectively claiming that you are not in pain. There are now again two incompatible streams of data that need to be reconciled, one saying that you are in pain, and another which says that you are not.

In the case of binocular rivalry, both of the streams were generated by sensory information. In the discussion about psychological suffering, both of the streams were generated by craving. In this case, craving generates one of the streams and sensory information generates the other.

On the left, a persistent pain signal is strong enough to dominate consciousness. On the right, a craving for not being in pain attempts to constrain consciousness so that it doesn’t include the pain.

Now if you stop putting weight on the sore muscle, the pain goes away, fulfilling the prediction of “I am not in pain”. As soon as your brain figures this out, your motor cortex can incorporate the craving-generated constraint of “I will not be in pain” into its planning. It generates different plans of how to move your body, and whenever it predicts that one of them would violate the constraint of “I will not be in pain”, it will revise its plan. The end result is that you end up moving in ways that avoid putting weight on your sore muscle. If you miscalculate, the resulting pain will cause a rapid error signal that causes you to adjust your movement again.

What if the pain is more persistent, and bothers you no matter how much you try to avoid moving? Or if the circumstances force you to put weight on the sore muscle?

In that case, the brain will continue looking for a possible hypothesis that would fulfill the constraint of “I am not in pain”. For example, maybe you have previously taken painkillers that have helped with your pain. In that case, your mind may seize upon the hypothesis that “by taking painkillers, my pain will cease”.

As your mind predicts the likely consequences of taking painkillers, it notices that in this simulation, the constraint of “I am not in pain” gets fulfilled, driving down the error signal between the hypothesis and the “I am not in pain” constraint. However, if the brain could suppress the craving for pain relief merely by imagining a scenario where the pain was gone, then it would never need to take any actions: it could just hallucinate pleasant states. Helping keep it anchored in reality is the fact that simply imagining the painkillers has not done anything to the pain signal itself: the imagined state does not match your actual sense data. There is still an error signal generated by the mismatch between the imagined “I have taken painkillers and am free of pain” scenario, and the fact that the pain is not gone yet.

Your brain imagines a possible experience: taking painkillers and being free of pain. This imagined scenario fulfills the constraint of “I have no pain”. However, it does not fulfill the constraint of actually matching your sense data: you have not yet taken painkillers and are still in pain.

Fortunately, if painkillers are actually available, your mind is not locked into a state where the two constraints of “I’m in pain” and “I’m not in pain” remain equally impossible to achieve. It can take actions—such as making you walk towards the medicine cabinet—that get you closer towards being able to fulfill both of these constraints.

There are studies suggesting that physical pain and psychological pain share similar neural mechanisms [citation]. And in meditation, one may notice that psychological discomfort and suffering involve avoiding unpleasant sensations in the same way as physical pain does; the same mechanism has been recruited for more abstract planning.

When the brain predicts that a particular experience would produce an unpleasant sensation, craving resists that prediction and tries to find another way. Similarly, if the brain predicts that something will not produce a pleasant sensation, craving may also resist that aspect of reality.

Now, this process as described has a structural equivalence to binocular rivalry, but as far as I know, binocular rivalry does not involve any particular discomfort. Suffering obviously does.

Being in pain is generally bad: it is usually better to try to avoid ending up in painful states, as well as try to get out of painful states once you are in them. This is also true for other states, such as hunger, that do not necessarily feel painful, but still have a negative emotional tone. Suppose that whenever craving generates a self-fulfilling prediction which resists your direct sensory experience, this generates a signal we might call “unsatisfactoriness”.

The stronger the conflict between the experience and the craving, the stronger the unsatisfactoriness—so that a mild pain that is easy to ignore only causes a little unsatisfactoriness, and an excruciating pain that generates a strong resistance causes immense suffering. The brain is then wired to use this unsatisfactoriness as a training signal, attempting to avoid situations that have previously included high levels of it, and to keep looking for ways out if it currently has a lot of it.

It is also worth noting what it means for you to be paralyzed by two strong, mutually opposing cravings. Consider again the situation where I am torn between admitting the truth to my friend, and staying quiet. We might think that this is a situation where the overall system is uncertain of the correct course of action: some subsystems are trying to force the action of confronting the situation, others are trying to force the action of avoiding it. Both courses of action are predicted to lead to some kind of loss.

In general, it is a bad thing if a system ends up in a situation where it has to choose between two different kinds of losses, and has high internal uncertainty of the right action. A system should avoid such dilemmas, either by avoiding the situations themselves or by finding a way to reconcile the conflicting priorities.

Craving-based and non-craving-based motivation

What I have written so far might be taken to suggest that craving is a requirement for all action and planning. However, the Buddhist claim is that craving is actually just one of at least two different motivational systems in the brain. Given that neuroscience suggests the existence of at least three different motivational systems, this should not seem particularly implausible.

Let’s take another look at the types of processes related to binocular rivalry versus craving.

Craving acts by actively introducing false beliefs into one’s reasoning. If craving could just do this completely uninhibited, rewriting all experience to match one’s desires, nobody would ever do anything: they would just sit still, enjoying a craving-driven hallucination of a world where everything was perfect.

In contrast, in the case of binocular rivalry, no system is feeding the reasoning process any false beliefs: all the constraints emerge directly from the sense data and previous life-experience. To the extent that the system can be said to have a preference over either the “I am seeing a house” or the “I am seeing Isaac Newton” hypothesis, it is just “if seeing a house is the most likely hypothesis, then I prefer to see a house; if seeing Newton is the most likely hypothesis, then I prefer to see Newton”. The computation does not have an intrinsic attachment to any particular outcome, nor will it hallucinate a particular experience if it has no good reason to.

Likewise, it seems like there are modes of doing and being which are similar in that one is focused on process rather than outcome: taking whatever actions are best-suited for the situation at hand, regardless of what their outcome might be. In these situations, little unsatisfactoriness seems to be present.

In an earlier post, I discussed a proposal where an autonomously acting robot has two decision-making systems. The first system just figures out whatever actions would maximize its rewards and tries to take those actions. The second “Blocker” system tries to predict whether or not a human overseer would approve of any given action, and prevents the first system from doing anything that would be disapproved of. We then have two evaluation systems: “what would bring the maximum reward” (running on a lower priority) and “would a human overseer approve of a proposed action” (taking precedence in case of a disagreement).
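
The two-system structure can be sketched directly. The actions, reward values, and approval rule below are invented for illustration; the only point is the precedence relation between the two evaluators.

```python
def reward_system(actions, reward):
    """First system: rank actions purely by predicted reward."""
    return max(actions, key=reward)


def choose(actions, reward, approved):
    """The "Blocker" vetoes any action the overseer would disapprove of;
    its veto takes precedence over the reward ranking."""
    allowed = [a for a in actions if approved(a)]
    return reward_system(allowed, reward) if allowed else None


actions = ["grab reward fast", "ask overseer first", "do nothing"]
reward = {"grab reward fast": 10, "ask overseer first": 6, "do nothing": 0}.get
approved = lambda a: a != "grab reward fast"  # hypothetical overseer judgment

choice = choose(actions, reward, approved)  # the Blocker rules out the top-reward action
```

Note that the Blocker never proposes actions of its own; like craving in the analogy that follows, it only constrains what the lower-priority system is allowed to do.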

It seems to me that there is something similar going on with craving. There are processes which are neutrally just trying to figure out the best action; and when those processes hit upon particularly good or bad outcomes, craving is formed in an attempt to force the system into repeating or avoiding those outcomes in the future.

Suppose that you are in a situation where the best possible course of action only has a 10% chance of getting you through alive. If you are in a non-craving-driven state, you may focus on getting at least that 10% chance together, since that’s the best that you can do.

In contrast, the kind of behavior that is typical for craving is realizing that you have a significant chance of dying, deciding that this thought is completely unacceptable, and refusing to go on until you have found an approach that makes the thought of death less stark.

Both systems have their upsides and downsides. If it is true that a 10% chance of survival really is the best that you can do, then you should clearly just focus on getting the probability even that high. A craving which causes trouble by thrashing around is only going to make things worse. On the other hand, maybe this estimate is flawed and you could achieve a higher probability of survival by doing something else. In that case, the craving absolutely refusing to go on until you have figured out something better might be the right response.

There is also another major difference, in that craving does not really care about outcomes. Rather, it cares about pursuing positive and avoiding negative feelings. In the case of avoiding death, craving-oriented systems are primarily reacting to the thought of death… which may make them reject even plans which would reduce the risk of death, if those plans involved needing to think about death too much.

This becomes particularly obvious in the case of things like going to the dentist for an operation you know will be unpleasant. You may find yourself highly averse to going, as you crave the comfort of not needing to suffer from the unpleasantness. At the same time, you also know that the operation will benefit you in the long term: any unpleasantness will just be a passing state of mind, rather than permanent damage. But avoiding unpleasantness—including the very thought of experiencing something unpleasant—is just what craving is about.

In contrast, if you are in a state of equanimity with little craving, you still recognize the thoughts of going to the dentist as having negative valence, but this negative valence does not bother you, because you do not have a craving to avoid it. You can choose whatever option seems best, regardless of what kind of content this ends up producing in your consciousness.

Of course, choos­ing cor­rectly re­quires you to ac­tu­ally know what is best. Ex­pert med­i­ta­tors have been known to some­times ig­nore ex­treme phys­i­cal pain that should have caused them to seek med­i­cal aid. And they prob­a­bly would have sought help, if not for their abil­ity to drop their re­sis­tance to pain and ex­pe­rience it with ex­treme equa­nim­ity.

Nega­tive-valence states tend to cor­re­late with states which are bad for the achieve­ment of our goals. That is the rea­son why we are wired to avoid them. But the cor­re­la­tion is only par­tial, so if you fo­cus too much on avoid­ing un­pleas­ant­ness, you are fal­ling vic­tim to Good­hart’s Law: op­ti­miz­ing a mea­sure so much that you sac­ri­fice the goals that the mea­sure was sup­posed to track. Equa­nim­ity gives you the abil­ity to ig­nore your con­sciously ex­pe­rienced suffer­ing, so you don’t need to pay ad­di­tional men­tal costs for tak­ing ac­tions which fur­ther your goals. This can be use­ful, if you are strate­gic about ac­tu­ally achiev­ing your goals.

But while Goodharting on a measure is a failure mode, so is ignoring the measure entirely. Unpleasantness does still correlate with things that make it harder to realize your values, and the need to avoid displeasure normally operates as an automatic feedback mechanism. It is possible to cultivate high equanimity and weaken this mechanism without being smart about it, doing nothing to develop alternative mechanisms in its place. In that case you are just trading Goodhart's Law for the opposite failure mode.

Some other disadvantages of craving

At the beginning of this post, I listed a few disadvantages of craving which I have not yet discussed explicitly. Let's take a quick look at those.

Craving narrows your perception, making you pay attention only to things that seem immediately relevant to your craving.

In predictive processing, attention is conceptualized as giving increased weight to those features of the sensory data that seem most useful for making successful predictions about the task at hand. If you have a strong craving to achieve a particular outcome, your mind will focus on those aspects of the sensory data that seem useful for realizing the craving.
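The weighting idea above can be sketched in a few lines. This is a simplified, illustrative gloss on precision-weighting, not an implementation of any particular predictive-processing model: each feature's prediction error is scaled by an attention weight before it updates the prediction, so features irrelevant to the craved outcome barely register even when they are equally surprising. The feature names and all numbers are invented for illustration.

```python
# Minimal sketch of attention as weighting of prediction errors: a
# feature's error is scaled by the attention given to it before it
# updates the model. All values are illustrative.

def weighted_update(prediction, observation, attention, lr=0.5):
    """Move a predicted feature value toward the observation,
    scaled by the attention weight given to that feature."""
    error = observation - prediction
    return prediction + lr * attention * error

features = {
    # feature: (prediction, observation, attention under strong craving)
    "food_location": (0.0, 1.0, 0.9),   # craving-relevant, heavily weighted
    "friend_mood":   (0.0, 1.0, 0.05),  # equally surprising, but ignored
}

for name, (pred, obs, att) in features.items():
    print(name, weighted_update(pred, obs, att))
# food_location jumps to 0.45; friend_mood barely moves to 0.025.
```

Both features produce the same raw prediction error, but the craving-relevant one dominates the update, which is the narrowing-of-perception effect described above.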

Strong craving may cause premature exploitation. If you have a strong craving to achieve a particular goal, you may not want to do anything that looks like moving away from it, even if that detour would actually help you achieve the goal better.

Suppose that you have a strong craving to experience a feeling of accomplishment: the craving is then strongly projecting the constraint “I will feel accomplished” into your planning, causing an error signal whenever you consider a plan that does not fulfill the constraint. A multi-step plan that takes time before it delivers a feeling of accomplishment starts out with you not feeling accomplished. This contradicts the constraint “I will feel accomplished”, causing that plan to be rejected in favor of ones that bring you even some accomplishment right away.
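The constraint-projection dynamic can be made concrete with a toy scorer. This is a hypothetical sketch, not a claim about how the brain actually scores plans: each plan is a sequence of per-step accomplishment levels, the craving emits one unit of error per step that violates the constraint, and planning picks the plan with the least error. The plan names and numbers are invented.

```python
# Toy sketch: craving projects the constraint "I will feel accomplished"
# into planning, generating an error signal for every step below the
# constraint. A delayed-payoff plan accumulates error at each early
# step, so the immediately-gratifying plan wins.

# Each plan is a list of per-step "feeling of accomplishment" levels.
plans = {
    "write a novel":   [0.0, 0.0, 0.0, 0.0, 1.0],  # big payoff, delayed
    "post a hot take": [0.6],                       # small payoff, now
}

def craving_error(plan, constraint=0.5):
    """Total error signal: one unit per step violating the constraint."""
    return sum(1 for step in plan if step < constraint)

chosen = min(plans, key=lambda name: craving_error(plans[name]))
print(chosen)  # post a hot take
```

Note that the multi-step plan ends in a larger payoff, but the per-step error signal never looks past the early steps that violate the constraint, which is exactly the rejection pattern described above.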

Craving and suffering

We might summarize the unsatisfactoriness-related parts of the above as follows:

  • Craving tries to get us into pleasant states of consciousness.

  • But pleasant states of consciousness are those without craving.

  • Thus, there are subsystems trying to get us into pleasant states of consciousness by creating constant craving, which is the exact opposite of a pleasant state.

We can somewhat rephrase this as:

  • The default state of human psychology involves a degree of almost constant dissatisfaction with one's state of consciousness.

  • This dissatisfaction is created by craving.

  • The dissatisfaction can be ended by eliminating craving.

… which, if correct, might be interpreted as roughly matching the first three of Buddhism's Four Noble Truths; the fourth is that the Noble Eightfold Path is a way to end craving.

A more rationalist framing might be that craving essentially acts in a way that resembles wireheading: pursuing pleasure and happiness even at the cost of your ability to impact the world. Reducing the influence of craving makes your motivations less driven by wireheading-like impulses and leaves you more able to see the world clearly even when it is painful. Thus, reducing craving may be valuable even if one does not care about suffering less.

This raises the question: how exactly does one reduce craving? And what does all of this have to do with the self, again?

We'll get back to those questions in the next post.

This is the third post of the “a non-mystical explanation of insight meditation and the three characteristics of existence” series. The next post in the series is “From self to craving”.