The Neuroscience of Desire

Who knows what I want to do? Who knows what anyone wants to do? How can you be sure about something like that? Isn’t it all a question of brain chemistry, signals going back and forth, electrical energy in the cortex? How do you know whether something is really what you want to do or just some kind of nerve impulse in the brain? Some minor little activity takes place somewhere in this unimportant place in one of the brain hemispheres and suddenly I want to go to Montana or I don’t want to go to Montana.

- Don DeLillo, White Noise

Winning at life means achieving your goals, that is, satisfying your desires. As such, it will help to understand how our desires work. (I was tempted to title this article The Hidden Complexity of Wishes: Science Edition!)

Previously, I introduced readers to the neuroscience of emotion (affective neuroscience), and explained that the reward system in the brain has three major components: liking, wanting, and learning. That post discussed ‘liking’ or pleasure. Today we discuss ‘wanting’ or desire.

The birth of neuroeconomics

Much work has been done on the affective neuroscience of desire,1 but I am less interested in desire as an emotion than in desire as a cause of decisions under uncertainty. This latter aspect of desire is mostly studied by neuroeconomics,2 not affective neuroscience.

From about 1880 to 1960, neoclassical economics proposed simple, axiomatic models of human choice-making focused on the idea that agents make rational decisions aimed at maximizing expected utility. In the 1950s and 60s, however, economists discovered some paradoxes of human behavior that violated the axioms of these models.3 In the 70s and 80s, psychology launched an even broader attack on these models. For example, while economists assumed that choices among objects should not depend on how they are described (‘descriptive invariance’), psychologists discovered powerful framing effects.4
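
To see concretely what such a violation looks like, here is a minimal Python sketch of the Allais paradox. The payoffs are the standard textbook version and the utility functions are arbitrary illustrations; the point is that the two preference gaps are algebraically identical, so no expected-utility maximizer can prefer 1A and also prefer 2B, yet that is the pattern most people report.

```python
# A minimal sketch of the Allais paradox (Allais, 1953), using the standard
# textbook payoffs. The utility functions are arbitrary illustrations.

def expected_utility(gamble, u):
    """gamble: a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in gamble)

# Experiment 1: a sure $1M versus a gamble with a shot at $5M.
g1A = [(1.00, 1_000_000)]
g1B = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]

# Experiment 2: the same pair with a common 89% chance of $1M stripped out.
g2A = [(0.11, 1_000_000), (0.89, 0)]
g2B = [(0.10, 5_000_000), (0.90, 0)]

for name, u in [("linear", lambda x: x),
                ("concave", lambda x: x ** 0.5),
                ("extremely concave", lambda x: x ** 0.01)]:
    d1 = expected_utility(g1A, u) - expected_utility(g1B, u)
    d2 = expected_utility(g2A, u) - expected_utility(g2B, u)
    # d1 and d2 are identical for any utility function, so an expected-utility
    # maximizer who prefers 1A over 1B must also prefer 2A over 2B. Most people
    # choose 1A and 2B, which is what violates the axioms.
    print(f"{name:>18}: EU(1A)-EU(1B) = {d1:.3f}   EU(2A)-EU(2B) = {d2:.3f}")
```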

In response, the field of behavioral economics began to offer models of human choice-making that fit the experimental data better than simple models of neoclassical economics did.5 Behavioral economists often proposed models that could be thought of as information-processing algorithms, so neuroscientists began looking for evidence of these algorithms in the human brain, and neuroeconomics was born.
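
To give a flavor of what those algorithms look like, here is a minimal sketch of prospect theory’s two central ingredients: a value function defined over gains and losses relative to a reference point, and a nonlinear weighting of probabilities. The functional forms follow Kahneman & Tversky (1979); the specific parameter values are commonly cited later estimates, used here purely for illustration.

```python
# A minimal sketch of prospect theory (Kahneman & Tversky, 1979): value is
# defined over gains and losses, losses loom larger than gains, and small
# probabilities are overweighted. Parameter values are later estimates from
# the literature, used here only for illustration.

ALPHA = 0.88    # curvature of the value function
LAMBDA = 2.25   # loss aversion: losses weigh roughly twice as much as gains
GAMMA = 0.61    # curvature of the probability-weighting function

def value(x):
    """Subjective value of a gain or loss x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

def weight(p):
    """Decision weight attached to an outcome of probability p."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect_value(gamble):
    """gamble: a list of (probability, outcome) pairs."""
    return sum(weight(p) * value(x) for p, x in gamble)

# Loss aversion: a 50/50 bet on +$100 / -$100 comes out negative.
print(round(prospect_value([(0.5, 100), (0.5, -100)]), 1))    # about -30
# Overweighting small probabilities: a 1% shot at $5,000 is valued more
# highly than a sure $50, its expected value.
print(round(prospect_value([(0.01, 5000), (0.99, 0)]), 1))    # about 100
print(round(prospect_value([(1.0, 50)]), 1))                  # about 31
```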

(Warning: the rest of this post assumes some familiarity with microeconomics.)

Valuation and choice in the brain

Despite their differences, models of decision-making from neoclassical economics,6 behavioral economics,7 and even computer science8 share a common conclusion:

Decision makers integrate the various dimensions of an option into a single measure of its idiosyncratic subjective value and then choose the option that is most valuable. Comparisons between different kinds of options rely on this abstract measure of subjective value, a kind of ‘common currency’ for choice. That humans can in fact compare apples to oranges when they buy fruit is evidence for this abstract common scale.9
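
Computationally, the claim is simple. Here is a minimal sketch of the ‘common currency’ idea in Python; the options, attributes, and weights are invented purely for illustration:

```python
# A minimal sketch of the 'common currency' idea: collapse each option's
# dimensions into one scalar of subjective value, then choose the argmax.
# The options, attributes, and weights below are invented for illustration.

options = {
    "apple":  {"sweetness": 0.6, "price": -0.5, "convenience": 0.9},
    "orange": {"sweetness": 0.8, "price": -0.7, "convenience": 0.4},
}

# One idiosyncratic set of weights per decision maker.
weights = {"sweetness": 2.0, "price": 1.5, "convenience": 1.0}

def subjective_value(attributes):
    """Integrate an option's dimensions into a single scalar value."""
    return sum(weights[dim] * level for dim, level in attributes.items())

values = {name: round(subjective_value(attrs), 2)
          for name, attrs in options.items()}
print(values)                         # {'apple': 1.35, 'orange': 0.95}
print(max(values, key=values.get))    # 'apple' wins on the common scale
```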

Though economists tend to claim only that agents act ‘as if’ they use the axioms of economic theory to make decisions,10 there is now surprising evidence that subjective value and economic choice are encoded by particular neurons in the brain.11

More than a dozen studies show that the subjective utility of different goods or actions is encoded on a common scale by the ventromedial prefrontal cortex and the striatum in primates (including humans),12 as is temporal discounting.13 Moreover, the brain tracks forecasted and experienced value, probably for the purpose of learning.14 Researchers have also shown how modulation of a common value signal could account for loss aversion and ambiguity aversion,15 two psychological discoveries that had threatened standard economic models of decision-making. Finally, subjective value is learned via iterative updating (after experience) in dopaminergic neurons.16
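
The learning claim in that last sentence is usually formalized as a reward prediction error update of the kind used in reinforcement learning (Sutton & Barto, 1998; Schultz et al., 1997). Here is a minimal sketch; the reward distribution and learning rate are made up for illustration:

```python
import random

# A minimal sketch of learning subjective value from reward prediction
# errors, the teaching signal dopaminergic neurons are thought to carry
# (Schultz et al., 1997). The reward stream and learning rate are made up.

random.seed(0)
learning_rate = 0.1
value_estimate = 0.0                             # forecasted value of an option

for trial in range(200):
    reward = random.gauss(1.0, 0.3)              # experienced value this trial
    prediction_error = reward - value_estimate   # experienced minus forecasted
    value_estimate += learning_rate * prediction_error

print(round(value_estimate, 2))   # settles near the option's mean reward (~1.0)
```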

Once a common-currency valuation of goods and actions has been performed, how is a choice made between them? Evidence implicates (at least) the lateral prefrontal and parietal cortex in a process that includes neurons encoding probabilistic reasoning.17 Interestingly, while valuation structures encode absolute (and thus transitive) subjective value, choice-making structures “rescale these absolute values so as to maximize the differences between the available options before choice is attempted,”18 perhaps via a normalization mechanism like the one discovered in the visual cortex.19
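
Here is a minimal sketch of one candidate rescaling step, divisive normalization of the sort Heeger (1992) described in visual cortex; the option values and the constant sigma are arbitrary:

```python
# A minimal sketch of divisive normalization applied to subjective values
# before choice, by analogy with Heeger's (1992) model of visual cortex.
# The option values and the saturation constant sigma are arbitrary.

def normalize(values, sigma=1.0):
    """Rescale each absolute value relative to the whole choice set."""
    total = sigma + sum(values)
    return [v / total for v in values]

print([round(v, 2) for v in normalize([4.0, 3.8, 0.5])])    # [0.43, 0.41, 0.05]
print([round(v, 2) for v in normalize([40.0, 38.0, 5.0])])  # [0.48, 0.45, 0.06]
# Scaling every option up tenfold barely changes the normalized pattern:
# the limited firing-rate range is spent on differences among the options
# currently on the menu rather than on their absolute magnitudes.
```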

Beyond these basic conclusions, many open questions and controversies remain.20 The hottest debate today concerns whether different valuation systems encode inconsistent values for the same actions (leading to different conclusions on which action to take),21 or whether different valuation systems contribute to the same final valuation process (leading to a single, unambiguous conclusion on which action to take).22 I think this race is too close to call, though I lean toward the latter model due to the persuasive case made for it by Glimcher (2010).

Despite these open questions, 15 years of neuroeconomics research suggests that an impressive reduction from economics to psychology to neuroscience may be possible, resulting in something like the unified picture shown in Figure 16.1 of Glimcher (2010).23

Self-help

With this basic framework in place, what can the neuroscience of desire tell us about how to win at life?

  1. Wanting is different than liking, and we don’t only want happiness or pleasure.24 Thus, the perfect hedonist might not be fully satisfied. Pay attention to all your desires, not just your desires for pleasure.

  2. In particular, you should subject yourself to novel and challenging activities regularly throughout your life. Doing so keeps your dopamine (motivation) system flowing, because novel and challenging circumstances drive you to act and find solutions, which in turn leads to greater satisfaction than do ‘lazy’ pleasures like sleeping and eating.25

  3. In particular, doing novel and challenging activities with your significant other will help you experience satisfaction together, and improve bonding and intimacy.26

  4. Your brain generates reward signals when experienced value surpasses forecasted value.14 So: lower your expectations and your brain will be pleasantly surprised when things go well. Things going perfectly according to plan is not the norm, so don’t treat it as if it is.

  5. Many of the neurons involved in valuation and choice have stochastic features, meaning that when the subjective utilities of two or more options are similar (represented in the brain by neurons with similar firing rates), we sometimes choose to do something other than the action that has the most subjective utility.27 In other words, we sometimes fail to do what we most want to do, even if standard biases and faults (akrasia, etc.) are considered to be part of the valuation equation. So don’t beat yourself up if you have a hard time choosing between options of roughly equal subjective utility, or if you feel you’ve chosen an option that does not have the greatest subjective utility. (A toy sketch of this kind of noisy choice appears after this list.)
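
Here is the toy sketch for point 5, using a softmax choice rule, one standard way of modeling noisy choice between options with similar values; the utilities and the temperature parameter are made up:

```python
import math

# A toy sketch of stochastic choice: when subjective utilities are nearly
# equal, a noisy (softmax) choice rule picks the 'best' option only slightly
# more often than chance. The utilities and temperature are made up.

def choice_probabilities(utilities, temperature=1.0):
    """Probability of choosing each option under a softmax rule."""
    exps = [math.exp(u / temperature) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

print([round(p, 2) for p in choice_probabilities([2.0, 1.9])])   # [0.52, 0.48]
print([round(p, 2) for p in choice_probabilities([2.0, 0.5])])   # [0.82, 0.18]
# With near-equal utilities the lower-valued option is chosen almost half the
# time; only when the value gap is large does choice reliably track value.
```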

The neuroscience of desire is progressing rapidly, and I have no doubt that we will know much more about it in another five years. In the meantime, it has already produced useful results.

And the neuroscience of pleasure and desire is not only relevant to self-help, of course. In later posts, I will examine the implications of recent brain research for meta-ethics and for Friendly AI.

Notes

1 Berridge (2007); Leyton (2009).

2 Good overviews of neuroeconomics include: Glimcher (2010, 2009); Glimcher et al. (2008); Kable & Glimcher (2009); Glimcher & Rustichini (2004); Camerer et al. (2005); Sanfey et al. (2006); Politser (2008); Montague (2007). Berns (2005) is an overview from a self-help perspective.

3 Most famously, the Allais paradox (Allais, 1953) and the Ellsberg paradox (Ellsberg, 1961). Eliezer wrote three posts on the Allais paradox.

4 Tversky & Kahneman (1981).

5 The most famous example is Prospect Theory (Kahneman & Tversky, 1979).

6 von Neumann & Morgenstern (1944).

7 Kahneman & Tversky (1979).

8 Sutton & Barto (1998).

9 Kable & Glimcher (2009).

10 Friedman (1953); Gul & Pesendorfer (2008).

11 Kable & Glimcher (2009) is a good overview, as are sections 2 and 3 of Glimcher (2010).

12 Kable & Glimcher (2009); Padoa-Schioppa & Assad (2006, 2008); Takahashi et al. (2009); Lau & Glimcher (2008); Samejima et al. (2005); Plassmann et al. (2007); Hare et al. (2008); Hare et al. (2009).

13 Kable & Glimcher (2007); Louie & Glimcher (2010).

14 Rutledge et al. (2010); Delgado (2007); Knutson & Cooper (2005); O’Doherty (2004).

15 Fox & Poldrack (2008); Tom et al. (2007); Levy et al. (2007); Levy et al. (2010).

16 Niv & Montague (2009); Schultz et al. (1997); Tobler et al. (2003, 2005); Waelti et al. (2001); Bayer & Glimcher (2005); Fiorillo et al. (2003, 2008); Kobayashi & Schultz (2008); Roesch et al. (2007); D’Ardenne et al. (2008); Zaghloul et al. (2009); Pessiglione e tal. (2006).

17 For technical reasons, most of this work has been done on the saccadic-control system: Glimcher & Sparks (1992); Basso & Wurtz (1998); Dorris & Munoz (1998); Platt & Glimcher (1999); Yang & Shadlen (2007); Dorris & Glimcher (2004); Sugrue et al. (2004); Shadlen & Newsome (2001); Churchland et al. (2008); Kiani et al. (2008); Wang (2008); Kable & Glimcher (2007); Yu & Dayan (2005). But Glimcher (2010) provides some reasons to think these results will generalize.

18 Kable & Glimcher (2009).

19 Heeger (1992).

20 See Kable & Glimcher (2009), and the final chapter of Glimcher (2010). Neuroeconomists are also beginning to model how game-theoretic calculations occur in the brain: Fehr & Camerer (2007); Lee (2008); Montague & Lohrenz (2007); Singer & Fehr (2005).

21 Balleine et al. (2008); Bossaerts et al. (2009); Daw et al. (2005); Dayan and Balleine (2002); Rangel et al. (2008).

22 Glimcher (2009); Levy et al. (2010).

23 Figure 16.1 from Glimcher (2010).

24 Smith et al. (2009).

25 Berns (2005) provides a popular-level overview of the evidence. Some of the relevant research papers include: Berns et al. (2001); Benjamin et al. (1996); Kempermann et al. (1997).

26 Aron et al. (2000, 2003).

27 See chapters 9 and 10 of Glimcher (2010).

References

Allais (1953). Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école Américaine. Econometrica, 21: 503-546.

Aron, Norman, Aron, McKenna, & Heyman (2000). Couples’ shared participation in novel and arousing activities and experienced relationship quality. Journal of Personality and Social Psychology, 78: 273-283.

Aron, Norman, Aron, & Lewandowski (2003). Shared participation in self- expanding activities: Positive effects on experienced marital quality. In Noller & Feeney (eds.), Marital interaction (pp. 177-196). Cambridge University Press.

Balleine, Daw, & O’Doherty (2009). Multiple forms of value learning and the function of dopamine. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 367-387). Academic Press.

Basso & Wurtz (1998). Modulation of neuronal activity in superior colliculus by changes in target probability. Journal of Neuroscience, 18: 7519–7534.

Bayer & Glimcher (2005). Midbrain dopamine neurons encodea quantitative reward prediction error signal. Neuron, 47: 129–141.

Benjamin, Li, Patterson, Greenberg, Murphy, & Hamer (1996). Population and familial association between the D4 dopamine receptor gene and measures of novelty seeking. Nature Genetics, 12: 81-84.

Berns (2005). Satisfaction: the science of finding true fulfillment. Henry Holt and Co.

Berns, McClure, Pagnoni, & Montague (2001). Predictability modulates human brain response to reward. Journal of Neuroscience, 21: 2793-2798.

Berridge (2007). The debate over dopamine’s role in reward: the case for incentive salience. Psychopharmacology, 191: 391-431.

Bossaerts, Preuschoff, & Hsu (2009). The neurobiological foundations of valuation in human decision-making under uncertainty. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 353–365). Academic Press.

Camerer, Loewenstein, & Prelec (2005). Neuroeconomics: how neuroscience can inform economics. Journal of Economic Literature, 43: 9–64.

Churchland, Kiani, & Shadlen (2008). Decision-making with multiple alternatives. Nature Neuroscience, 11: 693–702.

D’Ardenne, McClure, Nystrom, & Cohen (2008). BOLD responses reflecting dopaminergic signals in the human Ventral Tegmental Area. Science, 319: 1264–1267.

Daw, Niv, & Dayan (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8: 1704–1711.

Dayan & Balleine (2002). Reward, motivation, and reinforcement learning. Neuron, 36: 285–298.

Delgado (2007). Reward-related responses in the human striatum. Annals of the New York Academy of Sciences, 1104: 70–88.

Dorris & Munoz (1998). Saccadic probability influences motorpreparation signals and time to saccadic initiation. Journal of Neuroscience, 18: 7015–7026.

Dorris & Glimcher (2004). Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron 44: 365–378.

Ellsberg (1961). Risk, Ambiguity, and the Savage Axioms. Quarterly Journal of Economics, 75(4): 643–669.

Fehr & Camerer (2007). Social neuroeconomics: The neural circuitry of social preferences. Trends in Cognitive Science, 11: 419–427.

Fiorillo, Tobler, & Schultz (2003). Discrete coding of reward probability and uncertainty by dopamine neurons. Science, 299: 1898–1902.

Fiorillo, Newsome, & Schultz (2008). The temporal precision of reward prediction in dopamine neurons. Nature Neuroscience, 11: 966–973.

Fox & Poldrack (2008). Prospect theory and the brain. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 145-173). Academic Press.

Friedman (1953). The methodology of positive economics. In Friedman, Essays in Positive Economics. Chicago Press.

Glimcher (2009). Neuroscience, Psychology, and Economic Behavior: The Emerging Field of Neuroeconomics. In Tommasi, Peterson, & Nadel (eds.), Cognitive Biology: Evolutionary and Developmental Perspectives on Mind, Brain, and Behavior (pp. 261-287). MIT Press.

Glimcher (2009). Choice: Towards a Standard Back-pocket Model. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 503-521). Academic Press.

Glimcher (2010). Foundations of Neuroeconomic Analysis. Oxford University Press.

Glimcher & Sparks (1992). Movement selection in advance of action in the superior colliculus. Nature, 355: 542–545.

Glimcher & Rustichini (2004). Neuroeconomics: the consilience of brain and decision. Science, 306: 447–452.

Glimcher, Camerer, Fehr, & Poldrack (2008). Introduction: A Brief History of Neuroeconomics. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 1-12). Academic Press.

Gul & Pesendorfer (2008). The case for mindless economics. In Caplan & Schotter (eds.), The Foundations of Positive and Normative Economics (pp. 3–41). Oxford University Press.

Hare, O’Doherty, Camerer, Schultz, & Rangel (2008). Dissociating the role of the orbitofrontal cortex and the striatum in the computation of goal values and prediction errors. Journal of Neuroscience, 28: 5623–5630.

Hare, Camerer, & Rangel (2009). Self-control in decisionmaking involves modulation of the vmPFC valuation system. Science, 324: 646–648.

Heeger (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9: 181–197.

Kable & Glimcher (2007). The neural correlates of subjective value during intertemporal choice. Nature Neuroscience, 10: 1625–1633.

Kable & Glimcher (2009). The Neurobiology of Decision: Consensus and Controversy. Neuron, 63: 733-745.

Kahneman & Tversky (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, XLVII: 263-291.

Kempermann, Kuhn, & Gage (1997). More hippocampal neurons in adult mice living in an enriched environment. Nature, 386: 493-495.

Kiani, Hanks, & Shadlen (2008). Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. Journal of Neuroscience, 28: 3017–3029.

Knutson & Cooper (2005). Functional magnetic resonance imaging of reward prediction. Current Opinions in Neurology, 18: 411–417.

Kobayashi & Schultz (2008). Influence of reward delays on responses of dopamine neurons. Journal of Neuroscience, 28: 7837–7846.

Lau & Glimcher (2008). Value representations in the primate striatum during matching behavior. Neuron, 58: 451–463.

Lee (2008). Game theory and neural basis of social decision making. Nature Neuroscience, 11: 404–409.

Levy, Rustichini & Glimcher (2007). A single system represents subjective value under both risky and ambiguous decision-making in humans. In 37th Annual Society for Neuroscience Meeting, San Diego, California.

Levy, Snell, Nelson, Rustichini, & Glimcher (2010). Neural Representation of Subjective Value Under Risk and Ambiguity. Journal of Neurophysiology, 103: 1036-1047.

Leyton (2009). The neurobiology of desire: Dopamine and the regulation of mood and motivational states in humans. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 222-243). Oxford University Press.

Louie & Glimcher (2010). Separating value from choice: delay discounting activity in the lateral intraparietal area. Journal of Neuroscience, 30(16): 5498-5507.

Montague (2007). Your brain is (almost) perfect: How we make decisions. Plume.

Montague & Lohrenz (2007). To detect and correct: Norm violations and their enforcement. Neuron, 56: 14–18.

Niv & Montague (2008). Theoretical and empirical studies oflearning. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 331-351). Academic Press.

O’Doherty (2004). Reward representations and reward-related learning in the human brain: insights from neuroimaging. Current Opinion in Neurobiology, 14: 769–776.

Padoa-Schioppa & Assad (2006). Neurons in the orbitofrontalcortex encode economic value. Nature, 441: 223–226.

Padoa-Schioppa & Assad (2008). The representation of economic value in the orbitofrontal cortex is invariant for changes of menu. Nature Neuroscience 11: 95–102.

Pessiglione, Seymour, Flandin, Dolan, & Frith (2006). Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans. Nature, 442: 1042–1045.

Plassmann, O’Doherty, & Rangel (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. Journal of Neuroscience, 27: 9984–9988.

Platt & Glimcher (1999). Neural correlates of decision variables in parietal cortex. Nature, 400: 233–238.

Politser (2008). Neuroeconomics: a guide to the new science of making choices. Oxford University Press.

Rangel, Camerer, & Montague (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9: 545–556

Roesch, Calu, & Schoenbaum (2007). Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nature Neuroscience, 10: 1615–1624.

Rutledge, Dean, Caplin, & Glimcher (2010). Testing the reward prediction error hypothesis with an axiomatic model. Journal of Neuroscience, 30(40): 13525-13536.

Samejima, Ueda, Doya & Kimura (2005). Representation ofaction-specific reward values in the striatum. Science, 310: 1337–1340.

Sanfey, Loewenstein, McClure, & Cohen (2006). Neuroeconomics: cross-currents in research on decision-making. Trends in Cognitive Sciences, 10: 108–116.

Schultz, Dayan, & Montague (1997). A neural substrate of prediction and reward. Science, 275: 1593–1599.

Shadlen & Newsome (2001). Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology, 86: 1916–1936.

Singer & Fehr (2005). The neuroeconomics of mind reading and empathy. American Economic Review, 95: 340–345.

Smith, Mahler, Pecina, & Berridge (2009). Hedonic hotspots: generating sensory pleasure in the brain. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 27-49). Oxford University Press.

Sugrue, Corrado, & Newsome (2004). Matching behavior and the representation of value in the parietal cortex. Science, 304: 1782–1787.

Sutton & Barto (1998). Reinforcement Learning: An Introduction. MIT Press.

Takahashi, Roesch, Stalnaker, Haney, Calu, Taylor, Burke, & Schoenbaum (2009). The orbitofrontal cortex and ventral tegmental area are necessary for learning from unexpected outcomes. Neuron, 62: 269–280.

Tobler, Dickinson, & Schultz (2003). Coding of predicted reward omission by dopamine neurons in a conditioned inhibition paradigm. Journal of Neuroscience, 23: 10402–10410.

Tobler, Fiorillo, & Schultz (2005). Adaptive coding of rewardvalue by dopamine neurons. Science, 307: 1642–1645.

Tom, Fox, Trepel & Poldrack (2007). The neural basis of loss aversion in decision-making under risk. Science, 315: 515–518.

Tversky & Kahneman (1981). The framing of decisions and the psychology of choice. Science, 211(4481): 453–458.

von Neumann & Morgenstern (1944). Theory of Games and Economic Behavior. Princeton University Press.

Waelti, Dickinson, & Schultz (2001). Dopamine responses comply with basic assumptions of formal learning theory. Nature, 412: 43–48.

Wang (2008). Decision making in recurrent neuronal circuits. Neuron, 60: 215–234.

Yang & Shadlen (2007). Probabilistic reasoning by neurons. Nature, 447: 1075–1080.

Yu & Dayan (2005). Uncertainty, neuromodulation and attention. Neuron, 46: 681–692.

Zaghloul, Blanco, Weidemann, McGill, Jaggi, Baltuch, & Kahana (2009). Human substantia nigra neurons encode unexpected financial rewards. Science, 323: 1496–1499.