Scope Insensitivity
Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.[1] This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario, or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.[2] People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”[3] This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”
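To make the “exponential scope, linear willingness-to-pay” claim concrete, here is a minimal sketch (my own illustration, not part of the original study) that fits a log-scope model to the three reported bird figures:

```python
# Minimal illustration (not from the study): fit the "valuation by prototype"
# picture, WTP ≈ prototype affect + a small term in the number of zeroes,
# i.e. WTP ≈ a + b * log10(scope), to the three reported data points.
import numpy as np

birds = np.array([2_000, 20_000, 200_000])
wtp = np.array([80.0, 78.0, 88.0])  # reported mean willingness-to-pay

b, a = np.polyfit(np.log10(birds), wtp, 1)
print(f"WTP ≈ {a:.1f} + {b:.1f} * log10(birds)")
# Prints roughly: WTP ≈ 64.8 + 4.0 * log10(birds).
# A hundredfold increase in scope buys about $8 of extra willingness-to-pay,
# dwarfed by the ~$65 "prototype" baseline.
```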
An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of 600—increased willingness-to-pay from $3.78 to $15.23.[4] Baron and Greene found no effect from varying lives saved by a factor of 10.[5]
A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.[6]
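For readers who want the Weber’s Law claim spelled out, the standard Weber–Fechner step (a textbook derivation, not taken from the paper itself) runs: if the just noticeable difference is a constant fraction $k$ of the magnitude already present, then subjective intensity grows only logarithmically,

$$\Delta N_{\text{jnd}} = k\,N \quad\Longrightarrow\quad \frac{dS}{dN} \propto \frac{1}{N} \quad\Longrightarrow\quad S(N) = c \ln\frac{N}{N_0}.$$

On such a scale, 4,500 lives added against a baseline of 250,000 moves the subjective needle far less than the same 4,500 against a baseline of 11,000.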
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
[1] William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010).
[2] Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235; Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215.
[3] Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.
[4] Richard T. Carson and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management 28, no. 2 (1995): 155–173.
[5] Jonathan Baron and Joshua D. Greene, “Determinants of Insensitivity to Quantity in Valuation of Public Goods: Contribution, Warm Glow, Budget Constraints, Availability, and Prominence,” Journal of Experimental Psychology: Applied 2, no. 2 (1996): 107–125.
[6] David Fetherstonhaugh et al., “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing,” Journal of Risk and Uncertainty 14, no. 3 (1997): 283–300.
The same idea goes for insisting that the charity you donate to is actually good at its mission. If you get your warm glow from the image of yourself as a good person, and if your dollars follow your glow, then competition among charitable organizations will take the form of trying to get good at triggering that self-image. If you get your glow from results, and if your dollars follow that, then charities will have much better incentives.
Good point. But it is not a matter of course that one can decide where one gets one’s glow from. So if you think that you get your glow from results, then you might just get your glow from believing that you are very smart, or at least smarter than most people, and thereby trigger your self-image yourself, which almost certainly will make you blind to anything that suggests the opposite (Dunning–Kruger effect), and that is very dangerous. I believe that instead of chasing comfort in the form of glow, one should be very skeptical of the glow and do what one wants rather than what is comfortable.
Your conclusion matches your data, but the data is suspiciously focused on charity. Is scope neglect easier to elicit in such contexts? Other explanations include it being hard to make large numbers relevant, and lack of imagination by researchers.
Douglas, I understand that scope insensitivity decreases substantially, but does not go away entirely, when personal profits are at stake.
It’s not easy to devise experiments that distinguish unambiguously between the explanations that center around prototype-dominated affect, versus the warm glow of moral satisfaction. It seems pretty likely that both effects are at work.
How do people react if told “Here is a fixed amount of cash, that must go to charity. How do you wish it to be spent?”
Might that not distinguish “purchase of moral satisfaction” from “scope neglect”?
I strongly favor the “warm glow” explanation, but I’d take it a step further.
For most people, the warm glow is only worth it if they get social credit.
Those yellow LiveStrong bracelets are a great example. They’re about $1 or so, and purchasers wear them around all day advertising that they care about cancer. How many of those people would have donated an equivalent amount (just a buck) without the badge of caring they get to wear around?
Actually, in my experience it’s the other way round—people feel they’re doing their bit just by wearing the bracelets, so they’ll pay less for a bracelet than they’d donate anonymously.
But like most anecdotes, that one story doesn’t tell you anything—we need statistics if we want to truly know how people behave.
I’m not sure I buy that this is completely about scope insensitivity rather than marginal utility and people thinking in terms of their fair share of a kantian solution. Or put differently, I think the scope insensitivity is partly inherent in the question, rather than a bias of the people answering.
Let’s say I’d be willing to spend $100 to save 10 swans from gruesome deaths. How much should I, personally, be willing to spend to save 100 swans from the same fate? $1000? $10,000 for 1,000 swans? What about 100,000 swans -- $1,000,000?
But I don’t have $1,000,000, so I can’t agree to spend that much, even if I believe that it is somehow intrinsically worth that much. When I’m looking at what I personally spend, I’m comparing my ideas about the value of saving swans to the personal utility I give up by spending that money. $100 is a night out. $1000 is a piece of furniture or a small vacation. $10,000 is a car or a year’s rent. $100,000 is a big chunk of my net worth and a sizable percentage of what I consider FU money. As I go up the scale my pain increases non-linearly, and my personal pain is what I’m measuring here.
So considering a massive problem like saving 2 million swans, I might take the Kantian approach. If say, 10% of people were willing to put $50 toward it, that seems like it would be enough money, so I’ll put $50 toward it figuring that I’d rather live in a world where people are willing to do that than not.
As with many interpretations of studies like this, I think you’re pulling the trigger on an irrationality explanation too fast. I believe that what people are thinking here is much more complicated than you’re giving them credit for, and that with an appropriate model their responses might not appear to be innumerate.
It’s a hard question to ask in a way that scales appropriately, because money only has value based on scarcity, so you can’t say “If you are emperor of a region with unlimited money to spend, what is it worth to save N swans?”—the answer is just “as much as it takes.” Money only has value if it is scarce, and what you’re really interested in is “Using 2007 US dollars as units: how much other consumption should be foregone to save N swans?” But people can only judge that accurately from their own limited perspective, where they have only so much consumption capacity to go around.
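A minimal sketch of the non-linear spending pain described a few paragraphs up (assuming, purely for illustration, a logarithmic utility of wealth and a net worth of $200,000—both are my assumptions, not the commenter’s):

```python
# Illustrative only: with log utility of wealth, the subjective cost of a
# donation grows faster than linearly as it eats into net worth.
import math

wealth = 200_000  # assumed net worth, for illustration
for spend in (100, 1_000, 10_000, 100_000):
    pain = math.log(wealth) - math.log(wealth - spend)
    print(f"${spend:>7,} -> utility loss {pain:.4f}")
# Spending 1,000x more money (the last line vs. the first) costs roughly
# 1,400x the utility, and the gap widens sharply as spending approaches
# total wealth.
```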
Exactly what I was thinking while I was reading this! Perhaps the example used isn’t a good one.
While I agree with your point, I think the big takeaway here is that humans are not always capable of understanding massive scales. Our universe is one such example where our minds just cannot comprehend galactic scales. Yes, there is a pulling of the trigger, as you say, but I think the more reasonable lesson here is that past a certain magnitude, numbers just stop making sense to us.
You point out a potential flaw in the reasoning for concluding ‘scope insensitivity’. But you then seem to go off into saying that ‘scope insensitivity is incorrect’, and I don’t think you supported that claim enough. Remember, reversed stupidity is not intelligence.
I perceive that I’ve neglected to convey the existence of a gigantic body of supporting evidence.
Michael Sullivan, see e.g. http://www.sas.upenn.edu/~baron/cv1.htm:
There is much counterevidence in the literature as well, but more importantly the literature does not clearly suggest the extent to which people are scope sensitive when they are (which is often), nor does it suggest what normative sensitivity might look like given the complexities of the decision problems and of human preferences. The literature doesn’t tell us the extent to which self-identifying total-utilitarian-style altruists in particular are scope sensitive, nor what methods of assigning WTP values they use. Whether or not their decisions are normative according to their professed optimization criteria, and more importantly whether their decisions are more or less normative than a naive “shut up and multiply the salient numbers” approach, is unknown.
A naive total utilitarian approach is clearly lacking. There are always hidden and unmentioned complexities like predetermined ecological niche sizes, i.e. 50 saved birds will quickly breed so as to fill a niche whereas 5,000 birds will remain at the limits. The difference between 1,000 out of 50,000 versus 1,000 out of 2,000 human lives saved is a substantial difference: realistic attempts at either will look very different from each other. Logarithmic scaling is common and can be a natural result of (implicit) consideration of conjunctions, exaggerations, credibility calculations (like whether it’d be easy or difficult to fake a positive result), baselines, opportunity costs, and so on; it is unclear what a normative evaluation of disutility from wars of various casualties would look like, but logarithmicness doesn’t seem obviously wrong. (The different framings in the original paper suggest different metrics for evaluation; there’s no reason to expect consistent valuations across levels of organization. “Deaths per day” offers an uncomplicated metric, “magnitude of war” prompts highly complex evaluations where log-normal distributions are significant.) Lives (alleged) to be saved affect utility calculations only additively, less than do estimated probabilities of internal successes or failures. In brief, a substantial amount of information is not represented by the numbers, and so substantial deviations from naive additive WTP values should be expected.
Naive total utilitarianism is a fast and frugal algorithm which ignores many considerations and makes no attempt to reach normative decisions. Whether it’s more or less consistent with total utilitarians’ values than more intuitive approaches is unclear, and which to prefer in the absence of such information is likewise unclear. Finally, don’t forget that meta-level uncertainty about total utilitarianism should be taken into account.
ETA: I should highlight that there is much variance between subjects and between studies. I do not argue that some subjects in some studies don’t simply purchase moral satisfaction or the like (though the research indicates this is uncommon), but I do argue that some non-negligible number of subjects in some non-negligible number of studies might be more effective altruists than any explicitly algorithm/equation-centered approach would allow for.
ETA2: The above analysis assumes that people’s responses to surveys about why/how they made a decision or what affected them aren’t generally correlated much with their actual decision processes. This assumption is reasonable and isn’t necessary, but it’s not overwhelmingly disjunctive.
Hmm… pinging my head for a plausible reason why I would rate one health program higher or lower, this math popped out: Program A promised to save 4,500 / 11,000 refugees; Program B promised to save 4,500 / 250,000 refugees. Program A has a significantly higher “success rate.” Since I know nothing about how health programs work, the potentially naive request is that Program A is chosen and sent to work at Site B. Why wouldn’t its success rate work with larger numbers? I assume that reality has a few gotchas, but I can see the mental reasoning there.
Likewise, for the disease cures, it would make more sense to work on a cure that had a much higher success rate. A cure that works 90% of the time is “better” than a cure that works 10% of the time. The math in terms of lives saved will frustrate the dying and those who care about them, but the value placed on the cure may not be counting lives saved. In these examples, the scope problem may be pointing toward the researchers and the participants valuing different things, instead of the participants’ values breaking down around large numbers.
I am interested in comparing Program A (4,500 / 11,000 refugees saved) to a Program C (100,000 / 250,000). The ratios are much closer (41% saved and 40%, respectively). Also, merely asking the question, “Which cure is more valuable?” and listing the cures with different stats. Would this be enough to learn of any correlations between the amount of support and the perceived value/success of the options?
Another experiment could explicitly instruct people to assign money to Programs A, B, and C with the goal of saving the most people. Presumably this will help the participants switch whatever values they have with the values of saving lives. Would the results be different? Why or why not?
This certainly does not apply to the oiled birds or protecting wilderness. Also of note, I did not read any of the linked articles. Perhaps my questions are answered there?
I don’t see how the “potentially naive request” translates to this setting. Say there is a potential cure for disease A which saves 4,500 people of 11,000 afflicted, and a potential cure for disease B which saves 9,000 people of 200,000 afflicted (just to make up some numbers where each potential cure is strictly better along one of the two axes). What’s the argument for working on the cure for disease A, rather than for disease B?
(I’m not going to argue with the “send Program A to work at Site B” argument, but I am also skeptical that many people in the study actually took it into account.)
By that math, saving one person with 100% probability is worth the same as saving the entire population of earth with 100% probability, is it not?
An alternate interpretation is that people conceptualize the problem not in terms of absolute number of birds saved, but in terms of the fraction of birds saved. And they have no idea how many migrating birds there are. Presenting the number 2000, or 200,000, probably suggests to them that that’s on the order of how many migrating birds there are.
This may be nitpicky, but I found an erratum in the references: I believe [3] should be 1993 instead of 1995.
That said, there are 3 broken links for me - [4], [6] and [7] - and the non-broken links don’t seem to currently be providing full text access. So, here’s an updated references table, with links to full text access in each except for the book in [3] which has an amazon link instead:
[1] Desvousges, W. Johnson, R. Dunford, R. Boyle, K. J. Hudson, S. and Wilson K. N. (1992). Measuring non-use damages using contingent valuation: experimental evaluation accuracy. Research Triangle Institute Monograph 92-1.
[2] Kahneman, D. 1986. Comments on the contingent valuation method. Pp. 185-194 in Valuing environmental goods: a state of the arts assessment of the contingent valuation method, eds. R. G. Cummings, D. S. Brookshire and W. D. Schulze. Totowa, NJ: Roweman and Allanheld.
[3] McFadden, D. and Leonard, G. 1993. Issues in the contingent valuation of environmental goods: methodologies for data collection and analysis. In Contingent valuation: a critical assessment, ed. J. A. Hausman. Amsterdam: North Holland.
[4] Kahneman, D., Ritov, I. and Schkade, D. A. 1999. Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues, Journal of Risk and Uncertainty, 19: 203-235.
[5] Carson, R. T. and Mitchell, R. C. 1995. Sequencing and Nesting in Contingent Valuation Surveys. Journal of Environmental Economics and Management, 28(2): 155-73.
[6] Baron, J. and Greene, J. 1996. Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2: 107-125.
[7] Fetherstonhaugh, D., Slovic, P., Johnson, S. and Friedrich, J. 1997. Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14: 283-300.
From Abhijit V. Banerjee and Esther Duflo’s Poor Economics,
Interesting study. What was the reason given for the warned students not giving more money to everyone in Mali?
Do we value saving lives independently of the good feelings we get from it?
Sure: there are the issues of rewards, reputation, and status to consider. The effect of saving lives on the former may scale somewhat linearly—but the effect on the others certainly does not.
I do, I can’t speak for the rest of ‘we’.
How do you know that you do?
I’m not questioning scope insensitivity in general here, but can someone explain to me why it matters what number of birds they’re trying to save? Obviously, your contribution alone is not going to save them all (unless you’re rich and donating a lot of money), and, if you don’t know anything about how efficient those programs are, you may as well assume a fixed amount of money will save a fixed number of birds.
I think the original stipulation was not “how much would you give to a program saving X, Y or Z birds?”, but “how much would you pay to save X, Y or Z birds?” in which the fixed amount of money is explicitly saving different numbers.
Ah, ok, makes sense.
No, it doesn’t. That kind of possibility never exists in the real world: “name a quantity to save all birds”. It’s unreasonable to expect even the most rational of all to behave like a computer in that kind of situation.
I have a question. (I’m not questioning the scope insensitivity though, well, kinda)
Let’s just say people would pay the same amount for 2,000, 20,000, and 200,000 birds saved. But wouldn’t 200,000 birds cause a bigger reaction in society, so that more people would pay?
Let’s say there are 1,000 people paying for 2,000 birds (each $80). But 20,000 birds would attract stronger attention, which leads to 10,000 people willing to pay; the same goes for 200,000 birds saved.
I think it also might be somehow related to the bystander effect: people generally believe that if there are 200,000 birds drowning from oil, more other people will be paying. Which gives them the feeling of: other people have probably already paid more, so why would I need to increase my number?
People tend to have a feeling along the lines of: “They must have asked more people to donate for all the lakes in Ontario than for just one area; if more people pay, then the result is the same, even if I pay only a little.”
If X is the amount each person is willing to pay, and Z is the amount of money needed to save the birds, there’s also another factor, Y, which is how many people are willing to pay.
If for 2,000 birds X times Y equals Z,
and the number of birds increases to 20,000,
X doesn’t change,
Z is ten times more than before,
but isn’t Y also ten times more?
X times 10Y equals 10Z.
So is it not a bias anymore? I just feel like the real situation might be more complex.
Although I’m not sure whether my assumption of “bigger social attention” is true. I don’t know anything, I’m still in high school. (English is also not my first language.)
I can see natural situations where scope insensitivity seems to be the right response:
Assuming we are ignorant about the absolute value of saving 4500 lives.
Assuming all potentially affected people contribute, on average, the scope-insensitive (constant) value, the contribution per saved life becomes a constant: for 45 saved out of 200 we have 200 contributions to save 45 lives, and for 45,000 saved out of 200,000 we have 200,000 contributions to save 45,000. That seems to make perfect sense.
Assuming that the number of people who get to know about a problem is proportional to the problem size, the number of people who can (and will, on average) contribute to its solution is proportional to the problem size. Hence each single contribution should not be proportional to the problem size. That is not at all a bad (implicit) assumption to have, IMHO.
It even seems to me that any personal contribution must be intrinsically scope insensitive w.r.t. the denominator (the out of how many birds/humans/...), because any single person can’t possibly pay alone for a solution of a problem that affects a billion humans.
I ran across a work of fiction that proposed an interesting hypothesis as to why we have some of this programming:
If you’re a primitive tribesman, and something wipes out half your kin in a single incident, that’s probably not something you can pick up your spear and hope to fight with any effectiveness. But if it gets just one or two, that might be something you can take on and win. And so as numbers grow larger we tend to grow numb to it and prefer avoidance over confrontation as a survival strategy.
My suggestion for an alternative explanation is that people somehow assume that for saving more birds, more people will be asked to donate, so after dividing, the amounts per person will be very similar.
When donating, people think of their capacity. A person’s capacity is obviously limited. There is only a finite amount of money a person can have.
When people answer that they would pay $78, they only expect to save something like 10 birds, not all of them. That is already the limit of their capacity for those birds. However many birds are in danger, they can only expect to save 10; the rest, if they are not personally witnessing it, they can only leave to die.
Now, say the organisation saving these birds is in possession of a time machine and could fly back to the time of the disaster to save the birds; you could then ask people how long they could keep contributing $78 each month. Perhaps the answers, and the total amount, would then be different.
I call BS. There is an opportunity cost to passing up a chance to help some seabirds. Most people don’t go through a given day being presented with lots of opportunities to save different numbers of seabirds. If they were, they’d do the math. Most people wouldn’t assume that there were a dozen different seabird charities all pledging to help different numbers of seabirds, because that’s not the world we live in. If it were, people would process this differently. When people are presented with options, such as in stores where different brands compete for shelf space, they do tend to do the math.
There may be a much simpler explanation for seeming scope “insensitivity”: saturation. The differences in the lead example with the birds seem like random variation to me. Probably most participants had some maximum amount they would ever consider parting with for scenarios that aren’t personally life-threatening, and it doesn’t take very many birds for the scenarios to reach those maximums. Modeling artificially contrived scenarios like this seems like overthinking the issue without giving adequate thought to alternative, more likely explanations.
Fun fact for those reading this in the far future, when Eliezer said “effective altruist” in this piece, he most likely was using the literal meaning, not referring to the EA movement, as that name hadn’t been coined yet.
The uncertainty in how many people would be saved anyway without intervention is (as an absolute number, not a percentage) much larger for the 250000 people case. If someone claims to save 4500 people, and the uncertainty is greater than 4500, I may be skeptical that they can save anyone at all.
Imagine it as medical trials instead. If I claim I can cure 4500 out of 250000 people it may be that I can’t cure anyone at all and I’m just counting the spontaneous remissions as “cures”. If I claim I can cure 4500 out of 11000 people, it’s very unlikely that they would have all recovered spontaneously.
How many lives an action saves is less important than the emotional connotations of the act that takes the lives. Take micronutrient dispersal programs vs. terrorism: malnutrition kills orders of magnitude more people, and yet far more money is spent on terrorism prevention (well, mostly terrorism-prevention signaling, but that’s another topic). This is because fighting terrorism is more exciting than fighting scurvy. The order-of-magnitude difference in impact is ignored when evaluating which thing to spend money on. This makes choosing terrorism easier, since saving 100 people from terrorism is much better for public relations than saving 100 random kids with goiters.
I came across an interesting book that includes the topic of scope insensitivity: “Determining the Value of Non-Marketed Goods: Economic, Psychological, and Policy Relevant Aspects of Contingent Valuation Methods,” edited by Raymond J. Kopp, Werner W. Pommerehne, and Norbert Schwarz. They suggest that while scope insensitivity on surveys is possible, it is not inevitable.
After providing an impressive list of studies rejecting the insensitivity hypothesis, they highlight two in particular: “First, the scope insensitivity hypothesis is strongly rejected (p<.001) by two large recent in-person contingent valuation studies, Carson, Wilks and Imber (1994) and Carson et al. (1994), which used extensive visual aids and very clean experimental designs to value goods thought to have substantial passive use considerations.”
In order to prevent scope insensitivity, they suggest that the “respondent must (i) clearly understand the characteristics of the good they are asked to value, (ii) find the CV scenario elements related to the good’s provision plausible, and (iii) answer the CV questions in a deliberate and meaningful manner.”
The world of business tends to emphasize pattern over particular. But the intellectual aspects of pattern prevent people from caring.
So the marketer would win by using the sad and more concrete image of the oily bird, persuading more people by means of the Ludic fallacy.
Keyword connection for literature searches: Loss Aversion
Jonah Lehrer has blogged recently about what he described as loss aversion—doctors will take more risks if the same problem is framed as reducing loss of life rather than saving lives.
He summarizes some of the same papers mentioned in the post and also a new Feb 2010 PNAS paper, “Amygdala damage eliminates monetary loss aversion”.
The ideas overlap with those in Circular Altruism; I’m not sure which post I originally meant to make this comment under.
Vegetarianism is similar. I know many vegetarians who only think about the poor cow who now is served as dinner instead of the thousands of animals who are killed by pesticides, fertilizers, and mechanized farming equipment needed to grow a bowl of soy beans.
We should not make decisions based on emotional reactions. They do not scale.
I haven’t read the studies. I’d like your opinion on the following idea. Could it be that the way you ask the question affects the type of curve you get? Could you lead someone to come up with a linear ramp-up of money?
Also: how does the amount the subjects stated compare to the actual cost? If I have to save one bird, it might cost me a few hundred dollars in travel expenses, etc. But saving two birds is only slightly more.
If they did, would their opinion change?
I think mining is nasty, dirty, and dangerous. But I love uranium mining, even though the ore is radioactive. Why? Because each kilogram of uranium ore you pull out of the ground replaces at least ten* kilograms of coal. Uranium mining represents a net reduction to the total amount of mining that happens (with a constant energy load).
Likewise, when you go from growing plants to feed a cow to feed a human to growing plants to feed a human, you reduce the amount of plants necessary at least tenfold,* which similarly sounds like a tenfold reduction in the animals killed by farming processes.
So the thing that vegetarians aren’t thinking about strengthens their argument. Are you sure you’re thinking clearly about this issue, instead of trying to score points?
* I don’t have the time/energy to look up the actual numbers at the moment- I’m >98% confident they’re over 10 times, and strongly suspect they’re less than 100.
This is only somewhat related, as it is less true of overtly political domains, but I am confused by the frequency with which seemingly reasonable methods support naively counter-intuitive conclusions against naively intuitive conclusions where ultimately the naively intuitive conclusions win, i.e. where bullet biting loses to traditionalism. E.g. mathematical or statistical arguments, even solid-seeming ones, often lose in practice due to leaving out important considerations which the brain’s automatic algorithms don’t miss.
Ironically this is especially true in the heuristic and biases literature where normative math is often misunderstood and experimental results are often misinterpreted. The weakness of the findings in the heuristics and biases literature undermines the most commonly cited support of the “the world is mad” hypothesis and so there is a lack of alternative wide-scale explanations for any perceived wide-spread irrationality. Lack of incentives for “rationality” in various domains remains a blanket explanation but it can explain almost anything and is perhaps unjustifiably hinged on a notion of rationality that might or might not be well-supported. In general any behavior can be explained away as a response to a set of incentives that does not include objective truth.
If conclusions reached via common human intuitions or epistemic practices are generally more valid than is suggested by their cited supporting arguments, and if uncommon epistemic practices often lead to conclusions that are less valid than those practices seem to suggest, then it may be wise for those who utilize uncommon epistemic practices to be relatively more wary of their uncommon conclusions and relatively more curious about possible explanations of common conclusions than they otherwise would have been. Scientism/falsificationism, Bayesianism, skepticism, and similar philosophically-inspired memeplexes are examples of sources of uncommon epistemic practices.
This is the main motivation for many vegetarians, from an energy reduction perspective. Ten times (approximately) more plants means ten times (approximately) the energy taken for the same amount of food/energy for the consumer.
Yes, 10 times as many plants need to be grown, but the harvest methods are quite different.
A cow provides fertilizer (manure) and the farming equipment (it eats the grass there).
I suspect, based on my recollections, that sun->plants is 1%, plants->animals is 10%, and animal->animal is also 10%.
Also, per kg, meat is denser, so you are shipping less of it.
That’s for free range grass fed cattle. I doubt that is >10% of the beef market.
true.
I am from Australia though.
http://www.anra.gov.au/topics/agriculture/beef/index.html
20 million total cows vs half a million in feedlots.
http://micpohling.wordpress.com/2007/04/08/world-top-15-country-on-highest-number-of-cattle/
Brazil is one of the countries with the most cows
http://beefmagazine.com/mag/beef_brazilian_beef/
One “missing picture” in the Brazilian cattle industry though, is that of a North American-style feedlot. Only 4% of the cattle killed each year are “fattened” in feedlots. With Europe being Brazil’s main beef export market, the majority is grown to finish under a hormone-free regime on grass pastures.
If three groups of subjects were asked how much they would pay to save 2000/20000/200000 birds… Was one group asked how much they would pay to save 2000 birds, another group asked how much they would pay to save 20000 birds, and the final group asked how much they would pay to save 200000 birds? Or was one group asked how much they would pay to save 2000, then 20000, then 200000 birds, and the experiment repeated on the other two groups? I didn’t quite understand… I think I was reading too hard into the subtext. But I’m leaning towards the first one, can anyone elaborate?
The first one. One group was asked about 2000 birds, a separate group was asked about 20000 birds, and another separate group was asked about 200000 birds.
Thanks. :3
Actually scope insensitivity might be a species-wide positive adaptation. It is more economical to rescue 200,000 birds for $88 and this could reflect a reasonable expectation of economies of scale.
Who are you to say for Z person that the value of saving X number of Y valuable commodity is linear?
Real world example: the fire alarm goes off in my apartment building at night about once every two weeks. Many people decide to stay in their room, as opposed to evacuating the building. They aren’t understanding the magnitude of how bad it would be if there was a fire and they ended up getting seriously injured or dying. (There have been two real fires so far; the chance of a real fire is not trivial)
Counterpoint: do you understand the magnitude of how bad it would be if there was a fire and you ended up getting seriously injured or dying?
You continue to live in the apartment building which already had two fires and which has a malfunctioning alarm system.
I don’t. I’m not scope sensitive. The alarm system is working fine, it’s just that it’s sensitive to people who are cooking (I think). I’m eager to move out ASAP though.
I hope you have renter’s insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.
Could one way of thinking about this be that decisions involving loss aversion for the nth being removed from oneself get less and less sensitive as our degrees of separation increase? My example would be the indiscriminate termination of the indigenous tribes of the Americas upon its ‘discovery’.
I find the remark about the exponential increase in scope inducing a linear increase in willingness-to-pay perhaps being due to the number of zeroes quite amusing, and it leads me to speculate how a different base numbering system would change the willingness-to-pay.
I predict that given identical proficiency in any base b numbering system, a base-2 numbering system would decrease willingness-to-pay for an identical exponential increase in scope, and a base-16 numbering system would increase it, as a result of the shorter length representations!
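For what it’s worth, a throwaway sketch (purely illustrative) of how long each scope from the bird study looks when written out in the three bases mentioned above:

```python
# Digit counts of the bird-study scopes in base 2, 10, and 16.
for n in (2_000, 20_000, 200_000):
    print(n, {
        "base 2": len(format(n, "b")),
        "base 10": len(str(n)),
        "base 16": len(format(n, "x")),
    })
# 200,000 needs 18 binary digits, 6 decimal digits, and only 5 hex digits,
# so if the eye really does glaze over digits at a constant rate, the choice
# of base would change how much "scope" gets registered.
```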
I immediately and conclusively conclude that if we were to do away with our silly digits and embrace hexadecimality then the average human would be willing to part with x1.6 more units of purchasing power.
I suppose in a world where people have limited time and other priorities, we often glaze over the numbers and don’t think about what they really mean in terms of magnitude. I also think desensitization due to the mass media has something to do with it—we are shown statistics and huge numbers all the time for scenarios much worse (war deaths, crimes, disease deaths), so a number as large as 200,000 birds saved wouldn’t make anyone bat an eye—it just becomes another number in the book.
Might this imply fault insensitivity too? For any given behavior B, where every continued repeat of that behavior results in impacts i-0 through i-x, are humans only willing to curtail that behavior up to a point, despite the increasing impact of future actions?
Does scope neglect only apply to altruism? What about applications to project scope and budgeting?
I do not know about scientific studies (which does not mean much), but at least anecdotally I think the answer is yes, at least for people who are not trained or experienced in making exactly these kinds of decisions.
One thing I have heard anecdotally is that people often significantly increase the price when deciding to build/buy a house/car/vacation because they “are already spending lots of money, so who cares about adding 1% to the price here and there to get neat extras,” and thus spend years/months/days of income on things they would not have bought if they had treated each as a separate decision.
This is a bit different from the bird-charity example, but it seems very related to me in that our intuitions have trouble with keeping track of absolute size.
This very much reminds me of the quote attributed to Stalin:
“One death is a tragedy; one million is a statistic.”
I find solace in the fact that, while many fellow citizens do not invest much time in educating themselves, a few insights have gained recognition through often famous (or infamous) quotes and little nuggets of advice.
Prototypes possess inherent limitations in terms of their physical attributes. Nevertheless, one can attain a sense of moral fulfillment by conceiving an improved version of oneself, which represents a non-physical characteristic of a superior self-prototype. It remains uncertain whether non-physical attributes, such as an enhanced version of oneself, share the same upper boundaries as physical qualities, like a specific number of birds. Consequently, does scope neglect genuinely occur if the scope in question influences a non-physical aspect of a prototype? Is the scalability of a prototype restricted when the scope impacts a non-physical characteristic?
interesting
I wonder whether a cost judgement plays a part in these examples. Saving 2,000 birds will have a low effort cost and the risk of failure is less significant—trending towards “there’s nothing to lose.” Meanwhile, an attempt to save 200,000 birds (on its face) appears to entail higher effort costs, and the repercussions of failure would be more severe. When faced with a situation where the default state is a negative outcome, people are often reluctant to invest their resources.
I understand that, action-wise, it might be good collectively; but I also understand that for victims of certain crimes, for example, it is very hard to tell them, “Hey, what you feel about the crime is not rational; please donate to something else.”