Torture vs. Dust Specks
“What’s the worst that can happen?” goes the optimistic saying. It’s probably a bad question to ask anyone with a creative imagination. Let’s consider the problem on an individual level: not the very worst thing that can happen, but something that would nonetheless be fairly bad, such as being horribly tortured for a number of years. This is one of the worse things that can realistically happen to one person in today’s world.
What’s the least bad, bad thing that can happen? Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.
For our next ingredient, we need a large number. Let’s use 3^^^3, written in Knuth’s up-arrow notation:
3^3 = 27.
3^^3 = (3^(3^3)) = 3^27 = 7625597484987.
3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(… 7625597484987 times …)))).
3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall. You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you’ve exponentiated 7625597484987 times. That’s 3^^^3. It’s the smallest simple inconceivably huge number I know.
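The up-arrow recursion above can be sketched in a few lines of code. This is an illustrative helper, not part of the original post; only the first couple of towers are small enough to actually evaluate:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a ↑^n b. With n = 1 this is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # Recurrence: a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

assert up_arrow(3, 1, 3) == 27             # 3^3
assert up_arrow(3, 2, 3) == 7625597484987  # 3^^3 = 3^27
# 3^^^3 = up_arrow(3, 3, 3) is a tower of 3s that is 7,625,597,484,987
# layers tall -- far too large for any computer to evaluate.
```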
Now here’s the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
I think the answer is obvious. How about you?
Does this analysis focus on pure, monotone utility, or does it include the huge ripple effect putting dust specks into so many people’s eyes would have? Are these people with normal lives, or created specifically for this one experience?
The ripple effect is real, but as in Pascal’s Wager, for every possible situation where the timing is critical and something bad will happen if you are distracted for a moment, there’s a counterbalancing situation where the timing is critical and something bad will happen unless you are distracted for a moment, so those probably balance out into noise.
I doubt this.
Why?
I think you can be allowed to imagine that any ripple effect caused by someone getting a barely-noticeable dust speck in their eyes (perhaps it makes someone mad enough to beat his dog) would be about the same as that of the torture (perhaps the torturers go home and beat their dogs because they’re so desensitized to torturing).
The answer that’s obvious to me is that my mental moral machinery—both the bit that says “specks of dust in the eye can’t outweigh torture, no matter how many there are” and the bit that says “however small the badness of a thing, enough repetition of it can make it arbitrarily awful” or “maximize expected sum of utilities”—wasn’t designed for questions with numbers like 3^^^3 in. In view of which, I profoundly mistrust any answer I might happen to find “obvious” to the question itself.
Isn’t this just appeal to humility? If not, what makes this different?
It is not humility to note that extrapolating models unimaginably far beyond their normal operating ranges is a fraught business. Just because we can apply a certain utility approximation to our monkeysphere, or even a few orders of magnitude above our monkeysphere, doesn’t mean the limiting behavior matches our approximation.
In other words, your meta-cogitation is: (1) do I trust my very certain intuition? or (2) do I trust the heuristic from formal/mathematical thinking (which I see as useful partially and specifically to compensate for inaccuracies in our intuition)?
Since there was a post on this blog a few days ago about how what seems obvious to the speaker might not be obvious to the listener, I thought I would point out that it was NOT AT ALL obvious to me which should be preferred: torturing one man for 50 years, or a speck of dust in the eyes of 3^^^3 people. Can you please clarify/update what the point of the post was?
The dust speck is described as “barely enough to make you notice”, so however many people it would happen to, it seems better than even something far less bad than 50 years of horrible torture. There are so many irritating things that a human barely notices in his/her life; what’s an extra dust speck?
I think I’d trade the dust specks for even a kick in the groin.
But hey, maybe I’m missing something here...
If 3^^^3 people get dust in their eye, an extraordinary number of people will die. I’m not thinking even 1% of those affected will die, but perhaps 0.000000000000001% might, if that. But when dealing with numbers this huge, I think the death toll would measure greater than 7 billion. Knowing this, I would take the torture.
The premise assumes it’s “barely enough to make you notice”, which was supposed to rule out any other unpleasant side-effects.
No, I’m pretty sure it makes you notice. It’s “enough”. “barely enough”, but still “enough”. However, that doesn’t seem to be what’s really important. If I consider you to be correct in your interpretation of the dilemma, in that there are no other side effects, then yes, the 3^^^3 people getting dust in their eyes is a much better choice.
Can you explain a bit about your moral or decision theory that would lead you to conclude that?
Yes. I believe that because any suffering caused by the 3^^^3 dust specks is spread across 3^^^3 people, it is of lesser evil than torturing a man for 50 years. Assuming there to be no side effects to the dust specks.
When I participated in this debate, this post convinced me that a utilitarian must believe that dust specks cause more overall suffering (or whatever badness measure you prefer). Since I already wasn’t a utilitarian, this didn’t bother me.
As a utilitarian (in broad strokes), I agree, and this doesn’t bother me because this example is so far out of the range of what is possible that I don’t object to saying, “yes, somewhere out there torture might be a better choice.” I don’t have to worry about that changing what the answer is around these parts.
That’s not quite what I meant by “explain”—I had understood that to be your position, and was trying to get insight into your reasoning.
Drawing an analogy to mathematics, would you say that this is an axiom, or a theorem?
If an axiom, it clearly must be produced by a schema of some sort (as you clearly don’t have 3^^^3 incompressible rules in your head). Can you explore somewhat the nature of that schema?
If a theorem, what sort of axioms, and how arranged, produce it?
That’s not general enough to mean very much: it fits a number of deontological moral theories and a few utilitarian ones (the right answer within virtue ethics depends far too heavily on assumptions to mean much), and seems to fit a number of others if you don’t look too closely. Its validity depends greatly on which you’ve picked.
As best I can tell the most common utilitarian objection to TvDS is to deny that Specks are individually of moral significance, which seems to me to miss the point rather badly. Another is to treat various kinds of disutility as incommensurate with each other, which is at least consistent with the spirit of the argument but leads to some rather weird consequences around the edge cases.
No-one asked for a general explanation.
The best term I have found, the one that seems to describe the way I evaluate situations the most accurately, is consequentialism. However, that may still be inaccurate. I don’t have a fully reliable way to determine what consequentialism entails; all I have is Wikipedia, at the moment.
I tend to just use cost-benefit analysis. I also have a mental, and quite arbitrary, scale of what things I do and don’t value, and to what degree, to avoid situations where I am presented with multiple, equally beneficial choices. I also have a few heuristics. One of them essentially says that given a choice between a loss that is spread out amongst many, and an equal loss divided amongst the few, the former is the more moral choice. Does that help?
It helps me understand your reasoning, yes. But if you aren’t arguing within a fairly consistent utilitarian framework, there’s not much point in trying to convince others that the intuitive option is correct in a dilemma designed to illustrate counterintuitive consequences of utilitarianism.
So far it sounds like you’re telling us that Specks is intuitively more reasonable than Torture, because the losses are so small and so widely distributed. Well, yes, it is. That’s the point.
At what point is utilitarianism not completely arbitrary?
I’m not a moral realist. At some point it is completely arbitrary. The meta-ethics here are way outside the scope of this discussion; suffice it to say that I find it attractive as a first approximation of ethical behavior anyway, because it’s a simple way of satisfying some basic axioms without going completely off the rails in situations that don’t require Knuth up-arrow notation to describe.
But that’s all a sideline: if the choice of moral theory is arbitrary, then arguing about the consequences of one you don’t actually hold makes less sense than it otherwise would, not more.
I believe I suggested earlier that I don’t know what moral theory I hold, because I am not sure of the terminology. So I may, in fact, be a utilitarian, and not know it, because I have not the vocabulary to say so. I asked “At what point is utilitarianism not completely arbitrary?” because I wanted to know more about utilitarianism. That’s all.
Ah. Well, informally, if you’re interested in pissing the fewest people off, which as best I can tell is the main point where moral abstractions intersect with physical reality, then it makes sense to evaluate the moral value of actions you’re considering according to the degree to which they piss people off. That loosely corresponds to preference utilitarianism: specifically negative preference utilitarianism, but extending it to the general version isn’t too tricky. I’m not a perfect preference utilitarian either (people are rather bad at knowing what they want; I think there are situations where what they actually want trumps their stated preference; but correspondence with stated preference is itself a preference and I’m not sure exactly where the inflection points lie), but that ought to suffice as an outline of motivations.
Thank you.
The thought experiment is, 3^^^3 bad events, each just so bad that you notice their badness. Considering consequences of the particular bad thing means that in fact there are other things as well that are depending on your choice, and that’s a different thought experiment.
That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is… insanely contrived. More contrived than the dilemma itself.
However, let’s say that instead of 3^^^3 people getting dust in their eye, 3^^^3 people experience a single nanosecond of despair, which is immediately erased from their memory to prevent any psychological damage. If I had a choice between that and torturing a person for 50 years, then I would probably choose the former.
The notion of 3^^^3 events of any sort is far more contrived than the elimination of knock-on effects of an event. There isn’t enough matter in the universe to make that many dust specks, let alone the eyes to be hit and nervous systems to experience it. Of course it’s contrived. It’s a thought experiment. I don’t assert that the original formulation makes it entirely clear; my point is to keep the focus on the actual relevant bit of the experiment—if you wander, you’re answering a less interesting question.
I don’t agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn’t enough matter, as you said. But the existence of an event whose only effects are tailored to fit a particular person’s idea of ‘bad’ does not fit my model of how causality works. That seems like a worse infraction, to me.
However, all of that is irrelevant, because I answered the more “interesting question” in the comment you quoted. To be blunt, why are we still talking about this?
I’m not sure I agree, but “which impossible thing is more impossible” does seem an odd thing to be arguing about, so I’ll not go into the reasons unless someone asks for them.
I meant a more generalized you, in my last sentence. You in particular did indeed answer the more interesting question.
Anon, I deliberately didn’t say what I thought, because I guessed that other people would think a different answer was “obvious”. I didn’t want to prejudice the responses.
So what do you think?
He gives his answer here.
Thank you!
Exactly. If Eliezer had gone out and said what he thought, nothing good would have come of it. The point is to make you think.
Even when applying the cold, cruel calculus of moral utilitarianism, I think that most people acknowledge that egalitarianism in a society has value in itself, and assign it positive utility. Would you rather be born into a country where 9 out of 10 people are destitute (<$1,000/yr) and the last is very wealthy ($100,000/yr)? Or be born into a country where almost all people subsist on a modest amount ($6,000–8,000/yr)?
Any system that allocates benefits (say, wealth) more fairly might be preferable to one that allocates more wealth in a more unequal fashion. And, the same goes for negative benefits. The dust specks may result in more total misery, but there is utility in distributing that misery equally.
Well, there’s valuing money at more utility per dollar when you have less money and less utility per dollar when you have more money, which makes perfect sense. But that’s not the same as egalitarianism as part of utility.
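The diminishing-utility-per-dollar point can be made concrete with a small sketch. Log utility is an assumption chosen purely for illustration here (any concave utility function gives the same qualitative result); the numbers are the ones from the comment above:

```python
import math

def total_log_utility(incomes):
    # Concave (log) utility: each extra dollar matters less
    # the richer you already are.
    return sum(math.log(x) for x in incomes)

unequal = [1000] * 9 + [100_000]  # nine destitute, one very wealthy
equal = [7000] * 10               # everyone on a modest income

# Same critique as in the comment: equal distribution wins on total
# utility without needing egalitarianism as a separate terminal value.
assert total_log_utility(equal) > total_log_utility(unequal)
```

This shows why diminishing marginal utility alone can favor the equal society, without adding egalitarianism as an independent term in the utility function.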
I don’t believe egalitarianism has value in itself. Tell me, would you rather get all your wealth continuously throughout the year, or get a disproportionate amount on Christmas?
If wealth is evenly distributed, it will lead to more total happiness, but I don’t see any advantage in happiness being evenly distributed.
I don’t see how your comment relates to this post.
Perhaps it could be framed in terms of the utility of psychological comfort. Suppose that one person is tortured to avoid 3^^^3 people getting dust specks. Won’t almost every one of those 3^^^3 people empathize with the tortured person enough to feel a pang of discomfort more uncomfortable than a dust speck?
Only if they find out that the tortured person exists, which would be an event that’s not in the problem statement.
Third-to-last sentence sets up a false dichotomy between “more fairly” and “more unequal.”
The dust specks seem like the “obvious” answer to me, but I couldn’t easily say how large the tiny harm must be to cross the line where an unthinkably huge number of them outweighs a single tremendous one, since clearly I don’t think simply calculating the total amount of harm caused is the right measure.
It seems obvious to me to choose the dust specks, because the human species would have to exist for an awfully long time for the total number of people to equal that number, and that minimal amount of annoyance would be something they were used to anyway.
I too see the dust specks as obvious, but for the simpler reason that I reject utilitarian sorts of comparisons like that. Torture is wicked, period. If one must go further, it seems like the suffering from torture is qualitatively worse than the suffering from any number of dust specks.
I think you have misunderstood the point of the thought experiment. Eliezer could have imagined that the intense and prolonged suffering experienced by the victim was not intentionally caused, but was instead the result of natural causes. The “torture is wicked” reply cannot be used to resist the decision to bring about this scenario. (There may, of course, be other reasons for objecting to that decision.)
Anon prime: dollars are not utility. Economic egalitarianism is instrumentally desirable. We don’t normally favor all types of equality, as Robin frequently points out.
Kyle: cute
Eliezer: My impulse is to choose the torture, even when I imagine very bad kinds of torture and very small annoyances (I think that one can go smaller than a dust mote, possibly something like a letter on the spine of a book that your eye sweeps over being in a shade less well selected a font). Then, however, I think of how much longer the torture could last and still not outweigh the trivial annoyances if I am to take the utilitarian perspective and my mind breaks. Condoning 50 years of torture, or even a day worth, is pretty much the same as condoning universes of agonium lasting for eons in the face of numbers like these, and I don’t think that I can condone that for any amount of a trivial benefit.
(This was my favorite reply, BTW.)
I admire the restraint involved in waiting nearly five years before selecting a favorite.
Well too bad he didn’t wait a year longer then ;). I think preferring torture is the wrong answer for the same reason that I think universal health-care is a good idea. The financial cost of serious illness and injury is distributed over the taxpaying population so no single individual has to deal with a spike in medical costs ruining their life. And I think it’s still the correct moral choice regardless of whether universal health-care happens to be more expensive or not.
Analogously, I think the exact same reasoning applies to dust vs. torture. I don’t think the correct moral choice is about minimizing the total area under the pain-curve at all; it’s about avoiding severe pain-spikes for any given individual, even at the cost of having a larger area under the curve. I don’t think “shut up and multiply” applies here in its simplistic conception, the way it might apply in the scenario where you have to choose whether 400 people live for sure or 500 people live with .9 probability (and die with .1 probability).
Irrespective of the former however, the thought experiment is a bit problematic because it’s more complex than apparent at first, if we really take it seriously. Eliezer said the dust-specks are “barely noticed”, but being conscious or aware of something isn’t an either-or thing, awareness falls on a continuum so whatever “pain” the dust-specks causes has to be multiplied by how aware the person really is. If someone is tortured that person is presumably very aware of the physical and emotional pain.
Not counting other possible consequences like lasting damage or social repercussions, I don’t really care all that much about any kind of pain that happens to me while I’m not aware of it. I could probably figure out whether or not pain is actually registered in my brain during my upcoming operation under anesthesia, but the fact that I won’t bother tells me very clearly that awareness of pain is an important weight we have to multiply, in some fashion, with the actual pain-registration in the brain.
That’s just an additional consideration though, even if we simplify it and imagine the pain is directly comparable and has no difference in quality at all, while the total quantity of pain is excessively higher in the dust-scenario compared to the torture-scenario, it changes nothing about my current choice.
So what does that tell me about the relationship between utility and morality? I don’t accept that morality is just about the total lump sums of utility and disutility; I think we also have to consider their distribution across any given population. Why is that? I ask myself, and my brain offers the following answer:
If I were the only agent in the entire universe and had to pick torture vs. dust for myself (and obviously if I were immortal, or had a long enough life to experience all those dust specks), I would still prefer the larger area under the curve over the pain-spike, even assuming direct comparability of the two kinds of pain. I suspect the reason for this choice is a type of time-discounting my brain does: I’d rather suffer a little pain every day for a trillion years than a big spike for 50 years. Considering that, briefly speaking, utility is (or at least I think should be defined as) something that results only from the interaction of minds and environments, my mind and its workings are definitely part of the equation that says what has utility and what doesn’t. And my mind wants to suffer low disutility evenly distributed over a long time-period rather than great disutility in a 50-year spike (assuming a trillion-year lifetime).
If you’re going to say that, you’ll need some threshold, and pain over the threshold makes the whole society count as worse than pain under the threshold. This will mean that any number of people with pain X is better than one person with pain X + epsilon, where epsilon is very small but happens to push it over the threshold.
Alternately, you could say that the disutility of pain gradually changes, but that has other problems. I suggest you read up on the repugnant conclusion ( http://plato.stanford.edu/entries/repugnant-conclusion/ )--depending on exactly what you mean, what you suggest is similar to the proposed solutions, which don’t really work.
Personally, I choose C: torture 3^^^3 people for 3^^^3 years. Why? Because I can.
Ahem. My morality is based on maximizing average welfare, while also avoiding extreme individual suffering, rather than cumulative welfare.
So torturing one man for fifty years is not preferable to annoying any number of people.
This is different when the many are also suffering extremely, though—then it may be worthwhile to torture one even more to save the rest.
Trivial annoyances and torture cannot be compared in this quantifiable manner. Torture is not only suffering, but lost opportunity due to imprisonment, permanent mental hardship, activation of pain and suffering processes in the mind, and a myriad of other unconsidered things.
And even if the torture were ‘to have flecks of dust dropped in your eyes’, you still can’t compare a ‘torturous amount’ applied to one person to a substantial number dropped in the eyes of many people: we aren’t talking about CPU cycles here—we are trying to quantify qualifiables.
If you revised the question and stated exactly how the torture would affect the individual, and how they would react to it, and the same for each of the ‘dust in the eyes’ people (what if one goes blind? what of their mental capacity to deal with the hardship? what of the actual level of moisture in their eyes, and consequently the discomfort being felt?), then, maybe then, we could determine which was the worse outcome, and by how much.
There are simply too many assumptions we have to make in this mortal world to determine the answer to such questions: you might as well ask how many angels dance on the head of a pin. Or you could start more simply and ask: if you were to torture two people in exactly the same way, which one would suffer more, and by how much?
And you notice, I haven’t even started to think about the ethical side of the question...
Can you compare apples and oranges? You certainly don’t seem to have much trouble when you decide how to spend your money at the grocery store.
It was rather clear from the context that the “dust in the eye” was a very, very minor torture. People are not going blind. They are perfectly capable of dealing with it. It’s just not 3^^^3 times as minor as the torture.
If you were to torture two people in exactly the same way, they’d suffer about equally. Why do you imply that’s some sort of unanswerable question?
If you weren’t talking about the ethical side, what were you talking about? He wasn’t trying to compare everything about the two choices, just which was more ethical. It would be impossible if he didn’t limit it like that.
I’m pretty sure the question itself revolves around ethics, as far as I can tell the question is: given these 2 choices, which would you consider, ethically speaking, the ideal option?
I think this all revolves around one question: Is “disutility of dust speck for N people” = N*”disutility of dust speck for one person”?
This, of course, depends on the properties of one’s utility function.
How about this… Consider one person getting, say, ten dust specks per second for an hour vs. 10 × 60 × 60 = 36,000 people getting a single dust speck each.
This is probably a better way to probe the issue at its core. Which of those situations is preferable? I would probably consider the second. However, I suspect one person getting a billion dust specks in their eye per second for an hour would be preferable to 1000 people getting a million per second for an hour.
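For concreteness, the bookkeeping behind this comparison in a short sketch (the speck counts are the comment’s assumed numbers):

```python
# One person getting 10 specks/second for an hour,
# vs. 36,000 people getting one speck each.
specks_per_second = 10
seconds_per_hour = 60 * 60
concentrated = specks_per_second * seconds_per_hour  # specks hitting one person
dispersed_people = 36_000                            # one speck per person

# A linear utility function scores both cases identically;
# the intuition above says the concentrated case is worse,
# i.e. suffering is superlinear in specks per person.
assert concentrated == dispersed_people == 36_000
```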
Suffering isn’t linear in dust specks. Well, actually, I’m not sure subjective states in general can be viewed in a linear way. At least, if there is a potentially valid “linear qualia theory”, I’d be surprised.
But as far as the dust specks vs torture thing in the original question? I think I’d go with dust specks for all.
But that’s one person vs buncha people with dustspecks.
Oh, just had a thought. A less extreme yet quite related real world situation/question would be this: What is appropriate punishment for spammers?
Yes, I understand there are a few additional issues here that would make it more analogous to, say, a case where the potential torturee was planning on deliberately causing all those people a DSE (Dust Speck Event).
But still, the spammer issue gives us a more concrete version, involving quantities that don’t make our brains explode, so considering that may help work out the principles by which these sorts of questions can be dealt with.
The problem with spammers isn’t that they cause a singular dust speck event: it’s that they cause multiple dust speck events, repeatedly, to individuals in the population in question. It’s also a ‘tragedy of the commons’ question, since there is more than one spammer.
To respond to your question: What is appropriate punishment for spammers? I am sad to conclude that until Aubrey DeGray manages to conquer human mortality, or the singularity occurs, there is no suitable punishment for spammers.
After either of those, however, I would propose unblocking everyone’s toilets and/or triple shifts as a Fry’s Electronics floor lackey until the universal heat death, unless you have even >less< interesting suggestions.
If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects—would you do it? I certainly would. In fact, I would probably make the trade even if it were 2 or 3 times longer-lasting and of the same intensity. But something doesn’t make sense now… am I saying I would gladly double or triple the pain I feel over my whole life?
The upshot is that there are some very nonlinear phenomena involved with calculating amounts of suffering, as Psy-Kosh and others have pointed out. You may indeed move along one coordinate in “suffering-space” by 3^^^3 units, but it isn’t just absolute magnitude that’s relevant. That is, you cannot recapitulate the “effect” of fifty years of torturing with isolated dust specks. As the responses here make clear, we do not simply map magnitudes in suffering space to moral relevance, but instead we consider the actual locations and contours. (Compare: you decide to go for a 10-mile hike. But your enjoyment of the hike depends more on where you go, than the distance traveled.)
“If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects—would you do it? I certainly would.”
Hubris. You don’t know, can’t know, how that pain would/could be instrumental in processing external stimuli in ways that enable you to make better decisions.
“The sort of pain that builds character, as they say”.
The concept of processing ‘pain’ in all its forms is rooted very deep in humanity—get rid of it entirely (as opposed to modulating it as we currently do), and you run a strong risk of throwing the baby out with the bathwater, especially if you then have an assurance that your life will have no pain going forward. There’s a strong argument to be made for deference to traditional human experience in the face of the unknown.
Yes the answer is obvious. The answer is that this question obviously does not yet have meaning. It’s like an ink blot. Any meaning a person might think it has is completely inside his own mind. Is the inkblot a bunny? Is the inkblot a Grateful Dead concert? The right answer is not merely unknown, because there is no possible right answer.
A serious person—one who takes moral dilemmas seriously, anyway—must learn more before proceeding.
The question is an inkblot because too many crucial variables have been left unspecified. For instance, in order for this to be an interesting moral dilemma I need to know that it is a situation that is physically possible, or else analogous to something that is possible. Otherwise, I can’t know what other laws of physics or logic apply or don’t apply, and therefore can’t make an assessment. I need to know what my position is in this universe. I need to know why this power has been invested in me. I need to know the nature of the torture and who the person is who will be tortured. I need to consider such factors as what the torture may mean to other people who are aware of it (such as the people doing the torture). I need to know something about the costs and benefits involved. Will the person being tortured know they are being tortured? Or can it be arranged that they are born into the torture and consider it a normal part of their life? Will the person being tortured have volunteered to be tortured? Will the dust motes have peppered the eyes of all those people anyway? Will the torture have happened anyway? Will choosing torture save other people from being tortured?
It would seem that torture is bad. On the other hand, just being alive is a form of torture. Each of us has a Sword of Damocles hanging over us. It’s called mortality. Some people consider it torture when I keep telling them they haven’t finished asking their question...
The non-linear nature of ‘qualia’ and the difficulty of assigning a utility function to such things as ‘minor annoyance’ has been noted before. It seems to some insoluble. One solution, presented by Dennett in ‘Consciousness Explained’, is to suggest that there is no such thing as qualia or subjective experience; there are only objective facts. As Searle calls it, ‘consciousness denied’. With this approach it would (at least theoretically) be possible to objectively determine the answer to this question based on something like the number of ergs needed to fire the neurons that would represent the outcomes of the two different choices. The idea of which would be the more or less pleasant experience is therefore not relevant, as there is no subjective experience to be had in the first place. Of course I’m being sloppy here: the word ‘choice’ would have to be re-defined to include that each action is determined by the physical configuration of the brain, and that the chooser is in fact a fictional construct of that physical configuration. Otherwise, I admit that 3^^^3 people is not something I can easily contemplate, and that clouds my ability to think of an answer to this question.
Uh… If there’s no such thing as qualia, there’s no such thing as actual suffering, unless I misunderstand your description of Dennett’s views.
But if my understanding is correct, and those views were correct, then wouldn’t the answer be “nobody actually exists to care one way or another?” (Or am I sorely mistaken in interpreting that view?)
Regarding your example of income disparity: I might rather be born into a system with very unequal incomes, if, as in America (in my personal and biased opinion), there is a reasonable chance of upping my income through persistence and pluck. I mean hey, that guy with all that money has to spend it somewhere—perhaps he’ll shop at my superstore!
But wait, what does wealth mean? In the case where everyone has the same income, where are they spending their money? Are they all buying the same things? Is this a totalitarian state? An economy without disparity is pretty disturbing to contemplate, because it means no one is making an effort to do better than other people, or else no one can do better. Money is not being concentrated or funnelled anywhere. Sounds like a pretty moribund economy.
If it’s a situation where everyone always gets what they want and need, then wealth will have lost its conventional meaning, and no one will care whether one person is rich and another one isn’t. What they will care about is the success of their God, their sports teams, and their children.
I guess what I’m saying is that there may be no interesting way to simplify interesting moral dilemmas without destroying the dilemma or rendering it irrelevant to natural dilemmas.
If even one in a hundred billion of the people is driving and has an accident because of the dust speck and gets killed, that’s a tremendous number of deaths. If one in a hundred quadrillion of them survives the accident but is mangled and spends the next 50 years in pain, that’s also a tremendous amount of torture.
If one in a hundred decillion of them is working in a nuclear power plant and the dust speck makes him have a nuclear accident....
We just aren’t designed to think in terms of 3^^^3. It’s too big. We don’t habitually think much about one-in-a-million chances, much less one in a hundred decillion. But a hundred decillion is a very small number compared to 3^^^3.
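To make the notation concrete, here is a minimal sketch of Knuth’s up-arrow operation (the `up` helper is ours, written for illustration, not from the post):

```python
def up(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b in Knuth's up-arrow notation.

    n = 1 is ordinary exponentiation; each extra arrow iterates
    the previous operation b times.
    """
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up(a, n - 1, result)
    return result

assert up(3, 1, 3) == 27             # 3^3
assert up(3, 2, 3) == 7625597484987  # 3^^3 = 3^27
# up(3, 3, 3) would be 3^^^3: a power tower of 3s
# 7,625,597,484,987 levels tall -- far beyond any computer.
```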
That is an interesting argument (I’ve considered it before) though I think it misses the point of the thought experiment. As I understand it, it’s not about any of the possible consequences of the dust specks, but about specks as (very minor) intrinsically bad things themselves. It’s about whether you’re willing to measure the unpleasantness of getting a dust speck in your eye on the same scale as the unpleasantness of being tortured, as (vastly) different in degree rather than fundamentally different in kind.
I would say that it is pretty easy to think in terms of 3^^^3. Just assume that everything that could happen due to a dust speck in your eye, will happen.
How do you know that more accidents are caused than avoided by dust specks?
(Of course I realize I’m saying “you” to a 5-year-old comment but you get the picture.)
Douglas and Psy-Kosh: Dennett explicitly says that in denying that there are such things as qualia he is not denying the existence of conscious experience. Of course, Douglas may think that Dennett is lying or doesn’t understand his own position as well as Douglas does.
James Bach and J Thomas: I think Eliezer is asking us to assume that there are no knock-on effects in either the torture or the dust-speck scenario, and the usual assumption in these “which economy would you rather have?” questions is that the numbers provided represent the situation after all parties concerned have exerted whatever effort they can. (So, e.g., if almost everyone is described as destitute, then it must be a society in which escaping destitution by hard work is very difficult.) Of course I agree with both of you that there’s danger in this sort of simplification.
J Thomas: You’re neglecting that there might be some positive side-effects for a small fraction of the people affected by the dust specks; in fact, there is some precedent for this. The resulting average effect is hard to estimate, but (considering that dust specks seem mostly to add entropy to the thought processes of the affected persons) it would likely still be negative.
Copying g’s assumption that higher-order effects should be neglected, I’d take the torture. For each of the 3^^^3 persons, the choice looks as follows:
1.) A 1/(3^^^3) chance of being tortured for 50 years. 2.) A 1 chance of getting a dust speck.
I’d definitely prefer the former. That probability is so close to zero that it vastly outweighs the differences in disutility.
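The per-person gamble described above can be sketched with exact arithmetic. The disutility scale is a made-up assumption, and 3^^3 = 3^27 stands in for 3^^^3, since the real number is unrepresentable (and unimaginably larger, which only strengthens the comparison):

```python
from fractions import Fraction

N = 3 ** 27                            # stand-in for 3^^^3
torture_disutility = Fraction(10 ** 9)  # assumed: torture = 1e9 speck-equivalents
speck_disutility = Fraction(1)

expected_torture = torture_disutility / N  # option 1: a 1/N chance of torture
certain_speck = speck_disutility           # option 2: one guaranteed speck

# With any finite torture disutility, a large enough N makes
# the tiny-probability torture the better per-person bet:
assert expected_torture < certain_speck
```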
Hmm, tricky one.
Do I get to pick the person who has to be tortured?
As I read this I knew my answer would be the dust specks. Since then I have been mentally evaluating various methods for deciding on the ethics of the situation and have chosen the one that makes me feel better about the answer I instinctively chose.
I can tell you this though. I reckon I personally would choose max five minutes of torture to stop the dust specks event happening. So if the person threatened with 50yrs of torture was me, I’d choose the dust specks.
What if it were a repeatable choice?
Suppose you choose dust specks, say, 1,000,000,000 times. That’s a considerable amount of torture inflicted on 3^^^3 people. I suspect that you could find the number of times equivalent to torturing each of those 3^^^3 people for 50 years, and that number would be smaller than 3^^^3. In other words, choose the dust speck enough times, and more people would be tortured, effectually, for longer than if you chose the 50-year torture an equivalent number of times.
If that math is correct, I’d have to go with the torture, not the dust specks.
Likewise, if this were iterated 3^^^3+1 times (i.e., 3^^^3 plus the reader), it could easily be 50*3^^^3 (i.e., > 3^^^3+1) people tortured. The odds are that if it’s possible for you to make this choice, then, unless you have reason to believe otherwise, others may face it too, making this an implicit prisoner’s dilemma of sorts. On the other side, 3^^^3 specks could possibly crush you and/or your local cluster of galaxies into a black hole, so there’s that to consider if you value the life within meaningful distance of every one of those 3^^^3 people.
I’m not sure I follow your argument.
I’m going to assume that for a single person, 3^^3 dust specks = 50 years of torture. (My earlier figure seems wrong, but 3^^3 dust specks over 50 years is a little under 5,000 dust specks per second.) I’m going to ignore the +1 because these are big numbers already.
If this were iterated 3^^^3 times, then we have the choice between:
TORTURE: 3^^^3 people are each tortured for 50 years, once.
DUST SPECKS: 3^^^3 people are tortured for 50 years, repeated (3^^^3)/(3^^3)=3^(3^^3-3^3) times.
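The per-second figure assumed above checks out; a quick verification (365.25-day years assumed):

```python
# 3^^3 dust specks spread over 50 years of torture.
specks = 3 ** 27                   # 3^^3 = 7,625,597,484,987
seconds = 50 * 365.25 * 86_400     # ~50 years in seconds
rate = specks / seconds            # specks per second

assert 4_000 < rate < 5_000        # "a little under 5,000 per second"
```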
The probability that I’m the only person selected out of 3^^^3 for such a decision, p(i), is less than any reasonable estimate of how many people could be selected, imho. Let’s say well below 700 dB against. The chances are much greater that some proportion of those about to be dust-specked or tortured also gets this choice (p(k)). p(k)*3^^^3 > p(i) ⇒ 3^^^3 > p(i)/p(k) ⇒ true for any reasonable p(i)/p(k).
So this means that the effective number of dust particles given to each of us is going to be roughly (1-p(i))p(k)3^^^3.
I’m going to assume any amount of dust larger in mass than a few orders of magnitude above the Chandrasekhar limit (1e33 kg) is going to result in a black hole. I can even assume a significant error margin in my understanding of how black holes work, and the results do not change.
The smallest dust particle is probably a single hydrogen atom (really, everything resolves to hydrogen at small enough quantities, right?). 1 mol of hydrogen weighs about 1 gram. So (1-p(i)) p(k) 3^^^3 specks × (1 mol / 6e23 specks) × (1 gram/mol) × (1e-3 kg/g) × (1 black hole / 1e33 kg) = roughly (3^^^3)(~1e-730) = roughly 3^^^3 black holes.
ie 3^(3_1^3_2^3_3^...^3_7e13 −730) = roughly 3^(3_1^3_2^3_3^...^3_7e13)
ie 3_1^3_2^3_3^...^3_7e13 − 730 = roughly 3_1^3_2^3_3^...^3_7e13.
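The mass arithmetic can be sanity-checked in log space. This is a hedged sketch with assumed figures: one speck is one hydrogen atom (~1.67e-27 kg), the collapse threshold is the comment’s 1e33 kg, and 3^^4 = 3^(3^^3) serves as a vastly smaller stand-in for 3^^^3, whose logarithm is itself uncomputable:

```python
import math

# log10 of the speck count: 3^^4 = 3**(3**27), so
# log10(3^^4) = (3**27) * log10(3) ~ 3.6e12.
log10_specks = (3 ** 27) * math.log10(3)
log10_atom_kg = math.log10(1.67e-27)       # mass of one hydrogen atom
log10_mass_kg = log10_specks + log10_atom_kg

# Even the stand-in dwarfs the 1e33 kg collapse threshold:
assert log10_mass_kg > 33
```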
In conclusion, I think at this level I would choose ‘cancel’ / ‘default’ / ‘roll a die and determine the choice randomly / not choose’, BUT would woefully update my concept of the size of the universe to contain enough mass to support even a reasonably infinitesimal probability of some proportion of 3^^^3 specks of dust, and 3^^^3 people or at least some reasonable proportion thereof.
The question I have now is: how is our model of the universe to update, given this moral dilemma? What is the new radius of the universe given this situation? It can’t be big enough for 3^^^3 dust specks piled on the edge of our universe outside of our light cone somewhere. Either way, I think the new radius ought to be termed the “Yudkowsky Radius”.
I don’t really care what happens if you take the dust speck literally; the point is to exemplify an extremely small disutility.
I suppose you could view the utility as a meaningful object in this frame and abstract away the dust, too, but in the end the dust-utility system is going to encompass both anyway, so solving the problem on either level is going to solve it on both.
Kyle wins.
Absent using this to guarantee the nigh-endless survival of the species, my math suggests that 3^^^3 beats anything. The problem is that the speck rounds down to 0 for me.
There is some minimum threshold below which it just does not count, like saying, “What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?” I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0. For the speck, I am going to blink in the next few seconds anyway.
That in no way addresses the intent of the question, since we can just increase it to the minimum that does not round down. Being poked with a blunt stick? Still hard, since I think every human being would take one stick over some poor soul being tortured. Do I really get to be the moral agent for 3^^^3 people?
As others have said, our moral intuitions do not work with 3^^^3.
Why would that round down to zero? That’s a lot more people having cancer than getting nuked!
(It would be hilarious if Zubon could actually respond after almost a decade)
Wow. The obvious answer is TORTURE, all else equal, and I’m pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?
Given that human brains are known not to be able to intuitively process even moderately large numbers, I’d say the question can’t meaningfully be asked—our ethical modules simply can’t process it. 3^^^3 is too large—WAY too large.
I’m unconvinced that the number is too large for us to think clearly. Though it takes some machinery, humans reason about infinite quantities all the time and arrive at meaningful conclusions.
My intuitions strongly favor the dust speck scenario. Even if we forget 3^^^3 and just say that an infinite number of people will experience the speck, I’d still favor it over the torture.
Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way. The cumulative effect when placed on one person is far greater than the sum of many tiny nuisances experienced by many. Whereas small irritants such as a dust mote do not cause “suffering” in any standard sense of the word, the sum total of those motes concentrated at one time and placed into one person’s eye could cause serious injury or even blindness. Dispersing the dust (either over time or across many people) mitigates the effect. If the dispersion is sufficient, there is actually no suffering at all. To extend the example, you could divide the dust mote into even smaller particles, until each individual would not even be aware of the impact.
So the question becomes, would you rather live in a world with little or no suffering (caused by this particular event) or a world where one person suffers badly, and those around him or her sit idly by, even though they reap very little or no benefit from the situation?
The notion of shifting human suffering onto one unlucky individual so that the rest of society can avoid minor inconveniences is morally reprehensible. That (I hope) is why no one has stood up and shouted yeay for torture.
The problem with this claim is that you can construct a series of overlapping comparisons involving experiences that differ but slightly in how painful they are. Then, provided that the series has sufficiently many elements, you’ll reach the conclusion that an experience of pain, no matter how intense, is preferable to arbitrarily many instances of the mildest pain imaginable.
(Strictly speaking, you could actually avoid this conclusion by assuming that painful experiences of a given intensity have diminishing marginal value and that this value converges to a finite quantity. Then if the limiting value of a very mild pain is less than the value of a single extremely painful experience, the continuity argument wouldn’t work. However, I see no independent motivation for embracing a theory of value of this sort. Moreover, such a theory would have incredible implications, e.g., that to determine how bad someone’s pain is one needs to consider whether sentient beings have already experienced pains of that intensity in remote regions of spacetime.)
Yeah, this is a common attempt to avoid this particular repugnant conclusion. This approach leads to conclusions like: 3^^^3 mildly stabbed toes are better than a single moderately stabbed one. (Because if not, we can construct an unbroken chain of comparable pain experiences from specks to torture.)
The motivation is there, to make dust specks and torture incomparable. Unfortunately, this approach doesn’t work, as it results in infinitely many arbitrarily defined discontinuities.
The obvious answer is TORTURE, all else equal, and I’m pretty sure this is obvious to Eliezer too.
That is the straightforward utilitarian answer, without any question. However, it is not the common intuition, and even if Eliezer agrees with you he is evidently aware that the common intuition disagrees, because otherwise he would not bother blogging it. It’s the contradiction between intuition and philosophical conclusion that makes it an interesting topic.
Robin’s answer hinges on “all else being equal.” That condition can tie up a lot of loose ends, it smooths over plenty of rough patches. But those ends unravel pretty quickly once you start to consider all the ways in which everything else is inherently unequal. I happen to think the dust speck is a 0 on the disutility meter, myself, and 3^^^3*0 disutilities = 0 disutility.
I believe that ideally speaking the best choice is the torture, but pragmatically, I think the dust speck answer can make more sense. Of course it is more intuitive morally, but I would go as far as saying that the utility can be higher for the dust specks situation (and thus our intuition is right). How? the problem is in this sentence: “If neither event is going to happen to you personally,” the truth is that in the real world, we can’t rely on this statement. Even if it is promised to us or made into a law, this type of statements often won’t hold up very long. Precedents have to be taken into account when we make a decision based on utility. If we let someone be tortured now, we are building a precedent, a tradition of letting people being tortured. This has a very low utility for people living in the affected society. This is well summarized in the saying “What goes around comes around”.
If you take the strict idealistic situation described, torture is the best choice. But if you instead deem the situation completely unrealistic and pick a similar one by simply not giving 100% reliability to the sentence “If neither event is going to happen to you personally,” the best choice can become the dust specks, depending on how likely you believe it is that a tradition of torture will be established. (And IMO traditions of torture and violence are the kind of thing that spreads easily, as they stimulate resentment and hatred in the groups most affected.) The torture situation has much risk of getting worse, but not the dust speck situation.
The scenario might have been different if torture was replaced by a kind of suffering that is not induced by humans. Say… an incredibly painful and long (but not contagious) illness.
Is it better to have the dust specks everywhere all the time or to have the existence of this illness once in history?
Torture. See Norcross: http://www.ruf.rice.edu/~norcross/ComparingHarms.pdf
Your link is 404ing. Is http://spot.colorado.edu/~norcross/Comparingharms.pdf the same one?
Here’s the link (both links above are dead).
Here’s the latest working link (all three above are dead)
Also, here’s an archive in case that one ever breaks!
Robin, could you explain your reasoning. I’m curious.
Humans get barely noticeable “dust speck equivalent” events so often in their lives that the number of people in Eliezer’s post is irrelevant; it’s simply not going to change their lives, even if it’s a gazillion lives, even with a number bigger than Eliezer’s (even considering the “butterfly effect”, you can’t say if the dust speck is going to change them for the better or worse—but with 50 years of torture, you know it’s going to be for the worse).
Subjectively for these people, it’s going to be lost in the static and probably won’t even be remembered a few seconds after the event. Torture won’t be lost in static, and it won’t be forgotten (if survived).
The alternative to torture is so mild and inconsequential, even if applied to a mind-boggling number of people, that it’s almost like asking: Would you rather torture that guy or not?
@Robin,
“But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet.”
I thought that Sebastian Hagen and I had said it. Or do you think we gave weasel answers? Mine was only contingent on my math being correct, and I thought his was similarly clear.
Perhaps I was unclear in a different way. By asking if the choice was repeatable, I didn’t mean to dodge the question; I meant to make it more vivid. Moral questions are asked in a situation where many people are making moral choices all the time. If dust-speck displeasure is additive, then we should evaluate our choices based on their potential aggregate effects.
Essentially, it’s a same-ratio problem, like showing that 6:4::9:6, because 6x3=9x2 and 4x3=6x2. If the aggregate of dust-specking can ever be greater than the equivalent aggregate of torturing, then it is always greater.
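The cross-multiplications behind the same-ratio claim above check out:

```python
# Verifying 6:4 :: 9:6, as stated in the comment.
assert 6 * 3 == 9 * 2 == 18
assert 4 * 3 == 6 * 2 == 12
assert 6 * 6 == 4 * 9  # equivalently, the ratios 6/4 and 9/6 are equal
```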
Hmm, thinking some more about this, I can see another angle (not the suffering angle, but the “being prudent about unintended consequences” angle):
If you had the choice between very very slightly changing the life of a huge number of people or changing a lot the life of only one person, the prudent choice might be to change the life of only one person (as horrible as that change might be).
Still, with the dust speck we can’t really know if the net final outcome will be negative or positive. It might distract people who are about to have genius ideas, but it might also change chains of events that would lead to bad things. Averaged over so many people, it’s probably going to stay very close to neutral, positive or negative. The torture of one person might also look very close to neutral if averaged with the other 3^^^3 people, but we know that it’s going to be negative. Hmm..
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
The square of the number of milliseconds in 50 years is about 2.5×10^24.
Would you rather one person be tortured for a millisecond (then no ill effects), or that 3^^^3/10^24 people get a dust speck per second for 50 centuries?
OK, so the utility/effect doesn’t scale when you change the times. But even if each 1% added dust/torture time made things ten times worse, when you reduce the dust-speckled population to reflect that it’s still countless universes worth of people.
I’m with Tomhs. The question has less value as a moral dilemma than as an opportunity to recognize how we think when we “know” the answer. I intentionally did not read the comments last night so I could examine my own thought process, and tried very hard to hold an open mind (my instinct was dust). It’s been a useful and interesting experience. Much better than the brain teasers, which I can generally get because I’m on heightened alert when reading El’s posts. Here being on alert simply allowed me to try to avoid immediately giving in to my bias.
Averaging utility works only when the law of large numbers starts to play a role. It’s a good general policy, as stuff subject to it happens all the time, often enough to give sensible results over the human/civilization lifespan. So, if Eliezer’s experiment is a singular event and similar events don’t happen frequently enough, the answer is 3^^^3 specks. Otherwise, torture (since in that case, similar frequent-enough choices would lead to a tempest of specks in everyone’s eye, which is about 3^^^3 times worse than 50 years of torture, for each and every one of them).
Benquo, your first answer seems equivocal, and so did Sebastian’s on a first reading, but now I see that it was not.
Torture,
Consider three possibilities:
(a) A dust speck hits you with probability one, (b) You face an additional probability 1/(3^^^3) of being tortured for 50 years, (c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.
Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
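This argument can be put in expected-value terms. A sketch, in which every magnitude is an assumption and a merely astronomical N stands in for 3^^^3 (which no machine can represent):

```python
from fractions import Fraction

# Stand-in magnitudes (assumptions, not from the comment):
N = 10**30                         # plays the role of 3^^^3
u_speck = Fraction(-1)             # disutility of one dust speck (arbitrary unit)
u_blink = Fraction(-1, 10)         # a deliberate blink is milder than a speck
u_torture = Fraction(-10**9)       # 50 years of torture, in the same units

ev_a = u_speck                     # (a) certain dust speck
ev_b = Fraction(1, N) * u_torture  # (b) 1/N chance of torture
ev_c = u_blink                     # (c) certain extra blink

# With any finite torture disutility, dividing by N makes (b) the least bad.
assert ev_b > ev_c > ev_a
```

The ordering is insensitive to the assumed numbers: as long as the torture’s disutility is finite, the 1/N factor dominates.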
You know, that actually persuaded me to override my intuitions and pick torture over dust specks.
You don’t even have to go that far. Replace “dust specks” with “the inconvenience of not going outside the house” and “tiny chance of torture” with “tiny chance that being outside the house will lead to you getting killed”.
Yeah, I understood the point.
There isn’t any right answer. Answers to what is good or bad are a matter of taste, to borrow from Nietzsche.
To me the example has a messianic quality. One person suffers immensely to save others from suffering. Does the sense that there is a “right” answer come from a Judeo-Christian sense of what is appropriate? Is this a sort of bias in line with biases towards expecting facts to conform to a story?
Also, this example suggests to me that the value pluralism of Cowen makes much more sense than some reductive approach that seeks to create one objective measure of good and bad. One person might seek to reduce instances of illness, another to maximize reported happiness, another to maximize a personal sense of beauty. IMO, there isn’t a judge who will decide who is right and who is wrong, and the decisive factor is who can marshal the power to bring about his will, as unsavory as that might be (unless your side is winning).
Why is this a serious question? Given the physical unreality of the situation (the putative existence of 3^^^3 humans and the ability to actually create the option in the physical universe), why is this question taken seriously while something like “is it better to kill Santa Claus or the Easter Bunny?” is considered silly?
Fascinating, and scary, the extent to which we adhere to established models of moral reasoning despite the obvious inconsistencies. Someone here pointed out that the problem wasn’t sufficiently defined, but then proceeded to offer examples of objective factors that would appear necessary to evaluation of a consequentialist solution. Robin seized upon the “obvious” answer that any significant amount of discomfort, over such a vast population, would easily dominate, with any conceivable scaling factor, the utilitarian value of the torture of a single individual. But I think he took the problem statement too literally; the discomfort of the dust mote was intended to be vanishingly small, over a vast population, thus keeping the problem interesting rather than “obvious.”
But most interesting to me is that no one pointed out that fundamentally, the assessed goodness of any act is a function of the values (effective, but not necessarily explicit) of the assessor. And assessed morality as a function of group agreement on the “goodness” of an act, promoting the increasingly coherent values of the group over increasing scope of expected consequences.
Now the values of any agent will necessarily be rooted in an evolutionary branch of reality, and this is the basis for increasing agreement as we move toward the common root, but this evolving agreement in principle on the direction of increasing morality should never be considered to point to any particular destination of goodness or morality in any objective sense, for that way lies the “repugnant conclusion” and other paradoxes of utilitarianism.
Obvious? Not at all, for while we can increasingly converge on principles promoting “what works” to promote our increasingly coherent values over increasing scope, our expression of those values will increasingly diverge.
The hardships experienced by a man tortured for 50 years cannot compare to a trivial experience massively shared by a large number of individuals—even on the scale that Eli describes. There is no accumulation of experiences, and it cannot be conflated into a larger meta dust-in-the-eye experience; it has to be analyzed as a series of discrete experiences.
As for larger social implications, the negative consequence of so many dust specked eyes would be negligible.
Wow. People sure are coming up with interesting ways of avoiding the question.
Eliezer wrote “Wow. People sure are coming up with interesting ways of avoiding the question.”
I posted earlier on what I consider the more interesting question of how to frame the problem in order to best approach a solution.
If I were to simply provide my “answer” to the problem, with the assumption that the dust in the eyes is likewise limited to 50 years, then I would argue that the dust is to be preferred to the torture, not on a utilitarian basis of relative weights of the consequences as specified, but on the bigger-picture view that my preferred future is one in which torture is abhorrent in principle (noting that this entails significant indirect consequences not specified in the problem statement.)
Eliezer, are you suggesting that declining to make up one’s mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?
As for your link to the “motivated stopping” article, I don’t quite see why declining to decide on this is any more “stopping” than choosing a definite one of the options. Or are you suggesting that it’s an instance of motivated continuation? Perhaps it is, but (as you said in that article) the problem with excessive “continuation” is that it can waste resources and miss opportunities. I don’t see either of those being an issue here, unless you’re actually threatening to do one of those two things—in which case I declare you a Pascal’s mugger and take no notice.
What happens if there aren’t 3^^^3 distinct people to get dust specks? Do the specks carry over, such that person #1 gets a second speck, and so on? If so, you would elect to have the person tortured for 50 years, for surely the alternative is to fill our universe with dust and annihilate all cultures and life.
Robin, of course it’s not obvious. It’s only an obvious conclusion if the global utility function from the dust specks is an additive function of the individual utilities, and since we know that utility functions must be bounded to avoid Dutch books, we know that the global utility function cannot possibly be additive—otherwise you could break the bound by choosing a large enough number of people (say, 3^^^3).
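The boundedness point can be illustrated with a toy aggregator (the functional form and every number below are my assumptions, chosen only to show the shape of the argument):

```python
import math

# If global utility were additive, n identical harms of size e would cost
# n * e, which exceeds any fixed bound for large enough n. A bounded
# aggregator must instead saturate.
e = 1e-12      # assumed disutility of one speck
bound = 1.0    # assumed bound on total disutility

def additive(n):
    return n * e

def bounded(n):
    # one possible bounded aggregator: exponential saturation toward `bound`
    return bound * (1.0 - math.exp(-n * e / bound))

assert additive(10**13) > bound    # the additive total blows past the bound
assert bounded(10**13) < bound     # the bounded total approaches but stays under
assert bounded(10**100) <= bound   # ...no matter how large n gets
```

Any aggregator with this saturating shape makes the specks’ total disutility stop growing long before 3^^^3 is reached, which is the non-obviousness being claimed.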
From a more metamathematical perspective, you can also question whether 3^^^3 is a number at all. It’s perfectly straightforward to construct a perfectly consistent mathematics that rejects the axiom of infinity. Besides the philosophical justification for ultrafinitism (i.e., infinite sets don’t really exist), these theories correspond to various notions of bounded computation (such as logspace or polytime). This is a natural requirement if we want moral judgements to be made quickly enough to be relevant to decision making—and that rules out seriously computing with numbers like 3^^^3.
I once read the following story about a Russian mathematician. I can’t find the source right now.
Cast: Russian mathematician RM, other guy OG
RM: “Truly large numbers don’t really exist in the same sense that small ones do.”
OG: “That’s ridiculous. Consider the powers of two. Does 2^1 exist?”
RM: “Yes.”
OG: “OK, does 2^2 exist?”
RM: “.Yes.”
OG: “So you’d agree that 2^3 exists?”
RM: “...Yes.”
OG: “How about 2^4?”
RM: “.......Yes.”
OG: “So this is silly. Where would you ever draw the boundary?”
RM: “..............................................................................................................................................”
Eliezer, are you suggesting that declining to make up one’s mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?
Yes, I am.
Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass.
Regarding (2), whenever a tester finds a user input that crashes your program, it is always bad—it reveals a flaw in the code—even if it’s not a user input that would plausibly occur; you’re still supposed to fix it. “Would you kill Santa Claus or the Easter Bunny?” is an important question if and only if you have trouble deciding. I’d definitely kill the Easter Bunny, by the way, so I don’t think it’s an important question.
Followup dilemmas:
For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?
For those who would pick TORTURE, what about Vassar’s universes of agonium? Say a googolplex-persons’ worth of agonium for a googolplex years.
Unless the 3^^^3 people are forming a hive mind, I pick the specks.
I’m terribly inexperienced in translating ethical preferences into money, but in that scenario I wouldn’t pay the penny. A penny can be better used in buying more utility than removing specks from 3^^^3 eyeballs.
Fascinating question. No matter how small the negative utility in the dust speck, multiplying it with a number such as 3^^^3 will make it way worse than torture. Yet I find the obvious answer to be the dust speck one, for reasons similar to what others have pointed out—the negative utility rounds down to zero.
But that doesn’t really solve the problem, for what if the harm in question was slightly larger? At what point does it cease rounding down? I have no meaningful criteria to give for that one. Obviously there must be a point where it does cease doing so, for it certainly is much better to torture one person for 50 years than 3^^^3 people for 49 years.
It is quite counterintuitive, but I suppose I should choose the torture option. My other alternatives would be to reject utilitarianism (but I have no better substitutes for it) or to modify my ethical system so that it solves this problem, but I currently cannot come up with an unproblematic way of doing so.
Still, I can’t quite bring myself to do so. I choose specks, and admit that my ethical system is not consistent yet. (Not that it would be a surprise—I’ve noticed that all my attempts at building entirely consistent ethical systems tend to cause unwanted results at one point or the other.)
For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?
A single penny to avoid one dust speck, or to avoid 3^^^3 dust specks? No to the first one. For the second one, it depends on how often they occurred—if I somehow could live for 3^^^3 years, getting one dust speck in my eye per year, then no. If they actually inconvenienced me, then yes—a penny is just a penny.
“Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass.”
This goes to the crux of the matter: to the extent the future is uncertain, it is better to decide based on principles (representing wisdom encoded via evolutionary processes over time) rather than on the flat basis of expected consequences.
Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?
Is the question significantly changed by this rephrasing? It makes SPECKS the default choice, and it changes 3^^^3 to “all.” Are we better able to process “all” than 3^^^3, or can we really process “all” at all? Does it change your answer if we switch the default?
Would you force every qualia-experiencing being who will ever exist to blink one additional time to save one person from being horribly tortured for fifty years without hope or rest?
> For those who would pick TORTURE, what about Vassar’s universes of agonium? Say a googolplex-persons’ worth of agonium for a googolplex years.
If you mean would I condemn all conscious beings to a googolplex of torture to avoid universal annihilation from a big “dust crunch” my answer is still probably yes. The alternative is universal doom. At least the tortured masses might have some small chance of finding a solution to their problem at some point. Or at least a googolplex years might pass leaving some future civilization free to prosper. The dust is absolute doom for all potential futures.
Of course, I’m assuming that 3^^^3 conscious beings are unlikely to ever exist and so that dust would be applied over and over to the same people causing the universe to be filled with dust. Maybe this isn’t how the mechanics of the problem work.
> Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?
That’s assuming you’re interpreting the question correctly. That you aren’t dealing with an evil genie.
You never said we couldn’t choose who specifically gets tortured, so I’m assuming we can make that selection. Given that, the once agonizingly difficult choice is made trivially simple. I would choose 50 years of torture for the person who made me make this decision.
Since I chose the specks—no, I probably wouldn’t pay a penny; avoiding the speck is not even worth the effort to decide to pay the penny or not. I would barely notice it; it’s too insignificant to be worth paying even a tiny sum to avoid.
I suppose I too am “rounding down to zero”; a more significant harm would result in a different answer.
You’re avoiding the question. What if a penny were automatically paid for you each time in the future to avoid a dust speck floating into your eye? The question is whether the dust speck is worth at least a negative penny of disutility. For me, I would say yes.
“For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?”
To avoid all the dust specks, yeah, I’d pay a penny and more. Not a penny per speck, though ;)
The reason is to avoid having to deal with the “unintended consequences” of being responsible for that very very small change over such a large number of people. It’s bound to have some significant indirect consequences, both positive and negative, on the far edges of the bell curve… the net impact could be negative, and a penny is little to pay to avoid responsibility for that possibility.
The first thing I thought when I read this question was that the dust specks were obviously preferable. Then I remembered that my intuition likes to round 3^^^3 down to something around twenty. Obviously, the dust specks are preferable to the torture for any number at all that I have any sort of intuitive grasp over.
But I found an argument that pretty much convinced me that the torture was the correct answer.
Suppose that instead of making this choice once, you will be faced with the same choice 10^17 times for the next fifty years (This number was chosen so that it was more than a million per second.) If you have a problem imagining the ability to make more than a million choices per second, imagine that you have a dial in front of you which goes from zero to a 10^17. If you set the dial to n, then 10^17-n people will get tortured starting now for the next fifty years, and n dust specks will fly into the eyes of each of 3^^^3 people during the next fifty years.
The dial starts at zero. For each unit that you turn the dial up, you are saving one person from being tortured by putting a dust speck in the eyes of each of the 3^^^3 people, the exact choice presented.
So, if you thought the correct answer was the dust specks, you’d turn the dial from zero to one right? And then you’d turn it from one to two, right?
But, if you turned the dial all the way up to 10^17, you’d effectively be rubbing the corneas of the 3^^^3 people with sandpaper for fifty years (of course, their corneas would wear through, and their eyes would come apart under that sort of abrasion. It would probably take less than a million dust specks per second to do that, but let’s be conservative and make them smaller dust specks.) Even if you don’t count the pain involved, they’d be blind forever. How many people would you blind in order to save one person from being tortured for fifty years? You probably wouldn’t blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture.
So if your answer was the dust specks, you’d either end up turning the knob all the way up to 10^17, or you’d have to stop somewhere, because there’s no escaping that in this scenario, there’s a real dial in front of you, and you have to turn it to some n between 0 and a 10^17.
If you left the dial on, say, 10^10, I’d ask: “Tell me, what is so special about the difference between hitting someone with 10^10 dust specks versus hitting them with 10^10+1, that wasn’t special about the difference between hitting them with zero versus one?” If anything, the more dust specks there are, the less of a difference one more would make.
There are easily 10^17 continuous gradations between no inconvenience and having one’s eyes turned to pulp, and I don’t really see what would make any of them terribly different from each other. Yet n=0 is obviously preferable to n=10^17, and so each individual increment of n must be bad.
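The no-principled-stopping-point step of the dial argument can be made concrete under a simple additive model (a sketch: every number below is an assumption, and N_PEOPLE merely stands in for 3^^^3):

```python
# Toy additive model of the dial. All magnitudes are assumed for illustration.
N_PEOPLE = 10**12     # stand-in for 3^^^3, which no machine can hold
U_TORTURE = -10**6    # assumed disutility of one 50-year torture
U_SPECK = -1          # assumed disutility of one extra speck, per person
MAX_N = 10**17

def total_utility(n):
    """Dial at n: (MAX_N - n) people tortured, n specks for each of N_PEOPLE."""
    return (MAX_N - n) * U_TORTURE + n * N_PEOPLE * U_SPECK

# Under additivity, every one-unit turn of the dial has exactly the same
# marginal value: there is nothing special about 10^10 versus 10^10 + 1.
deltas = {total_utility(n + 1) - total_utility(n) for n in range(5)}
assert len(deltas) == 1
```

This is exactly why a consistent additive chooser must either leave the dial at 0 or turn it all the way up; the argument in the comment is about which endpoint the constant marginal value favors.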
This has nothing to do with the original question. You rephrased it so that it now asks if you’d rather torture one person or 3^^^3. Of course you rather torture one person than 3^^^3. That does not equal torturing one person or that 3^^^3 people get dust specks in their eyes for a fraction of a second.
The reasoning here seems very broken to me (I have no opinion on the conclusion yet):
Look at a version of the reverse dial. Say that you start with 3^^^3 people having 1,000,000 dust specks a second rubbed in their eyes, and 0 people tortured. Each time you turn the dial up by 1, one person is moved over from the “speck in the eye” list to the “tortured for 50 years” list, and the frequency is reduced by 1 speck/second. Would you turn the dial up to 1,000,000?
So because there is a continuum between the right answer (lots of torture) and the wrong answer (3^^^3 horribly blinded people), you would rather blind those people?
Nah, he was pretty clearly challenging the use of induction in the above post.
The larger problem is assuming linearity in an obviously nonlinear situation—this also explains why the induction appears to work either way. Applying 1 pound of force to someone’s kneecap is simply not 1/10th as bad as applying 10 pounds of force to someone’s kneecap.
″… whenever a tester finds a user input that crashes your program, it is always bad—it reveals a flaw in the code—even if it’s not a user input that would plausibly occur; you’re still supposed to fix it. “Would you kill Santa Claus or the Easter Bunny?” is an important question if and only if you have trouble deciding. I’d definitely kill the Easter Bunny, by the way, so I don’t think it’s an important question.”
I write code for a living; I do not claim that it crashes the program. Rather the answer is irrelevant as I don’t think that the question is important or insightful regarding our moral judgements since it lacks physical plausibility. BTW, since one can think of God as “Santa Claus for grown-ups”, the Easter Bunny lives.
By “pay a penny to avoid the dust specks” I meant “avoid all dust specks”, not just one dust speck. Obviously for one speck I’d rather have the penny.
So if someone would pay a penny, they should pick torture if it were 3^^^^3 people getting dust specks, which makes it suspect that they understood 3^^^3 in the first place.
what about Vassar’s universes of agonium? Say a googolplex-persons’ worth of agonium for a googolplex years.
To reduce suffering in general rather than your own (it would be tough to live with), bring on the coddling grinders. (10^10^100)^2 is a joke next to 3^^^3.
Having said that, it depends on the qualia-experiencing population of all existence compared to the numbers affected, and whether you change existing lives or make new ones. If only a few googolplex-squared people-years exist anyway, I vote dust.
I also vote to kill the bunny.
For those who would pick TORTURE, what about Vassar’s universes of agonium? Say a googolplex-persons’ worth of agonium for a googolplex years.
Torture, again. From the perspective of each affected individual, the choice becomes:
1.) A (10^(10^100))/(3^^^3) chance of being tortured for 10^(10^100) years.
2.) A dust speck with probability 1.
(or very slightly different numbers if the 10^(10^100) people exist in addition to the 3^^^3 people; the difference is too small to be noticeable)
I’d still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there’s no way I can tell the difference without getting a larger universe for storing my memory first.
Eliezer, it’s the combination of (1) totally untrustworthy brain machinery and (2) no immediate need to make a choice that I’m suggesting means that withholding judgement is reasonable. I completely agree that you’ve found a bug; congratulations, you may file a bug report and add it to the many other bug reports already on file; but how do you get from there to the conclusion that the right thing to do is to make a choice between these two options?
When I read the question, I didn’t go into a coma or become psychotic. I didn’t even join a crazy religion or start beating my wife. If for some reason I actually had to make such a choice, I still wouldn’t go nuts. So I think analogies with crashing software are inappropriate. (Again, I don’t deny that there’s a valid bug report. I’m just questioning its severity.)
So what we have here is an architectural problem with the software, which produces a failure mode in which input radically different from any that will ever actually be supplied provokes a small user-interface glitch. It would be nice to fix it, but it doesn’t strike me as unreasonable if it doesn’t make it through some people’s triage.
(Santa Claus versus the Easter Bunny is much nearer to being a realistic question, and so far as I can tell there isn’t anything in my mental machinery that fundamentally isn’t equipped to consider it. Kill the bunny.)
Let’s suppose we measure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 being no pain and 1 being the maximum amount of pain perceivable. To calculate the pp of an event, assign a value to the pain, say p, and then multiply it by the number of people who will experience the pain, n. So for the torture case, assume p = 1, then:
torture: 1*1 = 1 pp
For the speck-in-eye case, suppose it causes the least amount of pain greater than no pain possible. Denote this by e, and assume the dust speck causes e amount of pain to each of the 3^^^3 people. Then if e < 1/3^^^3
speck: 3^^^3 * e < 1 pp
and if e > 1/3^^^3
speck: 3^^^3 * e > 1 pp
So assuming our moral calculus is to always choose whichever option generates the least pp, we need only ask if e is greater than or less than 1/n.
If you’ve been paying attention, I now have an out to give no answer: we don’t know what e is, so I can’t decide (at least not based on pp). But I’ll go ahead and wager a guess. Since 1/3^^^3 is very small, I think it most likely that any pain-sensing system of any present or future intelligence will have e > 1/3^^^3, so I must choose torture, because torture costs 1 pp but the specks cost more than 1 pp.
This doesn’t feel like what, as a human, I would expect the answer to be. I want to say don’t torture the poor guy and all the rest of us will suffer the speck so he need not be tortured. But I suspect this is the human inability to deal with large numbers, because I think about how I would be willing to accept a speck so the guy wouldn’t be tortured, since e pp < 1 pp, and every other individual, supposing they were pp-fearing people, would make the same short-sighted choice. But the net cost would be to distribute more pain with the specks than the torture ever would.
Weird how the human mind can find a logical answer and still expect a nonlogical answer to be the truth.
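The pp calculus above is simple enough to implement directly. In this sketch, the value of e and the stand-in for 3^^^3 are both assumptions (the comment deliberately leaves e unknown):

```python
def pain_points(pain_per_person, n_people):
    """Total pp: per-person pain in [0, 1] times the number of people affected."""
    return pain_per_person * n_people

TORTURE_PAIN = 1.0   # maximum perceivable pain, per the scale above
e = 1e-20            # assumed per-speck pain; the comparison hinges on e vs 1/n
n = 10**30           # stand-in for 3^^^3, which no computer can hold

torture_pp = pain_points(TORTURE_PAIN, 1)
speck_pp = pain_points(e, n)

# Here e > 1/n, so on this calculus the specks outweigh the torture.
assert speck_pp > torture_pp
```

Flipping the assumption to e < 1/n (say e = 1e-40 with the same n) reverses the inequality, which is exactly the "we don’t know what e is" escape hatch the comment mentions.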
“Wow. People sure are coming up with interesting ways of avoiding the question.”
My response was a real request for information—if this is a pure utility test, I would select the dust specks. If this were done to a complex, functioning society, adding dust specks into everyone’s eyes would disrupt a great deal of important stuff—someone would almost certainly get killed in an accident due to the distraction, even on a planet with only 10^15 people and not 3^^^3.
Eliezer, in your response to g, are you suggesting that we should strive to ensure that our probability distribution over possible beliefs sum to 1? If so, I disagree: I don’t think this can be considered a plausible requirement for rationality. When you have no information about the distribution, you ought to assign probabilities uniformly, according to Laplace’s principle of indifference. But the principle of indifference only works for distributions over finite sets. So for infinite sets you have to make an arbitrary choice of distribution, which violates indifference.
“For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?”
Yes. Note that, for the obvious next question, I cannot think of an amount of money large enough such that I would rather keep it than use it to save a person from torture. Assuming that this is post-Singularity money which I cannot spend on other life-saving or torture-stopping efforts.
“You probably wouldn’t blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture.”
This is cheating, to put it bluntly- my utility function does not assign the same value to blinding someone and putting six billion dust specks in everyone’s eye, even though six billion specks are enough to blind people if you force them into their eyes all at once.
“I’d still take the former. (10^(10^100))/(3^^^3) is still so close to zero that there’s no way I can tell the difference without getting a larger universe for storing my memory first.”
The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.
People are being tortured, and it wouldn’t take too much money to prevent some of it. Obviously, there is already a price on torture.
My algorithm goes like this:
there are two variables, X and Y.
Adding a single additional dust speck to a person’s eye over their entire lifetime increases X by 1 for every person this happens to.
A person being tortured for a few minutes increases Y by 1.
I would object to most situations where Y is greater than 1. But I have no preferences at all with regard to X.
See? Dust specks and torture are not the same. I do not lump them together as “disutility”. To do so seems to me a preposterous oversimplification. In any case, it has to be argued that they are the same. If you assume they’re the same, then you’re just assuming the torture answer when you state the question—it’s not a problem of ethical philosophy but a problem of addition.
I am not convinced that this question can be converted into a personal choice where you face the decision of whether to take the speck or a 1/3^^^3 chance of being tortured. I would avoid the speck and take my chances with torture, and I think that is indeed an obvious choice.
I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.
The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.
1/3^^^3 chance of being tortured… If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.
That wouldn’t make it inevitable. You could get away with it, but then you could get multiple tortures. Rolling 6 dice often won’t get exactly one “1”.
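The reply’s point (that repeating a 1/N gamble N times makes torture likely but not inevitable) can be checked numerically; the limiting probability is 1 − 1/e ≈ 0.632, not 1. A sketch with small stand-in values of N:

```python
import math

# P(at least one torture) after taking a 1/N gamble N times, for growing N.
# N = 6 is the "rolling 6 dice" case from the reply.
for N in (6, 1_000, 1_000_000):
    p_at_least_one = 1.0 - (1.0 - 1.0 / N) ** N
    print(f"N = {N}: P(at least one torture) = {p_at_least_one:.4f}")

# As N grows, the probability converges to 1 - 1/e, well short of certainty.
assert abs((1.0 - (1.0 - 1e-6) ** 1_000_000) - (1.0 - math.exp(-1.0))) < 1e-4
```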
Answer depends on the person’s POV on consciousness.
Tom McCabe wrote:
The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.
Is there something wrong with viewing this from the perspective of the affected individuals (unique or not)? For any individual instance of a person, the probability of directly experiencing the torture is (10^(10^100))/(3^^^3), regardless of how many identical copies of this person exist.
Mike wrote:
I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day …
I’m wondering how you would phrase the daily choice in this case, to get the properties you want. Perhaps like this:
1.) Add a period of (50*365)/3^^^3 days to the time period you will be tortured at the end of your life.
2.) Get a speck.
This isn’t quite the same as the original question, as it gives choices between the two extremes. And in practice, this could get rather annoying, as just having to answer the question would be similarly bad to getting a speck. Leaving that aside, however, I’d still take the (ridiculously short) torture every day.
The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.
I don’t think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.
because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)).
If so, I want my anti-wish back. Evil Genie never said anything about compression. No wonder he has so many people to dust. I’m complaining to GOD Over Djinn.
If they’re not compressed, surely a copy will still experience qualia? Does it matter that it’s identical to another? If the sum experience of many copies is weighted as if there was just one, then I’m officially converting from infinite set agnostic to infinite set atheist.
Bayesianism, Infinite Decisions, and Binding replies to Vann McGee’s “An airtight dutch book”, defending the permissibility of an unbounded utility function.
An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases.
If you allow a discontinuity where the utility of the infinite case is not the same as the limit of the utilities of the finite cases, then you have to allow a corresponding discontinuity in planning where the rational infinite plan is not the limit of the rational finite plans.
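A standard toy illustration of such a discontinuity (an assumed example for concreteness, not one taken from the paper): suppose on each day n you may stop and collect utility n, or keep waiting. Then

```latex
U(\text{stop at day } n) = n, \qquad
\lim_{n \to \infty} U(\text{stop at day } n) = \infty,
\qquad U(\text{never stop}) = 0.
```

Every finite comparison favors waiting one more day, yet the limit of those dominant finite plans, never stopping, is the worst plan of all.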
It is clearly not so easy to have a non-subjective determination of utility.
After some thought I pick the torture. That is because the concept of 3^^^3 people means that no evolution will occur while that many people live. The one advantage to death is that it allows for evolution. It seems likely that we will have evolved into much more interesting life forms long before 3^^^3 of us have passed.
What’s the utility of that?
Recovering Irrationalist:
True: my expected value would be 50 years of torture, but I don’t think that changes my argument much.
Sebastian:
I’m not sure I understand what you’re trying to say. (50*365)/3^^^3 (which is basically the same thing as 1/3^^^3) days of torture wouldn’t be anything at all, because it wouldn’t be noticeable. I don’t think you can divide time to that extent from the point of view of human consciousness.
I don’t think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.
To the extent that you think that and it is reasonable, I suppose that would undermine my argument that the personal choice framework is the wrong way of looking at the question. I would choose the speck every day, and it seems like a clear choice to me, but perhaps that just reflects that I have the bias this thought experiment was meant to bring out.
I’ll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.
Some comments:
While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I’d said 3^^^^3.
If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be “acclimating” to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.
Scope insensitivity—extremely sublinear aggregation by individuals considering bad events happening to many people—can lead to mass defection in a multiplayer prisoner’s dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it’s best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?
I may be influenced by having previously dealt with existential risks and people’s tendency to ignore them.
Sum(1/n^2, 1, 3^^^3) < Sum(1/n^2, 1, inf) = (pi^2)/6
So an algorithm like, “order utilities from least to greatest, then sum with a weight of 1/n^2, where n is their position in the list” could pick dust specks over torture while recommending most people not go sky diving (as their benefit is outweighed by the detriment to those less fortunate).
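A quick numeric sketch of this weighting scheme. The SPECK and TORTURE magnitudes below are assumed purely for illustration; nothing in the thread fixes them:

```python
import math

def weighted_disutility(per_person, n_people):
    # Ordering is trivial here (every victim suffers the same event), so the
    # k-th person in the sorted list gets weight 1/k^2. These weights sum to
    # less than pi^2/6, so the total stays bounded no matter how large
    # n_people gets.
    return per_person * sum(1.0 / k**2 for k in range(1, n_people + 1))

SPECK = 1e-6    # assumed disutility of one dust speck
TORTURE = 1e6   # assumed disutility of 50 years of torture

bound = SPECK * math.pi**2 / 6
total_specks = weighted_disutility(SPECK, 10**6)

# The speck total never reaches the bound, and the bound is far below torture:
assert total_specks < bound < TORTURE
```

With these weights, even 3^^^3 specked people cannot out-weigh the torture, which is exactly why the scheme is vulnerable to the divergence objection raised for 1/n weights below it.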
This would mean that scope insensitivity, beyond a certain point, is a feature of our morality rather than a bias; I am not sure of my opinion of this outcome.
That said, while giving an answer to the one problem that some seem more comfortable with, and to the second that everyone agrees on, I expect there are clear failure modes I haven’t thought of.
Edited to add:
This of course holds for weights of 1/n^a for any a>1; the most convincing defeat of this proposition would be showing that weights of 1/n (or 1/(n log(n))) drop off quickly enough to lead to bad behavior.
On recently encountering the wikipedia page on Utility Monsters and thence to the Mere Addition Paradox, it occurs to me that this seems to neatly defang both.
Edited—rather, completely defangs the Mere Addition Paradox, may or may not completely defang Utility Monsters depending on details but at least reduces their impact.
I agree with this analysis provided there is some reason for linear aggregation.
Why should the utility of the world be the sum of the utilities of its inhabitants? Why not, for instance, the min of the utilities of its inhabitants? I think that’s what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.
U1(world) = min_people(u(person)) instead of U2(world) = sum_people(u(person)),
so U1(torture) = -big, U1(dust) = -tiny,
while U2(torture) = -big, U2(dust) = -outrageously-massive.
Thus, if you use U1, you choose dust because -tiny > -big,
but if you use U2, you choose torture because -big > -outrageously-massive.
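A toy version of the two aggregation rules. The per-person magnitudes and the population size are assumed stand-ins (3^^^3 itself will not fit anywhere):

```python
TORTURE_UTIL = -1e6    # assumed utility for the one person tortured 50 years
SPECK_UTIL   = -1e-6   # assumed utility for one dust speck
N_SPECKS     = 10**15  # stand-in for 3^^^3

def U1_min(per_person_utils):
    # min-aggregation: the world is only as good as its worst-off person
    return min(per_person_utils)

def U2_sum(per_person, n):
    # linear (sum) aggregation, computed in closed form
    return per_person * n

# U1 prefers specks: the worst-off specked person barely suffers.
assert U1_min([SPECK_UTIL]) > U1_min([TORTURE_UTIL])
# U2 prefers torture: the specks sum to something outrageously massive.
assert U2_sum(SPECK_UTIL, N_SPECKS) < TORTURE_UTIL
```

The two rules disagree exactly as described above, which is what makes the choice of aggregation the crux of the dilemma.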
But I see no real reason to prefer one intuition over the other, so my question is this:
Why linear aggregation of utilities?
Min is a really bad metric—it means that, for example, my decision of whether to torture someone or not doesn’t matter as long as someone out there is also getting tortured. So it doesn’t actually lead to an answer of the dust speck problem. And if you limit it to the min of people involved, it leads to things like… “then it’s better to break 1 billion people’s non-dominant arms than one person’s dominant arm” which in my opinion is absurd.
I find it hard to believe that you believe that. Under that metric, for example, “pick a thousand happy people and kill their dogs” is a completely neutral act, along with lots of other extremely strange results.
Oh, good point, maybe a kind of alphabetical ordering could break ties.
So then, we disregard everyone who isn’t affected by the possible action and maximize over the utilities of those who are.
But still, this prefers a million people being punched once to any one person being punched twice, which seems silly—I’m just trying to parse out my intuition for choosing dust specks.
I get that other possible methods being flawed is a mark in favor of linear aggregation, but what positive reasons are there for it?
Or, for a maybe more dramatic instance: “Find the world’s unhappiest person and kill them”. Of course total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, on considering just how wretched the lives of the world’s unhappiest people probably are) -- but min-utilitarianism continues to endorse doing this even if everyone in the world—including the soon-to-be-ex-unhappiest-person—is extremely happy and very much wishes to go on living.
The specific problem which causes that is that most versions of utilitarianism don’t allow the fact that someone desires not to be killed to affect the utility calculation, since after they have been killed, they no longer have utility.
Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it’s completely morally OK to do very bad things to huge numbers of people—in fact, it’s no worse than radically improving huge numbers of lives—as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.
You can attempt to mitigate this property with too-clever objections, like “aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all”. I don’t think that actually works, but didn’t want it to obscure the point, so I picked “kill their dog” as an example, because it’s a clearly bad thing which definitely doesn’t bump anyone to the bottom.
And why should they consider 3^^^^3 differently, if their function asymptotically approaches a limit? Besides, a human utility function would take in the whole picture, and then perhaps consider duplicates, uniqueness (you don’t want your prehistoric tribe to lose the last man who knows how to make a stone axe), and so on, rather than evaluate one by one and then sum.
The false allure of oversimplified morality is in ease of inventing hypothetical examples where it works great.
One could, of course, posit a colder planet. Most of the population would prefer that planet to be warmer, but if the temperature rise exceeds 5 Celsius, the gas hydrates melt, and everyone dies. And they all have to decide on the same day. Or one could posit a planet Linearium populated entirely by people who really love skydiving, who would want to skydive every day, but that would raise the global temperature by 100 Celsius, and they’d rather be alive than skydive every day and boil to death. They opt to skydive on their birthdays at the expense of a 0.3 degree global temperature rise, which each of them finds an acceptable price to pay for getting to skydive on one’s birthday.
I will admit, that was a pretty awesome lesson to learn. Marcello’s reasoning had it click in my head but the kicker that drove the point home was scaling it to 3^^^^3 instead of 3^^^3.
I think I understand why one should derive the conclusion to torture one person, given these premises.
What I don’t understand is the premises. In the article about scope insensitivity you linked to, it was very clear that greater scope made things worse. I don’t understand why it should be wrong to round down the dust speck, or similarly small disutilities, to zero—basically, what Scott Clark said: 3^^^3*0 disutilities = 0 disutility.
Rounding to zero is odd. In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?
It is also in violation of the structure of the thought experiment—a dust speck was chosen as the least bad bad thing that can happen to someone. If you would round it to zero, then you need to choose a slightly worse thing—I can’t imagine your intuitions will be any less shocked by preferring torture to that slightly worse thing.
That was a mistake, since so many people round it to zero.
It seems to have been. Since the criteria for the choice were laid out explicitly, though, I would have hoped that more people would notice that the thought experiment they solved so easily was not actually the one they had been given, and perform the necessary adjustment. This is obviously too optimistic—but perhaps it can serve itself as some kind of lesson about reasoning.
I concede that it is reasonable within the constraints of the thought experiment. However, I think it should be noted that this will never be more than a thought experiment, and that if real-world numbers and real-world problems are used, it becomes less clear-cut, and the intuition against the 50 years of torture is a good starting point in some cases.
It’s odd. If you think about it, Eliezer’s Argument is absolutely correct. But it seems rather unintuitive even though I KNOW it’s right. We humans are a bit silly sometimes. On the other hand, we did manage to figure this out, so it’s not that bad.
“In the absence of other considerations, you have no preference whether or not people get a dust speck in their eye?”
I can regard the moral significance as zero. I don’t have to take the view that morality “is” preferences, of any kind or degree.
Excessive demandingness is a famous problem with utilitarianism: rounding down helps to curtail it.
But still, WHY is torture better? What is even the problem with the dust specks? Some of the people who get dust specks in their eyes will die in accidents caused by the dust particles? Is this why the dust is so bad? But then, have we considered the fact that dust specks may save an equal number of people, who would otherwise die? I really don’t get it and it bothers me a lot.
It’s not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), the cost you have from having it fly into your eye is not zero.
Now something not zero multiplied by a sufficiently large number will necessarily be larger than the cost of one human being’s life in torture.
Now you are getting it completely wrong. You can’t add up dust-speck harm if it is happening to different people. Every individual has the capability to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If the people in charge reasoned that way, we might have Armageddon in no time.
If:
1.) each human death has only finite cost (we certainly act this way in our everyday lives, exchanging human lives for the convenience of driving around in cars, etc.), and
2.) by “our universe” you do not mean only the observable universe, but include the Level I multiverse,
then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number obviously is eventually larger than the fixed cost of nuking New York.
Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don’t see why I should change it. “I don’t like the conclusions!!!” is not a valid objection.
If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we’ll have larger problems than the potential nuking of New York.
Okay, I was trying to learn from this post, but now I see that I have to try to explain things myself in order for this communication to become useful. When it comes to pain, it is hard to explain why one person’s great suffering is worse than many people suffering very, very little if you don’t understand it on your own. So let us change the currency from pain to money.
Let’s say that you and I need to fund a large plantation of algae in order to let the Earth’s population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however… Should we:
a) Take one dollar from every person around the world with a minimum wage who can still afford housing, food, etc., even if we take that one dollar?
or should we
b) Take all the money (instantly) from Denmark and watch it collapse into bankruptcy?
To me it is obvious that we don’t want Denmark to go bankrupt just because it may annoy some people to sacrifice 1 dollar.
The trouble is that there is a continuous sequence from
Take $1 from everyone
Take $1.01 from almost everyone
Take $1.02 from almost almost everyone
...
Take a lot of money from very few people (Denmark)
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, taking $20 each from 1⁄20 the population of the world is good, but taking $20.01 each from slightly less than 1⁄10 the population of the world is bad. Can you say that?
Typo here?
I think my last response starting with YES got lost somehow, so I will clarify here. I don’t follow the sequence because I don’t know where the critical limit is. Why? Because the critical limit depends on other factors which I can’t foresee. Read up on basic global economics. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out; but if I take a lot of money from one person, I make him poor. That is how economics works: you can recover from small losses easily, while some are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.
You may not know exactly where the limit is, but the point isn’t that the limit is at some exact number, the point is that there is a limit. There’s some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?
Yes I do.
So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don’t know it).
But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.
No, I don’t have to make the same conclusion about $20.00 versus $20.01. I left a safety margin when I said 1 dollar, since I don’t want to follow the sequence but am very, very sure that 1 dollar is a safe number. I don’t know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.
Yes, you do. You just admitted it, although the number might not be 20. And whether you admit it or not it logically follows from what you said up above.
Maybe I didn´t understand you the first time.
Your belief about $1 versus bankruptcy logically implies a similar belief about $20.00 versus $20.01 (or whatever the actual numbers are). You can’t just answer that that “might” be the case—if your original belief is as described, that is the case. You have to be willing to defend the logical consequence of what you said, not just defend the exact words that you said.
What do you mean by “whatever the actual numbers are”? Numbers for what? For the amount it takes to ruin someone? As long as the individual donations don’t ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?
I just wrote 20 because I have to write something, but there is a number. This number has a value, even if you don’t know it. Pretend I put the real number there instead of 20.
Yes, but still, what number? If it is, as I already suggested, the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.
I don’t think you understand.
Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don’t know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.
Yes, in my last comment I agreed to it. There is such a number. I don’t think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can’t know exactly what that amount is, since global and local economies aren’t that stable. Tapping out.
So you’re saying there exists such a number, such that taking that amount of money from someone wouldn’t ruin them, but taking that amount plus a tiny bit more (say, 1 cent) would?
YES, because that is how economics works! You can’t take a lot of money from ONE person without him becoming poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.
If you think that 100C water is hot and 0C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.
No, because temperature is (very close to) a continuum, whereas good/bad is a binary. To see this more clearly, you can replace the question, “Is this action good or bad?” to “Would an omniscient, moral person choose to take this action?”, and you can instantly see the answer can only be “yes” (good) or “no” (bad).
(Of course, it’s not always clear which choice the answer is—hence why so many argue over it—but the answer has to be, in principle, either “yes” or “no”.)
First, I’m not talking about temperature, but about categories “hot” and “cold”.
Second, why in the world would good/bad be binary?
I have no idea—I don’t know what an omniscient person (aka God) will do, and in any case the answer is likely to be “depends on which morality we are talking about”.
Oh, and would an omniscient being call that water hot or cold?
You’ll need to define your terms for that, then. (And for the record, I don’t use the words “hot” and “cold” exclusively; I also use terms like “warm” or “cool” or “this might be a great temperature for a swimming pool, but it’s horrible for tea”.)
Also, if you weren’t talking about temperature, why bother mentioning degrees Celsius when talking about “hotness” and “coldness”? Clearly temperature has something to do with it, or else you wouldn’t have mentioned it, right?
Because you can always replace a question of goodness with the question “Would an omniscient, moral person choose to take this action?”.
Just because you have no idea what the answer could be doesn’t mean the true answer can fall outside the possible space of answers. For instance, you can’t answer the question of “Would an omniscient moral reasoner choose to take this action?” with something like “fish”, because that falls outside of the answer space. In fact, there are only two possible answers: “yes” or “no”. It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either “yes” or “no”, and that holds true even if you don’t know what the answer is.
There is only one “morality” as far as this discussion is concerned. There might be other “moralities” held by aliens or whatever, but the human CEV is just that: the human CEV. I don’t care about what the Babyeaters think is “moral”, or the Pebblesorters, or any other alien species you care to substitute—I am human, and so are the other participants in this discussion. The answer to the question “which morality are we talking about?” is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I’d rather talk game theory with Clippy than morality—it’s far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.
I don’t think so.
You’re making a circular argument—good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.
Really? Either I’m not a participant in this discussion or you’re wrong. See: a binary outcome :-D
I have no idea what the human CEV is, or even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
Name a third alternative that is actually an answer, as opposed to some sort of evasion (“it depends”), and I’ll concede the point.
Also, I’m aware that this isn’t your main point, but… how is the argument circular? I’m not saying something like, “It’s binary, therefore there are two possible states, therefore it’s binary”; I’m just saying “There are two possible states, therefore it’s binary.”
Are you human? (y/n)
Which part do you object to? The “coherent” part, the “extrapolated” part, or the “volition” part?
“Doesn’t matter”.
First of all you’re ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we’re in trouble now… X-D
Second, you’re assuming atomicity of actions and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part while also having great doubts about the “extrapolated” part as well.
(Side note: this conversation is taking a rather strange turn, but whatever.)
If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)… well, it’s increasing its own utility by scratching its butt, isn’t it? If it increases its own utility by doing so and doesn’t decrease net utility elsewhere, then that’s a net increase in utility. Scratch away, I say.
Sure. I agree I did just handwave a lot of stuff with respect to what an “action” is… but would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally? (Moral by human standards, of course, not Pebblesorter standards.)
Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.
Again, I agree… but then, knowledge of the Banach-Tarski paradox isn’t of much use to most people.
Fair enough. I don’t have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
The assumption that morality boils down to utility is a rather huge assumption :-)
Conditional on having a good definition of “action” and on having a good definition of “morally”.
I don’t think so, at least not “simply”. An omniscient being has no risk and no risk aversion, for example.
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
It’s not an assumption; it’s a normative statement I choose to endorse. If you have some other system, feel free to endorse that… but then we’ll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro’s distinction between good and bad.
Agree.
Well, it could have risk aversion. It’s just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.
I disagree. Take the following two statements:
Morality, properly formalized, would be useful for practical purposes.
Morality is not currently properly formalized.
There is no contradiction in these two statements.
But they have a consequence: Morality currently is not useful for practical purposes.
That’s… an interesting position. Are you willing to live with it? X-)
You can, of course, define morality in this particular way, but why would you do that?
By that definition, almost all actions are bad.
Also, why the heck do you think there exist words for “better” and “worse”?
True. I’m not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don’t select actions randomly, so that doesn’t seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don’t give you a car doesn’t prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?
Those are relative terms, meant to compare one action to another. That doesn’t mean you can’t classify an action as “good” or “bad”; for instance, if I decided to randomly select and kill 10 people today, that would be an unambiguously bad action, even if it would theoretically be “worse” if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking “Is this number bigger than that number?” and “Is this number positive or negative?”.
In this case I do not disagree with you. The number of people on earth is simply not large enough.
But if you asked me whether to take money from 3^^^3 people compared to throwing Denmark into bankruptcy, I would choose the latter.
Math should override intuition. So unless you give me a model that you can convince me of that is more reasonable than adding up costs/utilities, I don’t think you will change my mind.
Now I see what is fundamentally wrong with the article and your reasoning from MY perspective. You don’t seem to understand the difference between a permanent sacrifice and a temporary one.
If we substitute index fingers for the dust specks, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don’t need 3^^^3 for this one) people lose their index fingers. Because that is a permanent sacrifice. At least for now, we can’t have fingers grow back just like that. To get dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something you will never fully heal from; it will ruin a person’s life and cause permanent damage.
That’s ridiculous. So mild pains don’t count if they’re done to many different people?
Let’s give a more obvious example. It’s better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.
Scaling down, we can say that it’s better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.
Keep repeating this in your head (see how consistent it feels, how it makes sense).
Now just extrapolate to the instance that it’s better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn’t good enough, because (number of people on Earth) × (pain of a hair rip) < (number of people in New York) × (pain of being nuked). The math doesn’t add up in your straw-man example, unlike with the actual example given.
As a side note, you are also appealing to consequences.
I think Okeymaker was actually referring to all the people in the universe. While the number of “people” in the universe (defining a “person” as a conscious mind) isn’t a known number, let’s do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn’t nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker’s modus tollens and reply that I would prefer to nuke New York.)
Now, do you have any actual argument as to why the ‘badness’ function computed over a box containing two persons with a dust speck, is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people) ?
I don’t think you do. This is why this stuff strikes me as pseudomath. You don’t even state your premises let alone justify them.
You’re right, I don’t. And I do not really need it in this case.
What I need is a cost function C(e,n), where e is some event and n is the number of people subjected to it (everyone gets their own copy), such that for a fixed ε > 0: C(e,n+m) > C(e,n) + ε for some m. I guess we can limit e to “torture for 50 years” and “dust specks” so that this makes sense at all.
The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don’t think there should ever be a point where you can go “Meh, not much of a big deal, no matter how many more people suffer.”
If however the number of possible distinct people should be finite—even after taking into account level II and level III multiverses—due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.
Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head, sufficiently distinct as to be computing something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).
I don’t think that e.g. I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it was running in one instance; it doesn’t feel any stronger because there’s more ‘copies’ of it running in perfect unison, it can’t even tell the difference. It won’t affect the subjective experience if the CPUs running the same computation are slightly physically different).
edit: also again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds, but it is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.
Seriously, you people (LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitively that with your epsilon it’s going to keep growing without a limit, but that’s simply not true.
I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.
No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n,m. So I find numbers m_1,m_2,… such that C(dustspeck,m_j) > jε.
Besides which, even if I had somehow messed up, you’re not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
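The m_j construction above can be checked numerically. As a purely illustrative stand-in (my assumption, not anything proposed in the thread), take C(n) = log(1+n), which satisfies the fixed-ε property; iterating the property drives C(n) past any bound:

```python
import math

def C(n):
    # Illustrative stand-in for C(dustspeck, n): any function satisfying
    # "for the fixed eps, every n has some m with C(n+m) > C(n) + eps" works,
    # and the loop below shows why such a C must be unbounded.
    return math.log1p(n)

eps = 0.5
n = 1
for _ in range(10):
    m = 1
    while C(n + m) <= C(n) + eps:  # the fixed-eps property guarantees termination
        m *= 2
    n += m
# Each pass raised C(n) by more than eps, so after 10 passes C(n) > C(1) + 10*eps.
# A bounded function like 1 - 1/(n+1) therefore cannot satisfy the property for all n.
print(n, round(C(n), 2))  # 2047 7.62
```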
Well, in my view, some details of implementation of a computation are totally indiscernible ‘from the inside’ and thus make no difference to the subjective experiences, qualia, and the like.
I definitely don’t care if there’s 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where each copy thinks and perceives everything exactly the same over its lifetime. I’m not sure how counting copies as distinct would cope with an infinity of copies anyway: then you have torture of infinitely many persons vs. dust specks in infinity times 3^^^3 persons, and what then?
Though it would be quite hilarious to see if someone here picks up the idea and starts arguing that because they’re ‘important’, there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.
Okeymaker, I think the argument is this:
Torturing one person for 50 years is better than torturing 10 persons for 40 years.
Torturing 10 persons for 40 years is better than torturing 1000 persons for 10 years.
Torturing 1000 persons for 10 years is better than torturing 1000000 persons for 1 year.
Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.
Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.
Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.
Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.
Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.
Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.
Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.
Torturing for 1 millisecond is exactly what a dust speck does.
And if you disagree with the numbers, you can add a few millions. There is still plenty of space between 10^100 and 3^^^3.
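Under deliberately naive linear bookkeeping (an illustration only; nothing in the chain requires linearity), the total person-time grows at every step of the chain above even as each individual’s duration shrinks:

```python
# Naive linear "person-time" accounting for each option in the chain.
YEAR = 365.25 * 24 * 3600  # seconds per year
chain = [
    (1, 50 * YEAR), (10, 40 * YEAR), (10**3, 10 * YEAR), (10**6, YEAR),
    (10**9, YEAR / 12), (10**12, 7 * 86400), (10**15, 86400),
    (10**18, 3600), (10**21, 60), (10**30, 1), (10**100, 1e-3),
]
totals = [people * seconds / YEAR for people, seconds in chain]
for (people, _), t in zip(chain, totals):
    print(f"{people:.0e} people: {t:.3g} person-years in total")
# Total person-time rises monotonically, from 50 person-years to ~3e89;
# each "better" judgment trades a shorter individual pain for far more of it.
```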
Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn’t make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.
If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or the other. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.
Okay, here’s a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The “planet” part is obviously not possible, and the “distinct” part may or may not be possible, but for the purposes of a discussion about morality, it’s fine to assume these.)
Now let’s suppose that you are given a choice: (a) everyone on the planet can get a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who “wins” (or “loses”, more accurately) will be tortured for 50 years. Which would you choose?
If you are against torture (as you seem to be, from your comment), you will presumably choose (a). But now let’s suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a).
However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only offers a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).
Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a).
Q.E.D.
And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can’t stop time, so there’s no reason to prefer (c) to (b).
Ah, sorry; I wasn’t clear. What I meant was that blinking increases your probability of being tortured beyond the normal “baseline” probability of torture. Obviously, even if you don’t blink, there’s still a probability of you being tortured. My claim is that blinking affects the probability of being tortured so that the probability is higher than it would be if you hadn’t blinked (since you can’t see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it would increase by more than 1/3^^^3. So basically what I’m saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.
Let me see if I get this straight:
The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery.
It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone’s eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you’d prefer to be in the present instead of the future.
The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.
This seems pretty unlikely to be true.
I think you underestimate the magnitude of 3^^^3 (and thereby overestimate the magnitude of 1/3^^^3).
Both numbers seem basically arbitrarily small (probability 0).
Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.
Well, I mean, obviously a single person can’t be kidnapped more than once every 50 years (assuming that’s how long each torture session lasts), and certainly not several times a day, since he/she wouldn’t have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I’d say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.
I worked it out back of the envelope, and the probability of being kidnapped when you blink is only 1/5^^^5.
Well, now I know you’re underestimating how big 3^^^3 is (and 5^^^5, too). But let’s say somehow you’re right, and the probability really is 1/5^^^5. All I have to do is modify the thought experiment so that the planet has 5^^^5 people instead of 3^^^3. There, problem solved.
So, new question: would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 5^^^5 people get dust specks in their eyes?
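For a concrete sense of how fast the notation grows, up-arrow evaluation can be sketched recursively; the helper below is hypothetical and only usable for tiny arguments, since anything like 3^^^3 or 5^^^5 is far beyond computation:

```python
def up(a, k, b):
    """Knuth's a ↑^k b: up(3, 1, 3) is 3^3, up(3, 2, 3) is 3^^3, and so on."""
    if k == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up(a, k - 1, result)  # a ↑^k b = a ↑^(k-1) (a ↑^k (b-1))
    return result

print(up(3, 1, 3))  # 27
print(up(3, 2, 3))  # 7625597484987
# up(3, 2, 4) = 3^7625597484987 already has ~3.6 trillion digits, and
# up(3, 3, 3), i.e. 3^^^3, is a tower of 3s about 7.6 trillion layers tall.
```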
Agree. Having lived in chronic pain supposedly worse than untrained childbirth, I’d say that even an hour has a seriously different capacity for suffering than a day, and a day different from a week. For me it breaks down somewhere between the 10^15 people for 1 day and the 10^21 for one minute. You can’t really feel an amount of pain in a minute that is comparable to a day, even allowing for orders of magnitude; it’s just qualitatively different. Interested to hear pushback on this.
We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.
I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn’t mean we can’t find exponential factors that dominate it at every point at least along the “less than 50 years” range.
Obviously. Just important to remember that extremity of suffering is something we frequently fail to think well about.
Absolutely. We’re bad at anything that we can’t easily imagine. Probably, for many people, intuition for “torture vs. dust specks” imagines a guy with a broken arm on one side, and a hundred people saying ‘ow’ on the other.
The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn’t take the number of people saved by an intervention into account; we just picture the typical effect on a single person.
What, I wonder, are the consequence of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don’t know how bad being in prison is, but it probably becomes much worse than I imagine if you’re there for 50 years, and we don’t think about that at all when arguing (or voting) about prison sentences.
My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter’s Law: however bad you imagine it to be, it’s worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I’ve yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.
Obligatory xkcd.
That would have been a better comic without the commentary in the last panel.
But the alt text is great X-)
My feeling is that many situations of suffering, like being caught doing something horrendous, might or might not be subject to psychological adjustment, and so might actually be not as bad as we thought. But chronic intense pain is literally unadjustable to some degree: you can adjust to being in intense suffering, but that doesn’t make the intense suffering go away. That’s why I think it’s a special class of states of being, one that calls for action. What do people think?
That strikes me as a deliberate set up for a continuum fallacy.
Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?
I’d much prefer to have a [large number of exact copies of me] experience 1 second of headache than for one me to suffer it for a whole day. Because those copies don’t have any mechanism which could compound their suffering; they aren’t even different subjectivities. I don’t see any reason why a hypothetical mind upload of me running on multiple redundant hardware should be a utility monster, if it can’t even tell subjectively how redundant its hardware is.
Some anaesthetics do something similar, preventing any new long term memories, people have no problem with taking those for surgery. Something’s still experiencing pain but it’s not compounding into anything really bad (unless the drugs fail to work, or unless some form of long term memory still works). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.
It’s not a continuum fallacy because I would accept “There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T” as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.
I’m not sure what you mean by this. I don’t believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that’s ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.
Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?
This is where the argument for choosing torture falls apart for me, really. I don’t think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the amount of disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years.
I think he’s questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he’s referring to.
Personally, I think the problem stems from dust specks being such a minor inconvenience that it’s basically below the noise threshold. I’d almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don’t cause crashes or anything).
There’s the question of linearity- but if you use big enough numbers you can brute force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly’s statement:
“There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T”
We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that
“For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds.”
Sure, the value of A may be larger than 10^100… But then, 3^^^3 is already vastly larger than 10^100. And if it weren’t big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post…
Well, you basically have to concede that “torture” wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.
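The brute-forcing point can be made concrete with a toy calculation; both badness constants below are invented assumptions, not values from the post:

```python
import math

SPECK = 1e-12    # assumed badness of one dust speck
TORTURE = 1e9    # assumed badness of fifty years of torture

def aggregate(n):
    # A severely sublinear aggregator: total speck-badness grows only like log(n).
    return SPECK * math.log1p(n)

# aggregate(n) exceeds TORTURE once n > exp(TORTURE / SPECK), a number with
# roughly 4.3e20 digits. Vast, but utterly dwarfed by 3^^^3: a tower of 3s only
# five layers tall already has around 10^(3.6e12) digits. Only a *bounded*
# aggregator (an asymptote) escapes being brute-forced this way.
threshold_digits = (TORTURE / SPECK) / math.log(10)
print(f"specks win once n has roughly {threshold_digits:.3g} digits")
```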
The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky’s original post: All forms of suffering and inconvenience are represented by some real number quantity, with commensurate units to all other forms of suffering and inconvenience.
In other words, the “torture one person rather than allow 3^^^3 dust specks” side wins, quite predictably, if and only if it is true that the ‘pain’ component of the utility function is measured in one and only one dimension.
So the question is, basically, do you measure your utility function in terms of a single input variable?
If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity… or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.
If you don’t, it raises a large complex of additional questions- but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.
It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.
One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose “utilons,” and one for… call them “red flags.” As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you’ve accumulated.
The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)… but the overall weighted average of all human moral reasoning suggests that people who think they’ve done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.
Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.
The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.
Take as a thought-experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to sudden population decrease. The equilibrium level is assumed to be stable in and of itself.
Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.
The world population circa 1000 was about 300 million (roughly,) so we estimate that this process would kill 600 million people.
Now consider as an alternative, said aliens simply killing everyone, all at once. 300 million dead.
Which outcome is worse?
If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...
Well, the “linear harm” theory smacks into a wall. Because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site. The first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.
We are forced to conclude that there is a “total extinction” term in our calculation of harm, one that rises very rapidly in an ‘inflationary’ way. And it would do this as the destruction wrought upon humanity reaches and passes a level beyond which the species could not recover- the aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population less than a complete breeding population, but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.
Now, again, in itself this does not strictly invalidate the Torture/Specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some “big enough” number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.
But I can’t recall a similar argument for nonlinear harm measurement being presented in any of the comments I’ve sampled, so I thought it was interesting and wanted to mention it.
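A toy version of such a harm function, with every number invented purely for illustration:

```python
# Toy harm model with an extinction term (all constants are assumptions).
POP = 300_000_000          # approximate year-1000 world population
VIABLE_MIN = 5_000         # assumed minimum breeding population
EXTINCTION_TERM = 1e12     # assumed loss of all potential future value

def harm(deaths):
    h = float(deaths)              # linear term: each death counts once
    if POP - deaths < VIABLE_MIN:
        h += EXTINCTION_TERM       # the species cannot recover
    return h

four_waves = 4 * harm(POP // 2)    # four recoverable 50% die-offs: 600M deaths
one_blow = harm(POP)               # everyone killed at once: 300M deaths
print(four_waves, one_blow)        # 600000000.0 1000300000000.0
# Half as many deaths, yet far more harm once the extinction term kicks in.
```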
I mentioned duplication. That in 3^^^3 people, most have to be exact duplicates of one another birth to death.
In your extinction example, once you have substantially more than the breeding population, extra people duplicate some aspects of your population (ability to breed) which causes you to find it less bad.
Not every non-linear relationship can be thwacked with bigger and bigger numbers...
For one thing N=1 T=1 trivially satisfies your condition…
I mean, suppose that you got yourself a function that takes in a description of what’s going on in a region of spacetime, and it spits out a real number of how bad it is.
Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people; for example, it could be counting distinct subjective experiences in there (otherwise a mind upload on very redundant hardware is a utility monster, despite having an identical subjective experience to the same upload running once; that’s much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).
One thing that function can’t do, is to have a general property that f(a union b)=f(a)+f(b) , because then we just subdivide our space into individual atoms none of which are feeling anything.
Obviously I only meant to consider values of T and N that actually occur in the argument we were both talking about.
Well, I’m not sure what the point is then, or what you’re trying to infer from it.
Yes, if this is the case (would be nice if Eliezer confirmed it) I can see where the logic halts from my perspective :)
Explanatory example if someone cares:
I disagree. From my moral standpoint AND from my utility function, where I am a bystander who perceives all humans as a cooperating system and wants to minimize the damage to it, I think that it is better for 10^30 persons to put up with 1 second of intense pain than for a single one to have to survive a whole minute. It is much, much easier to recover from one second of pain than from being tortured for a minute.
And a dust speck is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.
Okay, so let’s zoom in here. What is preferable?
Torturing 1 person for 60 seconds
Torturing 100 persons for 59 seconds
Torturing 10000 persons for 58 seconds
etc.
Kind of a paradox of the heap. How many seconds of torture are still torture?
And 10^30 is really a lot of people. That’s what Eliezer meant with “scope insensitivity”. And all of them would be really grateful if you spared them their second of pain. Could be worth a minute of pain?
That’s fighting the hypothetical. Assume that the speck is such that the harm caused by the speck slightly outweighs the benefits.
Or the benefits could slightly outweigh the harm.
You have to treat this option as a net win of 0 then, because you have no more info to go on, so the probabilities are 50/50. Option A: torture. Net win is negative. Option B: dust specks. Net win is zero. Make your choice.
In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...)
I thought the original point was to focus just on the inconvenience of the dust, rather than simply propositioning that out of 3^^^3 people who were dustspecked, one person would’ve gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma, it’s merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.
exactly! No knock-on effects. Perhaps you meant to comment on the grandparent (great-grandparent? do I measure from this post or your post?) instead?
yeah, clicked wrong button.
If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well… you’re weird.
Consider the flip side of the argument: would you rather get a dust speck in your eye or have a 1 in 3^^^3 chance of being tortured for 50 years?
We take much greater risks without a moment’s thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause incredible pain to you for the rest of your life may be very small; but it’s probably not smaller than 1 in 10^100, let alone than 1 in 3^^^3.
If anything is aggregating nonlinearly, it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks, because each dust speck occurs to a different person.
I find this reasoning problematic, because in the dust specks there is effectively nothing to acclimate to… the amount of inconvenience to the individual will always be smaller in the speck scenario (excluding secondary effects, such as the individual being distracted and ending up in a car crash, of course).
Which exact person in the chain should first refuse?
Now, this is considerably better reasoning—however, there was no clue to this being a decision that would be selected over and over by countless of people. Had it been worded “you among many have to make the following choice...”, I could agree with you. But the current wording implied that it was once-a-universe sort of choice.
Well, as long as we’ve gone to all the trouble to collect 85 comments on this topic, this seems like a great chance for a disagreement case study. It would be interesting to collect stats on who takes what side, and to relate that to their various kinds of relevant expertise. For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I’m open to new data on the balance of opinion and the balance of relevant expertise.
The diagnosis of scope insensitivity presupposes that people are trying to perform a utilitarian calculation and failing. But there is an ordinary sense in which a sufficiently small harm is no wrong. A harm must reach a certain threshold before the victim is willing to bear the cost of seeking redress. Harms that fall below the threshold are shrugged off. And an unenforced law is no law. This holds even as the victims multiply. A class action lawsuit is possible, summing the minuscule harms, but our moral intuitions are probably not based on those.
Now, this is considerably better reasoning—however, there was no clue to this being a decision that would be selected over and over by countless of people. Had it been worded “you among many have to make the following choice...”, I could agree with you. But the current wording implied that it was once-a-universe sort of choice.
The choice doesn’t have to be repeated to present you with the dilemma. Since all elements of the problem are finite—not countless, finite—if you refuse all actions in the chain, you should also refuse the start of the chain even when no future repetitions are presented as options. This kind of reasoning doesn’t work for infinite cases, but it works for finite ones.
One potential counter to the “global heating” example is that at some point, people begin to die who would not otherwise have done so, and that should be the point of refusal. But for the case of dust specks—and we can imagine getting more than one dust speck in your eye per day—it doesn’t seem like there should be any sharp borderline.
We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves.
There is no “special penny” where this logic goes from good to bad. It’s wrong when repeated because it’s also wrong in the individual case. You just have to come to terms with scope sensitivity.
“Swoops in, takes one child, and leaves”… wow. I’d like to say I can’t imagine being so insensitive as to think this would be a good thing to do (even if not worth the money), but I actually can.
And why would you use that horrible example, when the argument would work just fine if you substituted “A permanent presence devoted to giving one person three square meals a day.”
Actually, that was a poor example because taxing one penny has side effects. I would rather save one life and everyone in the world poked with a stick with no other side effects, because I put a substantial probability on lifespans being longer than many might anticipate. So even repeating this six billion times to save everyone’s life, at the price of 120 years of being repeatedly poked with a stick, would still be a good bargain.
Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn’t change the normative shape of the problem (IMHO).
Robin: dare I suggest that one area of relevant expertise is normative philosophy for-@#%(^^$-sake?!
It’s just painful—really, really, painful—to see dozens of comments filled with blinkered nonsense like “the contradiction between intuition and philosophical conclusion” when the alleged “philosophical conclusion” hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts! My model for the torture case is swiftly becoming fifty years of reading the comments to this post.
The “obviousness” of the dust mote answer to people like Robin, Eliezer, and many commenters depends on the following three claims:
a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,
b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)
c) it is a moral fact that we ought to select the world with more pleasure and less pain.
But each of those three claims is hotly, hotly contested. And almost nobody who has ever thought about the questions seriously believes all three. I expect there are a few (has anyone posed the three beliefs in that form to Peter Singer?), but, man, if you’re a Bayesian and you update your beliefs about those three claims based on the general opinions of people with expertise in the relevant area, well, you ain’t accepting all three. No way, no how.
As someone who has studied moral philosophy for many years, I would like to point out that I agree with Robin and Eliezer, and that I know many professional moral philosophers who would agree with them, too, if presented with this moral dilemma. It is also worth noting that, many comments above, Gaverick Matheny provided a link to a paper by a professional moral philosopher, published in one of the two most prestigious moral philosophy journals in the English-speaking world, which defends essentially the same conclusion. And as the argument presented in that paper makes clear, the conclusion that one should torture need not be motivated by a theoretical commitment to some substantive thesis about the nature of pain or aggregation (as Gowder claims), but follows instead by transitivity from a series of comparisons that everyone—including those who deny that conclusion—finds intuitively plausible.
If anyone still has a hard time believing that this is not an unorthodox position among Philosophers, I’d like to recommend Shelly Kagan’s excellent The Limits of Morality, which discusses ‘radical consequentialism’ and defends a similar conclusion.
dozens of comments filled with blinkered nonsense like “the contradiction between intuition and philosophical conclusion” when the alleged “philosophical conclusion” hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts!
You’ve quoted one of the few comments which your criticism does not apply to. I carry no water for utilitarian philosophy and was here highlighting its failure to capture moral intuition.
all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)
Is a consistent and complete preference ordering without this property possible?
“An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases.”
From Peter’s proof, it seems like you should be able to prove that an arbitrarily large (but finite) utility function will be dominated by events with arbitrarily large (but finite) improbabilities.
“Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.”
And so we come to the billion-dollar question: Will scope insensitivity of this type be eliminated under CEV? So far as I can tell, a utility function is arbitrary; there is no truth which destroys it, and so the FAI will be unable to change around our renormalized utility functions by correcting for factual inaccuracy.
“Which exact person in the chain should first refuse?”
The point at which the negative utility of people catching on fire exceeds the positive utility of skydiving. If the temperature is 20 C, nobody will notice an increase of 0.00000001 C. If the temperature is 70 C, the aggregate negative utility could start to outweigh the positive utility. This is not a new idea; see http://en.wikipedia.org/wiki/Tragedy_of_the_commons.
“We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves.”
According to http://www.wider.unu.edu/research/2006-2007/2006-2007-1/wider-wdhw-launch-5-12-2006/wider-wdhw-press-release-5-12-2006.pdf, 10% of the world’s adults, around 400 million people, own 85% of the world’s wealth. Taxing them each one penny would give a total of $4 million, more than enough to mount this kind of a rescue operation. While incredibly wasteful, this would actually be preferable to some of the stuff we spend our money on; my local school district just voted to spend $9 million (current US dollars) to build a swimming pool. I don’t even want to know how much we spend on $200 pants; probably more than $9 million in my town alone.
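As a sanity check on that back-of-the-envelope figure (a minimal sketch; the 400 million adults are the comment’s own number from the linked report):

```python
# One penny from each of the wealthiest ~10% of the world's adults.
adults = 400_000_000
total_pennies = adults          # one penny per person
total_dollars = total_pennies // 100
print(total_dollars)            # 4000000, i.e. $4 million
```

Using integer pennies avoids any floating-point rounding in the conversion to dollars.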
Eliezer: “It’s wrong when repeated because it’s also wrong in the individual case. You just have to come to terms with scope sensitivity.”
But determining whether or not a decision is right or wrong in the individual case requires that you be able to place a value on each outcome. We determine this value in part by using our knowledge of how frequently the outcomes occur and how much time/effort/money it takes to prevent or assuage them. Thus knowing the frequency with which we can expect an event to occur is integral to assigning it a value in the first place. The reason it would be wrong in the individual case to tax everyone in the First World a penny to save one African child is that there are so many starving children that doing the same for each one would become very expensive. It would not be obviously wrong, however, if there was only one child in the world that needed rescuing. The value placed on a life would increase because we could afford to value it more highly if people didn’t die so frequently.
People in a village might be willing to help pay the costs when someone’s house burns down. If 20 houses in the village burned down, the people might still contribute, but it is unlikely they will contribute 20 times as much. If house-burning became a rampant problem, people might stop contributing entirely, because it would seem futile for them to do so. Is this necessarily scope insensitivity? Or is it reasonable to determine values based on frequencies we can realistically expect?
Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn’t change the normative shape of the problem (IMHO).
Hmm, I see your point. I can’t help feeling that there are cases where repetition does matter, though. For instance, assuming for a moment that radical life-extension and the Singularity and all that won’t happen, and assuming that we consider humanity’s continued existence to be a valuable thing—how about the choice of having/not having children? Not having children causes a very small harm to everybody else in the same generation (they’ll have fewer people supporting them when old). Doesn’t your reasoning imply that every couple should be forced into having children even if they weren’t of the type who’d want that (the “torture” option), to avoid causing a small harm to all the others? This even though society could continue to function without major trouble even if a fraction of the population did choose to remain childfree, for as long as sufficiently many others had enough children?
Constant, my reference to your quote wasn’t aimed at you or your opinions, but rather at the sort of view which declares that the silly calculation is some kind of accepted or coherent moral theory. Sorry if it came off the other way.
Nick, good question. Who says that we have consistent and complete preference orderings? Certainly we don’t have them across people (consider social choice theory). Even to say that we have them within individual people is contestable. There’s a really interesting literature in philosophy, for example, on the incommensurability of goods. (The best introduction of which I’m aware consists in the essays in Ruth Chang, ed. 1997. Incommensurability, Incomparability, and Practical Reason Cambridge: Harvard University Press.)
That being said, it might be possible to have complete and consistent preference orderings with qualitative differences between kinds of pain, such that any amount of torture is worse than any amount of dust-speck-in-eye. And there are even utilitarian theories that incorporate that sort of difference. (See chapter 2 of John Stuart Mill’s Utilitarianism, where he argues that intellectual pleasures are qualitatively superior to more base kinds. Many indeed interpret that chapter to suggest that any amount of an intellectual pleasure outweighs any amount of drinking, sex, chocolate, etc.) Which just goes to show that even utilitarians might not find the torture choice “obvious,” if they deny b) like Mill.
Who says that we have consistent and complete preference orderings?
Who says you need them? The question wasn’t to quantify an exact balance. You just need to be sure enough to make the decision that one side outweighs the other for the numbers involved.
By my values, all else equal, for all x between 1 millisecond and fifty years, 10^1000 people being tortured for time x is worse than one person being tortured for time x*2. Would you disagree?
So, 10^1000 people tortured for (fifty years)/2 is worse than one person tortured for fifty years.
Then, 10^2000 people tortured for (fifty years)/4 is worse than one person tortured for fifty years.
You see where I’m going with this. Do something similar with the dust specs and unless I prefer countless people getting countless years of intense dust harassment to one person getting a millisecond of pain, I vote torture.
I recognize this is my opinion and relies on your c) it is a moral fact that we ought to select the world with more pleasure and less pain not being hopelessly outweighed by another criteria. I think this is definitely a worthwhile thing to debate and that your input would be extremely valuable.
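The halving chain above terminates after finitely many steps, which is what makes it an argument rather than a regress. A small sketch (the 10^1000 multiplier per step and the millisecond floor are taken from the comment; everything else is illustrative):

```python
from fractions import Fraction

# Each step halves the torture duration and multiplies the population
# by 10**1000; by the stated premise, each step is "worse" than the last.
FIFTY_YEARS_MS = 50 * 365 * 24 * 3600 * 1000  # fifty years in milliseconds

steps = 0
duration_ms = Fraction(FIFTY_YEARS_MS)
while duration_ms >= 1:          # stop once below one millisecond
    duration_ms /= 2
    steps += 1

print(steps)          # 41 halvings suffice
print(1000 * steps)   # final population exponent: 10**41000 -- vast, yet << 3^^^3
```

So only a few dozen applications of the premise bridge fifty years down to a millisecond, with a final population that is still unimaginably smaller than 3^^^3.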
Since Robin is interested in data… I chose SPECKS, and was shocked by the people who chose TORTURE on grounds of aggregated utility. I had not considered the possibility that a speck in the eye might cause a car crash (etc) for some of those 3^^^3 people, and it is the only reason I see for revising my original choice. I have no accredited expertise in anything relevant, but I know what decision theory is.
I see a widespread assumption that everything has a finite utility, and so no matter how much worse X is than Y, there must be a situation in which it is better to have one person experiencing X, rather than a large number of people experiencing Y. And it looks to me as if this assumption derives from nothing more than a particular formalism. In fact, it is extremely easy to have a utility function in which X unconditionally trumps Y, while still being quantitatively commensurable with some other option X’. You could do it with delta functions, for example. You would use ordinary scalars to represent the least important things to have preferences about, scalar multiples of a delta function to represent the utilities of things which are unconditionally more important than those, scalar multiples of a delta function squared to represent things that are even more important, and so on.
The qualitative distinction I would appeal to here could be dubbed pain versus inconvenience. A speck of dust in your eye is not pain. Torture, especially fifty years of it, is.
Eliezer, a problem seems to be that the speck does not serve the function you want it to in this example, at least not for all readers. In this case, many people see a special penny because there is some threshold value below which the least bad bad thing is not really bad. The speck is intended to be an example of the least bad bad thing, but we give it a badness rating of one minus .9-repeating—which is to say, exactly zero.
(This seems to happen to a lot of arguments. “Take x, which is y.” Well, no, x is not quite y, so the argument breaks down and the discussion follows some tangent. The Distributed Republic had a good post on this, but I cannot find it.)
We have a special penny because there is some amount of eye dust that becomes noticeable and could genuinely qualify as the least bad bad thing. If everyone on Earth gets this decision at once, and everyone suddenly gets >6,000,000,000 specks, that might be enough to crush all our skulls (how much does a speck weigh?). Somewhere between that and “one speck, one blink, ever” is a special penny.
If we can just stipulate “the smallest unit of suffering (or negative qualia, or your favorite term),” then we can move on to the more interesting parts of the discussion.
I also see a qualitative difference if there can be secondary effects or summation causes secondary effects. As noted above, if 3^^^3/10^20 people die due to freakishly unlikely accidents caused by blinking, the choice becomes trivial. Similarly, +0.000001C sums somewhat differently than specks. 1 speck/day/person for 3^^^3 days is still not an existential risk; 3^^^3 specks at once will kill everyone.
(I still say Kyle wins.)
Okay, here’s the data: I choose SPECKS, and here is my background and reasons.
I am a cell biologist. That is perhaps not relevant.
My reasoning is that I do not think that there is much meaning in adding up individual instances of dust specks. Those of you who choose TORTURE seem to think that there is a net disutility that you obtain by multiplying epsilon by 3^^^3. This is obviously greater than the disutility of torturing one person.
I reject the premise that there is a meaningful sense in which these dust specks can “add up”.
You can think in terms of biological inputs—simplifying, you can imagine a system with two registers. A dust speck in the eye raises register A by epsilon. Register A also resets to zero if a minute goes by without any dust specks. Torture immediately sets register B to 10. I am morally obliged to intervene if register B ever goes above 1. In this scheme register A is a morally irrelevant register. It trades in different units than register B. No matter how many instances of A*epsilon there are, it does not warrant intervention.
You are making a huge, unargued assumption if you treat both torture and dust-specks in equivalent terms of “disutility”. I accept your question and argue for “SPECKS” by rejecting your premise of like units (which does make the question trivial). But I sympathize with people who reject your question outright.
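The two-register scheme described above might be sketched like this (a toy model; the one-minute reset window, the epsilon increment, and the intervention threshold are the comment’s own stipulations, and the class name is mine):

```python
class MoralRegisters:
    """Toy model: register A tracks irritation, register B tracks suffering."""
    EPSILON = 1e-9

    def __init__(self):
        self.a = 0.0              # irritation (morally irrelevant units)
        self.b = 0.0              # suffering
        self.last_speck_time = None

    def dust_speck(self, t):
        # Register A resets to zero after a minute with no specks
        if self.last_speck_time is not None and t - self.last_speck_time > 60:
            self.a = 0.0
        self.a += self.EPSILON
        self.last_speck_time = t

    def torture(self):
        self.b = 10.0

    def must_intervene(self):
        # Intervention keys only on register B; A can never cross over
        return self.b > 1.0
```

No number of calls to `dust_speck` ever makes `must_intervene` true, which is exactly the incommensurability the comment asserts: the two registers trade in different units.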
Mitchell, I acknowledge the defensibility of the position that there are tiers of incommensurable utilities. But to me it seems that the dust speck is a very, very small amount of badness, yet badness nonetheless. And that by the time it’s multiplied to ~3^^^3 lifetimes of blinking, the badness should become incomprehensibly huge just like 3^^^3 is an incomprehensibly huge number.
One reason I have problems with assigning a hyperreal infinitesimal badness to the speck, is that it (a) doesn’t seem like a good description of psychology (b) leads to total loss of that preference in smarter minds.
(B) If the value I assign to the momentary irritation of a dust speck is less than 1/3^^^3 the value of 50 years’ torture, then I will never even bother to blink away the dust speck because I could spend the thought or the muscular movement on my eye on something with a better than 1/3^^^3 chance of saving someone from torture.
(A) People often also think that money, a mundane value, is incommensurate with human life, a sacred value, even though they very definitely don’t attach infinitesimal value to money.
I think that what we’re dealing here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility, than anyone actually possessing a consistent utility function using hyperreal infinitesimal numbers.
The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and “Can’t Say No” spending.
I’m not sure why surreal/hyperreal numbers result in, essentially, monofocus.
Consider this scale on the surreals:
Omega^2: Utility of universal immortality; dis-utility of an existential risk. Omega utility for potentially omega people.
Omega: Utility of a human life.
1: One traditional utilon.
Epsilon: Dust speck in your eye.
Let’s say you’re a perfectly rational human (*cough cough*). You naturally start on the Omega^2 scale, with a certain finite amount of resources. Clearly, an omega of human lives is worth more than your own, so you do not, repeat, do not promptly donate them all to MIRI.
At least, not until you first calculate the approximate probability that your independent existence will make it more likely that someone somewhere will finally defeat death. Even if you have not the intelligence to do it yourself, or the social skills to keep someone else stable while they attack it, there’s still the fact that you can give more to MIRI, over the long run, if you live on just enough to keep yourself psychologically and physiologically sound and then donate the rest to MIRI.
This is, essentially, the “sanity” term. Most of the calculation is done at this step, but because your life, across your lifespan, has some chance of solving death, you are not morally obligated to have yourself processed into Soylent Green.
This step interrupts for one of three reasons. One, you have reached a point where spending further resources, either on yourself or some existential-risk organization, does not predictably affect an existential risk. Two, all existential risks are dealt with, and death itself has died. (Yay!) Three, part of ensuring your own psychological soundness requires it—really, this just represents the fact that sometimes, a dollar (approx. one utilon) or a speck (epsilon utilons) can result in your death or significant misery, but nevertheless such concerns should still be resolved in order of decreasing utility.
At this point, we break to the Omega step, which works much the same way, balancing charity donations against your own life and QoL. Situations where spending money can save lives—say, a hospital or a charity—should be evaluated at this step.
Then we break to the unitary step, which is essentially entirely QoL for yourself or others.
Hypothetically, we might then break to the epsilon step—in practice, since even in a post-scarcity society you will never finish optimizing your unitaries, this step is only evaluated when it or something in it is promoted by causal dependence to a higher step.
So, returning to the original problem: Barring all other considerations, 3^^^3*epsilon is still an epsilon, while 50 years of torture is probably something like 3/4 Omega. With two tiers of difference, the result is obvious, and has been resolved with intuition.
I’m going to conclude with something Hermione says in MoR, that I think applies here.
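The tier scheme sketched above behaves like a lexicographic comparison: no finite coefficient at a lower tier can ever outweigh a nonzero coefficient at a higher one. A minimal sketch (the tier labels and the 3/4-Omega figure are the comment’s; the dict encoding and function name are mine):

```python
# Encode a disutility as {tier: coefficient}; tiers from the comment:
# 2 = Omega^2 (existential), 1 = Omega (a life), 0 = utilons, -1 = epsilon.
def worse_than(x, y):
    """True if disutility x strictly exceeds disutility y,
    comparing lexicographically from the highest tier down."""
    for tier in (2, 1, 0, -1):
        if x.get(tier, 0) != y.get(tier, 0):
            return x.get(tier, 0) > y.get(tier, 0)
    return False

specks = {-1: 3**27}        # stand-in for any finite count of epsilon-tier specks
torture = {1: 0.75}         # "3/4 Omega" for fifty years of torture

print(worse_than(torture, specks))  # True: the Omega tier always dominates
```

This makes the disagreement precise: TORTURE-choosers deny that any such tier boundary exists, while this scheme builds one in by construction.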
Eliezer: Why does anything have a utility at all? Let us suppose there are some things to which we attribute an intrinsic utility, negative or positive—those are our moral absolutes—and that there are others which only have a derivative utility, deriving from the intrinsic utility of some of their consequences. This is certainly one way to get incommensurables. If pain has intrinsic disutility and inconvenience does not, then no finite quantity of inconvenience can by itself trump the imperative of minimizing pain. But if the inconvenience might give rise to consequences with intrinsic disutility, that’s different.
“The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and “Can’t Say No” spending.”
Could you post a scenario where most people would choose the option which unambiguously causes greater harm, without getting into these kinds of debates about what “harm” means? Eg., where option A ends with shooting one person, and option B ends with shooting ten people, but option B sounds better initially? We have a hard enough time getting rid of irrationality, even in cases where we know what is rational.
Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It’s worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.
I wonder if my answers make me fail some kind of test of AI friendliness. What would the friendly AI do in this situation? Probably write poetry.
For Robin’s statistics:
Given no other data but the choice, I would have to choose torture. If we don’t know anything about the consequences of the blinking or how many times the choice is being made, we can’t know that we are not causing huge amounts of harm. If the question deliberately eliminated these unknowns (i.e., the badness was limited to an eyeblink that does not immediately result in some disaster for someone or blindness for another, and you really are the one and only person making the choice ever), then I’d go with the dust. But these qualifications are huge when you consider 3^^^3. How can we say the eyeblink didn’t distract a surgeon and cause a slip of his knife? Given enough trials, something like that is bound to happen.
@Paul, I was trying to find a solution that didn’t assume “b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension).”, but rather established it for the case at hand. Unless it’s specifically stated in the hypothetical that this is a true 1-shot choice (which we know it isn’t in the real world, as we make analogous choices all the time), I think it’s legitimate to assume the aggregate result of the test repeated by everyone. Thus, I’m not invoking utilitarian calculation, but Kantian absolutism! ;) I mean to appeal to your practical intuition by suggesting that a constant barrage of specks will create an experience of a like kind with torture.
@Robin Hanson, what little expertise I have is in the liberal arts and sciences; Euclid and Ptolemy, Aristotle and Kant, Einstein and Sophocles, etc.
Eliezer—I think the issues we’re getting into now require discussion that’s too involved to handle in the comments. Thus, I’ve composed my own post on this question. Would you please be so kind as to approve it?
Recovering irrationalist: I think the hopefully-forthcoming-post-of-my-own will constitute one kind of answer to your comment. One other might be that one can, in fact, prefer huge dust harassment to a little torture. Yet a third might be that we can’t aggregate the pain of dust harassment across people, so that there’s some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it’s not.
For Robin’s statistics:
Torture on the first problem, and torture again on the followup dilemma.
relevant expertise: I study probability theory, rationality and cognitive biases as a hobby. I don’t claim any real expertise in any of these areas.
I think one of the reasons I finally chose specks is because, unlike what was implied, the suffering does not simply “add up”: 3^^^3 people getting one dust speck in their eye is most certainly not equal to one person getting 3^^^3 dust specks in his eyes. It’s not “3^^^3 units of disutility, total”, it’s one unit of disutility per person.
That still doesn’t really answer the “one person for 50 years or two people for 49 years” question, though—by my reasoning, the second option would be preferable, while obviously the first option is the preferable one. I might need to come up with a guideline stating that only experiences of suffering within a few orders of magnitude are directly comparable with each other, or some such, but it does feel like a crude hack. Ah well.
If statistics are being gathered, I’m a second year cognitive science student.
It is my impression that human beings almost universally desire something like “justice” or “fairness.” If everybody had the dust speck problem, it would hardly be perceived as a problem. If one person is being tortured, both the tortured person and others perceive unfairness, and society has a problem with this.
Actually, we all DO get dust motes in our eyes from time to time, and this is not a public policy issue.
In fact relatively small numbers of people ARE being tortured today, and this is a big problem both for the victims and for people who care about justice.
Beyond the distracting arithmetic lesson, this question reeks of Christianity, positing a situation in which one person’s suffering can take away the suffering of others.
This comment reeks of fuzzy reasoning.
For the moment I disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I’m open to new data on the balance of opinion and the balance of relevant expertize.
It seems like selection bias might make this data much less useful. (It applied in my case, at least.) The people who chose TORTURE were likely among those with the most familiarity with Eliezer’s writings, and so were able to predict that he would agree with them, and so felt less inclined to respond. Also, voicing their opinion would be publicly taking an unpopular position, which people instinctively shy away from.
Paul: Yet a third might be that we can’t aggregate the pain of dust harassment across people, so that there’s some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it’s not.
My induction argument covers that. As long as, all else equal, you believe:
A googolplex people tortured for time x is worse than one person tortured for time x+0.00001%.
A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect.
A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable.
If a is worse than b and b is worse than c then a is worse than c.
…you can show that, all else equal, to reduce suffering you pick TORTURE. As far as I can see, anyway; I’ve been wrong before. Again, I acknowledge that it depends on how much you care about reducing suffering compared to other concerns, such as an arbitrary cut-off point, abhorrence of using maths to answer such questions, or sacred values, which certainly can have utility by keeping worse irrationalities in check.
A googolplex people being dust speckled every second of their life without further ill effect
I don’t think this is directly comparable, because the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly—if I have to blink every second for an hour, I’ll probably get extremely frustrated on top of the slight discomfort of the specks themselves. I would say that one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.
the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly
That’s why I used a googolplex people to balance the growth. All else equal, do you disagree with: “A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect” for the range concerned?
one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.
I agree. I never said it wasn’t.
Have to run—will elaborate later.
All else equal, do you disagree with: “A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked x*2 times during their lifetime without further ill effect” for the range concerned?
I agree with that. My point is that agreeing that “A googolplex people being dust specked every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experienceable” doesn’t oblige me to agree that “A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experienceable”. (Unless “a further ill effect” is meant to exclude not only car accidents but superlinear personal emotional effects, but that would be stupid.)
* 1 billion seconds = 31.7 years
I think that what we’re dealing with here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility
Since real problems never possess the degree of certainty that this dilemma does, holding certain heuristics as absolutes may be the utility-maximizing thing to do. In a realistic version of this problem, you would have to consider the results of empowering whatever agent is doing this to torture people with supposedly good but nonverifiable results. If it’s a human or group of humans, not such a good idea; if it’s a Friendly AI, maybe you can trust it but can’t it figure out a better way to achieve the result? (There is a Pascal’s Mugging problem here.)
One more thing for TORTURErs to think about: if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer? I lean towards SPECK on considering this, although I’m less sure about the case of torturing 3^^^3 people for a minute each vs. 1 person for 50 years.
Just thought I’d comment that the more I think about the question, the more confusing it becomes. I’m inclined to think that if we consider the max utility state of every person having maximal fulfilment, and a “dust speck” as the minimal amount of “unfulfilment” from the top a person can experience, then two people experiencing a single “dust speck” is not quite as bad as a single person two “dust specks” below optimal. I think the reason I’m thinking that is that the second speck takes away more proportionally than the first speck did.
Oh, one other thing. I was assuming for my replies both here and in the other thread that we’re only talking about the actual “moment of suffering” caused by a dust speck event, with no potential “side effects”
If we consider that those can have consequences, I’m pretty sure that on average those would be negative/harmful, and when the law of large numbers is invoked via stupendously large numbers, well, in that case I’m going with TORTURE.
For the moment at least. :)
I agree with that. My point is that agreeing that “A googolplex people being dust specked every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experienceable” doesn’t oblige me to agree that “A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experienceable”.
Neither would I, you don’t need to. :-)
The only reason I can pull this off is because 3^^^3 is such a ludicrous number of people, allowing me to actually divide my army by a googolplex a silly number of times. You couldn’t cut the series up fine enough with a mere six billion people.
If you agree with my first two statements listed, you can use them (and your vast googolplex-cutter-proof army) to infer a series of small steps from each of Eliezer’s options, meeting in the middle at my third statement in the list. You then have a series of steps where a is worse than b, b than c, c than d, all the way from SPECKS to my third statement to TORTURE.
If for some reason you object to one of the first 3 statements, my vast 3^^^3 horde of minions will just cut the series up even finer.
If that’s not clear it’s probably my fault—I’ve never had to explain anything like this before.
if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer?
I sure would, but I wouldn’t ask 3^^^3 others to.
ok, without reading the above comments… (i did read a few of them, including robin hanson’s first comment—don’t know if he weighed in again).
dust specks over torture.
the apparatus of the eye handles dust specks all day long. i just blinked. it’s quite possible there was a dust speck in there somewhere. i just don’t see how that adds up to anything, even if a very large number is invoked. in fact with a very large number like the one described it is likely that human beings would evolve more efficient tear ducts, or faster blinking, or something like that. we would adapt and be stronger.
torturing one person for fifty years however puts a stain on the whole human race. it affects all of us, even if the torture is carried out fifty miles underground in complete secrecy.
Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I’m not sure why moral judgments have to be transitive.
Next, I’d deny the second-to-last premise (for one thing, I don’t know what it means to be horribly tortured for the shortest period possible—part of the tortureness of torture is that it lasts a while).
Eliezer, both you and Robin are assuming the additivity of utility. This is not justifiable, because it is false for any computationally feasible rational agent.
If you have a bounded amount of computation to make a decision, we can see that the number of distinctions a utility function can make is in turn bounded. Concretely, if you have N bits of memory, a utility function using that much memory can distinguish at most 2^N states. Obviously, this is not compatible with additivity of disutility, because by picking enough people you can identify more distinct states than the 2^N distinctions your computational process can make.
Now, the reason for adopting additivity comes from the intuition that 1) hurting two people is at least as bad as hurting one, and 2) that people are morally equal, so that it doesn’t matter which people are hurt. Note that these intuitions mathematically only require that harm should be monotone in the number of people with dust specks in their eyes. Furthermore, this requirement is compatible with the finite computation requirements—it implies that there is a finite number of specks beyond which disutility does not increase.
If we want to generalize away from the specific number N of bits we have available, we can take an order-theoretic viewpoint, and simply require that all increasing chains of utilities have limits. (As an aside, this idea lies at the heart of the denotational semantics of programming languages.) This forms a natural restriction on the domain of utility functions, corresponding to the idea that utility functions are bounded.
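Neel’s order-theoretic point—a monotone, bounded disutility has a well-defined limit, so aggregation need not be additive—can be illustrated with a tiny sketch. The particular form S/(S+1) is just one example function chosen for illustration:

```python
from fractions import Fraction

def speck_disutility(n_specks: int) -> Fraction:
    """Monotone in the number of specks, but bounded above by 1."""
    return Fraction(n_specks, n_specks + 1)

# Strictly increasing: more people specked is always worse...
assert speck_disutility(10**6 + 1) > speck_disutility(10**6)
# ...yet every value stays below the bound, so the increasing chain has a limit.
assert speck_disutility(10**100) < 1
```

Exact rational arithmetic (`Fraction`) sidesteps the floating-point rounding that would otherwise make the huge-number comparisons unreliable.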
It’s truly amazing the contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called “motivated continuation”.
One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I’ll use his text for reference:
“Unproblematically” vastly overstates what is required here. The question doesn’t require unproblematic aggregation; any slight tendency of aggregation will do just fine. We could stipulate that pain aggregates as the hundredth root of N and the question would still have the same answer. That is an insanely modest assumption, ie that it takes 2^100 people having a dust mote before we can be sure there is twice as much suffering as for one person having a dust mote.
“b” is actually inapplicable to the stated question and it’s “a” again anyways—just add “type” or “mode” to the second conjunction in “a”.
I see only three possibilities for challenging this, none of which affects the question at hand.
Favor a desideratum that roughly aligns with “pleasure” but not quite, such as “health”. Not a problem.
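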
Focus on some special situation where paining others is arguably desirable, such as deterrence, “negative reinforcement”, or retributive justice. ISTM that’s already been idealized away in the question formulation.
Just don’t care about others’ utility, eg Rand-style selfishness.
The “Rand-style selfishness” mars an otherwise sound comment.
Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I’m not sure why moral judgments have to be transitive.
I acknowledged it won’t hold for every moral. There are some pretty barking ones out there. I say it holds for choosing the option that creates less suffering. For finite values, transitivity should work fine.
Next, I’d deny the second-to-last premise (for one thing, I don’t know what it means to be horribly tortured for the shortest period possible—part of the tortureness of torture is that it lasts a while).
Fine, I still have plenty of googolplex-divisions left. Cut the series as fine as you like. Have billions of intervening levels of discomfort from speck->itch->ouch->“fifty years of reading the comments to this post.” The point is that if you slowly morph from TORTURE to SPECKS in very small steps, every step gets worse because the population multiplies enormously while the pain differs by an incredibly tiny amount.
Recovering irrationalist, I hadn’t thought of things in precisely that way—just “3^^4 is really damn big, never mind 3^^7625597484987”—but now that you point it out, the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.
It only requires 3^^5 = 3^(3^7625597484987) to get more googolplex factors than you can shake a stick at. But why not use a googol instead of a googolplex, so we can stick with 3^^4? If anything, the case is more persuasive with a googol because a googol is more comprehensible than a googolplex. It’s all about scope neglect, remember—googolplex just fades into a featureless big number, but a googol is ten thousand trillion trillion trillion trillion trillion trillion trillion trillion.
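The small end of these up-arrow values can actually be checked with a naive evaluator (my own sketch; only the tiniest inputs terminate before the numbers become uncomputable):

```python
def up_arrow(base: int, arrows: int, height: int) -> int:
    """Knuth up-arrow: base ^^...^ height, with the given number of arrows."""
    if arrows == 1:
        return base ** height
    result = base
    for _ in range(height - 1):          # fold the next-lower operator height-1 times
        result = up_arrow(base, arrows - 1, result)
    return result

assert up_arrow(3, 1, 3) == 27              # 3^3
assert up_arrow(3, 2, 3) == 7625597484987   # 3^^3 = 3^27
# 3^^4 = 3^7625597484987 already has roughly 3.6 trillion digits,
# so a googol (10^100) is vanishingly small by comparison.
```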
Tom, your claim is false. Consider the disutility function
D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))
Now, with this function, disutility increases monotonically with the number of people with specks in their eyes, satisfying your “slight aggregation” requirement. However, it’s also easy to see that going from 0 to 1 person tortured is worse than going from 0 to any number of people getting dust specks in their eyes, including 3^^^3.
The basic objection to this kind of functional form is that it’s not additive. However, it’s wrong to assume an additive form, because that assumption mandates unbounded utilities, which are a bad idea, because they are not computationally realistic and admit Dutch books. With bounded utility functions, you have to confront the aggregation problem head-on, and depending on how you choose to do it, you can get different answers. Decision theory does not affirmatively tell you how to judge this problem. If you think it does, then you’re wrong.
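Neel’s example function is easy to verify directly (a minimal sketch; double-precision floats are adequate for these comparisons):

```python
def disutility(torture: int, specks: int) -> float:
    """Neel's bounded form: torture term capped at 10, specks term capped at 1."""
    return 10 * torture / (torture + 1) + specks / (specks + 1)

# Monotone: adding specks always increases disutility...
assert disutility(0, 10**6 + 1) > disutility(0, 10**6)
# ...but a single torture (disutility 5.0) exceeds any number of specks (< 1).
assert disutility(1, 0) > disutility(0, 10**500)
```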
Again, not everyone agrees with the argument that unbounded utility functions give rise to Dutch books. Unbounded utilities only admit Dutch books if you do allow a discontinuity between infinite rewards and the limit of increasing finite awards, but you don’t allow a discontinuity between infinite planning and the limit of increasing finite plans.
Oh geez. Originally I had considered this question uninteresting so I ignored it, but considering the increasing devotion to it in later posts, I guess I should give my answer.
My justification, but not my answer, depends upon how the change is made.
-If the offer is made to all of humanity before being implemented (“Do you want to be the ‘lots of people get specks’ race or the ‘one guy gets severe torture’ race?”) I believe people could all agree to the specks by “buying out” whoever eventually gets the torture. For an immeasurably small amount, less than the pain of a speck, they can together amass funds sufficient to return the torture to the individual’s indifference curve. OTOH, the person getting the torture couldn’t possibly buy out that many people. (In other words, the specks are Kaldor-Hicks efficient.)
-If the offer, at my decision, would just be thrown onto humanity without possibility of advance negotiation, I would still take the specks because even if only people who feel bad for the tortured make a small contribution, it will still be comparable to what they had to offer in the above paragraph, such is the nature of large numbers of people.
I don’t think this is the result of my revulsion toward the torture, although I have that. I think my decision stems from how such large (and superlinearly increasing) utility differences imply the possibility of “evening it out” through some transfer.
the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.
You just warmed my heart for the day :-)
But why not use a googol instead of a googolplex
Shock and awe tactics. I wanted a featureless big number of featureless big numbers, to avoid wiggle-outs, and scream “your intuition ain’t from these parts”. In my head, FBNs always carry more weight than regular ones. Now you mention it, their gravity could get lightened by incomprehensibility, but we’re already counting to 3^^^3.
Googol is better. Fewer readers will have to google it.
@Neel.
Then I only need to make the condition slightly stronger: “Any slight tendency to aggregation that doesn’t beg the question.” Ie, that doesn’t place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:
D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))
...doesn’t meet this test.
Contrary to what you think, it doesn’t require unbounded utility. Limiting the lower bound of the range to (say) 2 * disutility(torture) will suffice. The rest of your message assumes it does.
For completeness, I note that introducing numbers comparable to 3^^^3 in an attempt to undo the 3^^^3 scaling would cause a formulation to fail the “slight” condition, modest though it is.
With so many so deep in reductionist thinking, I’m compelled to stir the pot by asking how one justifies the assumption that the SPECK is a net negative at all, aggregate or not, extended consequences or not? Wouldn’t such a mild irritant, over such a vast and diverse population, act as an excellent stimulus for positive adaptations (non-genetic, of course) and likely positive extended consequences?
A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today.
Hrm… Recovering’s induction argument is starting to sway me toward TORTURE.
More to the point, that and some other comments are starting to sway me away from the thought that disutility of single dust speck events per person becomes sublinear as people experiencing it increases (but total population is held constant)
I think if I made some errors, they were partly caused by “I really don’t want to say TORTURE”, and partly caused by my mistaking the exact nature of the nonlinearity. I maintain “one person experiencing two dust specks” is not equal to, and actually worse, I think, than two people experiencing one dust speck, but now I’m starting to suspect that two people each experiencing one dust speck is exactly twice as bad as one person experiencing one dust speck. (Assuming, as we shift the number of people experiencing DSE, that we hold the total population constant.)
Thus, I’m going to tentatively shift my answer to TORTURE.
“A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today.”
Although only one person, I’ve already begun, and have entered in my inventor’s notebook some apparently novel thinking on not only dust, but mites, dog hair, smart eyedrops, and nanobot swarms!
Tom, if having an upper limit on disutility(Specks) that’s lower than disutility(Torture=1) is begging the question in favour of SPECKS then why isn’t *not* having such an upper limit begging the question in favour of TORTURE?
I find it rather surprising that so many people agree that utility functions may be drastically nonlinear but are apparently completely certain that they know quite a bit about how they behave in cases as exotic as this one.
It should be obvious why. The constraint in the first one is neither argued for nor agreed on and by itself entails the conclusion being argued for. There’s no such element in the second.
I think we may be at cross purposes; my apologies if we are and it’s my fault. Let me try to be clearer.
Any particular utility function (if it’s real-valued and total) “begs the question” in the sense that it either prefers SPECKS to TORTURE, or prefers TORTURE to SPECKS, or puts them exactly equal. I don’t see how this can possibly be considered a defect, but if it is one then all utility functions have it, not just ones that prefer SPECKS to TORTURE.
Saying “Clearly SPECKS is better than TORTURE, because here’s my utility function and it says SPECKS is better” would be begging the question (absent arguments in support of that utility function). I don’t see anyone doing that. Neel’s saying “You can’t rule out the possibility that SPECKS is better than TORTURE by saying that no real utility function prefers SPECKS, because here’s one possible utility function that says SPECKS is better”. So far as I can tell you’re rejecting that argument on the grounds that any utility function that prefers SPECKS is ipso facto obviously unacceptable; that is begging the question.
g: that’s exactly what I’m saying. In fact, you can show something stronger than that.
Suppose that we have an agent with rational preferences, and who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes, and fewer people being tortured. This seems to be something everyone agrees on.
Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the number of people being tortured, and monotone in the number of people with dust specks in their eyes. The combination of a bound on the utility function, plus the monotonicity of their preferences, means that the utility function has a well-defined limit as the number of people with specks in their eyes goes to infinity. However, the existence of the limit doesn’t tell you what it is—it may be any value within the bounds.
Concretely, we can supply utility functions that justify either choice, and are consistent with minimal ethics. (I’ll assume the bound is the [0,1] interval.) In particular, all disutility functions of the form:
U(T, S) = A(T/(T+1)) + B(S/(S+1))
satisfy minimal ethics, for all positive A and B such that A plus B is less than one. Since A and B are free parameters, you can choose them to make either specks or torture preferred.
Likewise, Robin and Eliezer seem to have an implicit disutility function of the form
U_ER(T, S) = AT + BS
If you normalize to get [0,1] bounds, you can make something up like
U’(T, S) = (AT + BS)/(AT + BS + 1).
Now, note U’ also satisfies minimal ethics, in that if T is set to 1, then in the limit as S goes to infinity, U’ will still always go to one and exceed A/(A+1). So that’s why they tend to have the intuition that torture is the right answer. (Incidentally, this disproves my suggestion that bounded utility functions vitiate the force of E’s argument—but the bounds proved helpful in the end by letting us use limit analysis. So my focus on this point was accidentally correct!)
Now, consider yet another disutility function,
U″(T,S) = (ST + T)/(ST + T + 1)
This is also minimally ethical, and doesn’t have any of the free parameters that Tom didn’t like. But this function also always implies a preference for any number of dust specks to even a single instance of torture.
Basically, if you think the answer is obvious, then you have to make some additional assumptions about the structure of the aggregate preference relation.
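The moral of Neel’s family of functions—that bounded, minimally ethical disutility functions can rank the two options either way—can be condensed into a pair of concrete examples. The parameter choices below are mine, and u_specks is a variant of the parameter-free form above, written so the stated preference holds:

```python
BIG = 10**100  # stand-in for 3^^^3; any sufficiently huge number works

def u_torture(t: int, s: int) -> float:
    """Normalized additive form (like U'): enough specks eventually outweigh torture."""
    x = 10**9 * t + s
    return x / (x + 1)

def u_specks(t: int, s: int) -> float:
    """Parameter-free form where a single torture dominates any number of specks."""
    x = s * t + t
    return x / (x + 1)

# Both bounded in [0, 1), both monotone in t and s, yet they disagree:
assert u_torture(0, BIG) > u_torture(1, 0)  # specks are worse: choose TORTURE
assert u_specks(1, 0) > u_specks(0, BIG)    # torture is worse: choose SPECKS
```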
I have argued in previous comments that the utility of a person should be discounted by his or her measure, which may be based on algorithmic complexity. If this “torture vs specks” dilemma is to have the same force under this assumption, we’d have to reword it a bit:
Would you prefer that the measure of people horribly tortured for fifty years increases by x/3^^^3, or that the measure of people who get dust specks in their eyes increases by x?
I argue that no one, not even a superintelligence, can actually face such a choice. Because x is at most 1, x/3^^^3 is at most 1/3^^^3. But how can you increase the measure of something by more than 0 but no more than 1/3^^^3? You might, perhaps, generate a random number between 0 and 3^^^3 and do something only if that random number is 0. But algorithmic information theory says that for any program (even a superintelligence), there are pseudorandom sequences that it cannot distinguish from truly random sequences, and the prior probability that your random number generator is generating such a pseudorandom sequence is much higher than 1/3^^^3. Therefore the probability of that “random” number being 0 (or being any other number that you can think of) is actually much larger than 1/3^^^3.
Therefore, if someone tells you “measure of … increases by x/3^^^3”, in your mind you’ve got to be thinking “… increases by y” for some y much larger than 1/3^^^3. I think my theory explains both those who answer SPECKS and those who say no answer is possible.
A consistent utilitarian would choose the torture, but I don’t think it’s the moral choice.
Let’s bring this a little closer to home. Hypothetically, let’s say you get to live your life again 3^^^3 times. Would you prefer to have an additional dust speck in your eye in each of your future lives, or else be tortured for 50 years in a single one of them?
Any takers for the torture?
Man that’s a good one. It’s certainly interesting to know that my ability to override intuition when it comes to large numbers is far less effective when the question is applied to me personally. I’m assuming that this question assumes no other ill effects from the specks. And I know I should pick the torture. I know that if the torture is the best outcome for other people, it’s the best outcome for myself. But if I was given that choice in real life, I don’t think I would as of writing this comment.
I have some correcting to do.
Actually, I ended up resolving this at some point. I would in fact pick the dust specks in this case, because the situations aren’t identical. I’d spend a lot of time in my 3^^^3 lives worrying if I’m going to start being tortured for 50 years, but I wouldn’t worry about the dust specks. Technically, the disutility of the dust specks is worse, but my brain can’t comprehend the number “3^^^3”, so it would worry more about the torture happening to me. Adding in the disutility of worrying about the torture, even a small amount, across 3^^^3 / 2 lives, and it’s clear that I should pick the dust specks for myself in this situation, regardless of whether or not I choose torture in the original problem.
This is sort of avoiding the question. What if you made the choice, but then had your memory erased about the whole dilemma right afterwards? Assuming you knew before making your choice that your memory would be erased, of course.
Then I choose the torture. I’ve grown a bit more comfortable with overriding intuition in regards to extremely large numbers since my original reply 3 months ago.
I’d take it.
I’ll take it, as long as it’s no more likely to be one of the earliest lives. I don’t trust any universe that can make 3^^^3 of me not to be a simulation that would get pulled early.
Hrm… Recovering’s induction argument is starting to sway me toward TORTURE.
Interesting. The idea of convincing others to decide TORTURE is bothering me much more than my own decision.
I hope these ideas never get argued out of context!
Cooking something for two hours at 350 degrees isn’t equivalent to cooking something at 700 degrees for one hour.
I’d rather accept one additional dust speck per lifetime in 3^^^3 lives than have one lifetime out of 3^^^3 lives involve fifty years of torture.
Of course, that’s me saying that, with my single life. If I actually had that many lives to live, I might become so bored that I’d opt for the torture merely for a change of pace.
Recovering: chuckles no, I meant thinking about that, and rethinking about what the actual properties of what I’d consider to be a reasonable utility function led me to reject my earlier claim of the specific nonlinearity that led to my assumption that as you increase the number of people that receive a speck, the disutility is sublinear, and now I believe it to be linear. So huge bigbigbigbiggigantaenormous num specks would, of course, eventually have to have more disutility than the torture. But since to get to that point Knuth arrow notation had to be invoked, I don’t think there’s any worry that I’m off to get my “rack winding certificate” :P
But yeah, out of context this debate would sound like complete nonsense… “crazy geeks find it difficult to decide between dust specks and extreme torture.”
I do have to admit though, Andrew’s comment about individual living 3^^^3 times and so on has me thinking again. If “keep memories and so on of all previous lives = yes” (so it’s really one really long lifespan) and “permanent physical and psychological damage post torture = no”) then I may take that. I think. Arrrgh, stop messing with my head. Actually, no, don’t stop, this is fun! :)
I’d take it.
I find your choice/intuition completely baffling, and I would guess that far less than 1% of people would agree with you on this, for whatever that’s worth (surely it’s worth something.) I am a consequentialist and have studied consequentialist philosophy extensively (I would not call myself an expert), and you seem to be clinging to a very crude form of utilitarianism that has been abandoned by pretty much every utilitarian philosopher (not to mention those who reject utilitarianism!). In fact, your argument reads like a reductio ad absurdum of the point you are trying to make. To wit: if we think of things in equivalent, additive utility units, you get this result that torture is preferable. But that is absurd, and I think almost everyone would be able to appreciate the absurdity when faced with the 3^^^3 lives scenario. Even if you gave everyone a one week lecture on scope insensitivity.
So… I don’t think I want you to be one of the people to initially program AI that might influence my life...
No Mike, your intuition for really large numbers is non-baffling, probably typical, but clearly wrong, as judged by another non-Utilitarian consequentialist (this item is clear even to egoists).
Personally I’d take the torture over the dust specks even if the number was just an ordinary incomprehensible number like say the number of biological humans who could live in artificial environments that could be built in one galaxy. (about 10^46th given a 100 year life span and a 300W (of terminal entropy dump into a 3K background from 300K, that’s a large budget) energy budget for each of them). It’s totally clear to me that a second of torture isn’t a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.
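The arithmetic in that comment checks out to order of magnitude. A quick sanity check (the numbers are the comment’s; the code is just a verification sketch):

```python
seconds_in_50_years = 50 * 365.25 * 24 * 3600
assert 1.5e9 < seconds_in_50_years < 1.6e9   # ~1.58 billion seconds

speck = 1.0
second_of_torture = 1e27 * speck   # granting a "billion billion billion" specks per second
torture_total = second_of_torture * seconds_in_50_years
specks_total = speck * 1e46        # one speck each for ~10^46 galaxy-dwellers

# The specks still outweigh the torture by roughly 10^10 : 1,
# hence the stated preference for TORTURE even at this modest scale.
assert specks_total / torture_total > 1e9
```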
The only consideration that dulls my certainty here is that I’m not convinced that my utility function can even encompass these sorts of ordinary incomprehensible numbers, but it seems to me that there is at least a one-in-a-billion chance that it can.
So, if additive utility functions are naive, does that mean I can swap around your preferences at random like jerking around a puppet on a string, just by having a sealed box in the next galaxy over where I keep a googol individuals who are already being tortured for fifty years, or already getting dust specks in their eyes, or already being poked with a stick, etc., which your actions cannot possibly affect one way or the other?
It seems I can arbitrarily vary your “non-additive” utilities, and hence your priorities, simply by messing with the numbers of existing people having various experiences in a sealed box in a galaxy a googol light years away.
This seems remarkably reminiscent of E. T. Jaynes’s experience with the “sophisticated” philosophers who sniffed that of course naive Bayesian probability theory had to be abandoned in the face of paradox #239; which paradox Jaynes would proceed to slice into confetti using “naive” Bayesian theory but with this time with rigorous math instead of the various mistakes the “sophisticated” philosophers had made.
There are reasons for preferring certain kinds of simplicity.
Michael Vassar:
Well, in the prior comment, I was coming at it as an egoist, as the example demands.
It’s totally clear to me that a second of torture isn’t a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.
I reject the notion that each (time,utility) event can be calculated in the way you suggest. Successive speck-type experiences for an individual (or 1,000 successive dust specks for 1,000,000 individuals) over the time period we are talking about would easily overtake 50 years of torture. It doesn’t make sense to tally (total human disutility of torture (1 person/50 years in this case))(some quantification of the disutility of a time unit of torture) vs. (total human speck disutility)(some quantification of a unit of speck disutility).
The universe is made up of distinct beings (animals included), not the sum of utilities (which is just a useful construct).
All of this is to say:
If we are to choose for ourselves between these scenarios, I think it is incredibly bizarre to prefer 3^^^3 satisfying lives and one indescribably horrible life to 3^^^3 infinitesimally better lives than the alternative 3^^^3 lives. I think doing so ignores basic human psychology, from whence our preferences arise.
To continue this business of looking at the problem from different angles:
Another formulation, complementary to Andrew Macdonald’s, would be: Should 3^^^3 people each volunteer to experience a speck in the eye, in order to save one person from fifty years of torture?
And with respect to utility functions: Another nonlinear way to aggregate individual disutilities x, y, z… is just to take the maximum, and to say that a situation is only as bad as the worst thing happening to any individual in that situation. This could be defended if one’s assignment of utilities was based on intensity of experience, for example. There is no-one actually having a bad experience with 3^^^3 times the badness of a speck in the eye. As for the fact that two people suffering identically turns out to be no worse than just one—accepting a few counterintuitive conclusions is a small price to pay for simplicity, right?
I find it positively bizarre to see so much interest in the arithmetic here, as if knowing how many dust flecks go into a year of torture, just as one knows that sixteen ounces go into one pint, would inform the answer.
What happens to the debate if we absolutely know the equation:
3^^^3 dustflecks = 50 years of torture
or
3^^^3 dustflecks = 600 years of torture
or
3^^^3 dustflecks = 2 years of torture?
The nation of Nod has a population of 3^^^3. By amazing coincidence, every person in the nation of Nod has $3^^^3 in the bank. (With a money supply like that, those dollars are not worth much.) By yet another coincidence, the government needs to raise revenues of $3^^^3. (It is a very efficient government and doesn’t need much money.) Should the money be raised by taking $1 from each person, or by simply taking the entire amount from one person?
I take $1 from each person. It’s not the same dilemma.
----
Ri: The idea of convincing others to decide TORTURE is bothering me much more than my own decision.
PK: I don’t think there’s any worry that I’m off to get my “rack winding certificate” :P
Yes, I know. :-) I was just curious about the biases making me feel that way.
individual living 3^^^3 times...keep memories and so on of all previous lives
3^^^3 lives worth of memories? Even at one bit per life, that makes you far from human. Besides, you’re likely to get tortured in googolplexes of those lifetimes anyway.
Arrrgh, stop messing with my head. Actually, no, don’t stop, this is fun! :)
OK here goes… it’s this life. Tonight, you start fifty years being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to sing along to one of his songs. BARNEYLOVE or SONGS?
Andrew Macdonald asked:
Any takers for the torture?
Assuming the torture-life is randomly chosen from the 3^^^3 sized pool, definitely torture. If I have a strong reason to expect the torture life to be found close to the beginning of the sequence, similar considerations as for the next answer apply.
Recovering irrationalist asks:
OK here goes… it’s this life. Tonight, you start fifty years being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to sing along to one of his songs. BARNEYLOVE or SONGS?
The answer depends on whether I expect to make it through the 50 year ordeal without permanent psychological damage. If I know with close to certainty that I will, the answer is BARNEYLOVE. Otherwise, it’s SONGS; while I might still acquire irreversible psychological damage, it would probably take much longer, giving me a chance to live relatively sane for a long time before then.
Cooking something for two hours at 350 degrees isn’t equivalent to cooking something at 700 degrees for one hour.
Caledonian has made a great analogy for the point that is being made on either side. May I over-work it?
They are not equivalent, but there is some length of time at 350 degrees that will burn as badly as 700 degrees. In 3^^^3 seconds, your lasagna will be … okay, entropy will have consumed your lasagna by then, but it turns into a cloud of smoke at some point.
Correct me if I am wrong here, but I don’t think there is any length of time at 75 or 100 degrees that will burn as badly as one hour at 700 degrees. It just will not cook at all. Your food will sit there and rot, rather than burning.
There must be some minimum temperature at which various things can burn. Given enough time at that temperature, it is the equivalent of just setting it on fire. Below that temperature, it is qualitatively different. You do not get bronze no matter how long you leave copper and tin at room temperature.
(Or maybe I am wrong there. Maybe a couple of molecules will move properly at room temperature over a few centuries, so the whole mass becomes bronze in less than 3^^^3 seconds. I assume that anything physically possible will happen at some point in 3^^^3 seconds.)
Are there any SPECKS advocates who say we should pick two people tortured for 49.5 years rather than one for 50 years? If there is any degree of summation possible, 3^^^3 will get us there.
But, SPECKS can reply, there can be levels across which summation is not possible. If lasagna physically cannot burn at 75 degrees, even letting it “cook” for 33^^^^33 seconds, then it will never be as badly burned as one hour at 700 degrees.
“Did I say 75?” TORTURE replies. “I meant whatever the minimum possible is for lasagna to burn, plus 1/3^^3 degrees.” SPECKS must grant victory in that case, but wins at 2/3^^3 degrees lower.
Which just returns the whole thing back to the primordial question-begging on either side, whether specks can ever sum to torture. If any number of beings needing to blink ever adds to 10 seconds of torture, TORTURE is in a very strong position, unless you are again arguing that 10 seconds of TORTURE is like 75 degrees, and there is some magic penny somewhere.
(Am I completely wrong? Aren’t physics and chemistry full of magic pennies like escape velocities and temperatures needed for physical reactions?)
TORTURE must argue that yes, it is the sort of thing that adds. SPECKS must argue that it is like asking how many blades of grass you must add to get a battleship. “Mu.”
Zubon, we could formalize this with a tiered utility function (one not order-isomorphic to the reals, but containing several strata each order-isomorphic to the reals).
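Such a tiered utility function can be sketched as a lexicographic ordering, for instance in Python (the tier names and values below are invented purely for illustration):

```python
# Each stratum dominates everything below it, regardless of quantity:
# comparing (tier, amount) tuples lexicographically realizes this ordering.
TIERS = {"speck": 0, "needle": 1, "torture": 2}  # hypothetical strata

def disutility(kind, amount):
    """Return a tiered disutility; a higher tier trumps any lower-tier amount."""
    return (TIERS[kind], amount)

# Under this ordering, no number of specks ever outweighs ten seconds of torture:
assert disutility("speck", 10**100) < disutility("torture", 10)
# ...while amounts still compare normally within a tier:
assert disutility("torture", 9) < disutility("torture", 10)
```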
But then there is a magic penny, a single sharp divide where no matter how many googols of pieces you break it into, it is better to torture 3^^^3 people for 9.99 seconds than to torture one person for 10.01 seconds. There is a price for departing the simple utility function, and reasons to prefer certain kinds of simplicity. I’ll admit you can’t slice it down further than the essentially digital brain; at some point, neurons do or don’t fire. This rules out divisions of genuine googolplexes, rather than simple billions of fine gradations. But if you admit a tiered utility function, it will sooner or later come down to one neuron firing.
And I’ll bet that most Speckists disagree on which neuron firing is the magical one. So that for all their horror at us Unspeckists, they will be just as horrified at each other, when one of them claims that thirty seconds of waterboarding is better than 3^^^3 people poked with needles, and the other disagrees.
...except that, if I’m right about the biases involved, the Speckists won’t be horrified at each other.
If you trade off thirty seconds of waterboarding for one person against twenty seconds of waterboarding for two people, you’re not visibly treading on a “sacred” value against a “mundane” value. It will rouse no moral indignation.
Indeed, if I’m right about the bias here, the Speckists will never be able to identify a discrete jump in utility across a single neuron firing, even though the transition from dust speck to torture can be broken up into a series of such jumps. There’s no difference of a single neuron firing that leads to the feeling of a comparison between a sacred and an unsacred value. The feeling of sacredness, itself, is quantitative and comes upon you in gradual increments of neurons firing—even though it supposedly describes a utility cliff with a slope higher than 3^^^3.
The prohibition against torture is clearly very sacred, and a dust speck is clearly very unsacred, so there must be a cliff sharper than 3^^^3 between them. But the distinction between one dust speck and two dust specks doesn’t seem to involve a comparison between a sacred and mundane value, and the distinction between 50 and 49.99 years of torture doesn’t seem to involve a comparison between a sacred and a mundane value...
So we’re left with cyclical preferences. The one will trade 3 people suffering 49.99 years of torture for 1 person suffering 50 years of torture; after having previously traded 9 people suffering 49.98 years of torture for 3 people suffering 49.99 years of torture; and so on back to the starting point where it’s better for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck; right after, a moment before, having traded one person suffering 50 years of torture for 3^1000000000 people feeling one dust speck.
I think it’s worse for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck. After all, the next step is that it is worse for 3^1000000000 people to feel one dust speck than for 3^1000000001 people to feel less than one dust speck, which seems right.
I think that we “speckists” see injuries as poisons: they can destroy people’s lives only if they reach a certain concentration. So a greater but far more diluted pain can be less dangerous than a smaller but more concentrated one. 50 and 49 years of torture are still far over the threshold. One or two dust specks, on the other hand, are far below it.
Assuming that there are 3^^^3 distinct individuals in existence, I think the answer is pretty obvious: pick the torture. However, given that we cannot possibly hope to visualize so many individuals, it’s a pointlessly large number. In fact, I would go as low as one quadrillion human beings with dust specks in their eyes outweighing one individual’s 50 years of torture. Consider: one quadrillion seconds of minute but noticeable pain versus a scant fifty years of tortured hell. One quadrillion seconds is about 31,709,792 years. Let’s just go with 32 million years. Then factor in the magnitudes: torture is far worse than dust specks, but 50 years versus 32 million years? Good enough odds for you?
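The quadrillion-seconds figure checks out (using 365-day years, which is what the quoted number implies):

```python
# One quadrillion seconds expressed in years.
quadrillion_seconds = 10**15
seconds_per_year = 365 * 24 * 3600  # 31,536,000
years = quadrillion_seconds / seconds_per_year
print(round(years))  # 31709792, i.e. about 31.7 million years
```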
However, that being said, the question is yet another installment of lifeboat ethics, and has little bearing on the real world. If we are ever forced to make such a decision, that’s one thing, but in the meantime let’s work through systemic issues that might lead to such a situation instead.
My initial reaction (before I started to think...) was to pick the dust specks, given that my biases made the suffering caused by the dust specks morally equivalent to zero, and 0^^^3 is still 0.
However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, then you kind of have to take the other consequences of the sudden appearance of the dust specks under consideration, don’t you?
If I was omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car accidents would occur? Heavy machinery accidents? Workplace accidents? Even if the chance is vanishingly small—let’s say 6 accidents occur on Earth because everyone got a dust speck in their eye. That’s one in a billion.
That’s one accident for every 10^9 people. Now, what percentage of those are fatal? Transport Canada currently lists 23.7% of car accidents in 2003 as resulting in a fatality, which is about 1 in 4. Let’s be nice, assume that everywhere else on Earth is safer, and take that down to 1 in 100 accidents being fatal.
Now, if everyone in existence gets a dust speck in their eye because of my decision, assuming the hypothetical 3^^^3 people live in something approximating the lifestyles on Earth, I’ve conceivably doomed 1 in 10^11 people to death.
That is, my cloud of dust specks has killed 3^^^3 / 10^11 people.
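The chain of rates in this estimate can be made explicit (all figures are the commenter’s assumptions, not real accident statistics):

```python
# Commenter's assumed rates, chained together.
accidents_per_person = 1 / 1e9   # ~6 accidents among ~6 billion speck recipients
fatal_fraction = 1 / 100         # assumed share of accidents that are fatal
deaths_per_person = accidents_per_person * fatal_fraction
print(deaths_per_person)         # roughly 1e-11: one death per 10^11 people
```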
It is cheating to answer this by using worse individual consequences than the dust specks themselves.
The very point of the question is the infinitesimality of each individual disutility.
The more I think about the question, the more I’m convinced that it attempts to demonstrate the commensurability of disutility by invoking the commensurability of disutility.
I don’t see how it’s attempting to demonstrate the commensurability of disutility at all; it seems to be using the assumed commensurability of disutility to challenge intuitions about disutility. Can you say more about what is convincing you?
If the OP’s challenging a moral intuition that doesn’t at some point reduce to commensurability, then I don’t know what it is. It asks us to imagine the worst thing that could happen to a random person, and then the least perceptibly bad thing that could happen, and seems to be making the argument that an unimaginably huge number of the latter would trump a single instance of the former. What’s that a reductio for, if not the assumption that torture (or anything comparably bad) carries a special kind of disutility?
On the other hand I’m not sure what the post was written in response to, if anything, so there might be some contextual information there that I’m missing.
I’m… puzzled by this exchange.
But, yes, agreed that a lot of objections to this post implicitly assert that torture is incommensurable with dust-specks, and EY is challenging that intuition.
I have a question/answer in relation to this post that seems to be off-topic for the forum. Click on my name if interested.
If the poor bastard being tortured is G. W. Bush, I’m all for it . . .
Since I would not be one of the people affected, I would not consider myself able to make that decision alone. In fact, my preferences are irrelevant in that situation, even if I consider the situation to be obvious.
To have a situation with 3^^^3 people, we must have at least that many people capable of existing in some meaningful way. I assume we cannot query them about their preferences in any meaningful (omniscient) way. As I cannot choose who will be tortured or who gets dust specks, I have to make a collective decision.
I think my solution would be to take three different groups of randomly chosen people. The first group would be asked the question and given a chance to discuss and change their minds. The second group would be asked whether they would save 3^^^3 people from dust specks by accepting torture. The third group would be asked whether they would agree to be dust-specked, giving the person to be tortured a 1/3^^^3 chance of being saved.
If one of the latter tests showed a significant preference for one of the situations, I would assume it is for some reason more acceptable when people are given the chance to choose. If it seemed that people were either willing to change the scenario in both situations, or unwilling in either, I would rely on the stated preference of the first group and go by that.
I do not think this solution is good enough.
Evolution seems to have favoured the capacity for empathy (the specks choice) over the capacity for utility calculation, even though utility calculation would have been a ‘no brainer’ for the brain capacity we have.
The whole concept reminds me of the Turing test. Turing, as a mathematician, just seems to have completely failed to understand that we don’t assign rationality, or sentience, to another object by deduction. We do it by analogy.
I know that this is only a hypothetical example, but I must admit that I’m fairly shocked at the number of people indicating that they would select the torture option (as long as it wasn’t them being tortured). We should be wary of the temptation to support something unorthodox for the effect of: “Hey, look at what a hardcore rationalist I can be.” Real decisions have real effects on real people.
And we should be wary to select something orthodox for fear of provoking shock and outrage. Do you have any reason to believe that the people who say they prefer TORTURE to SPECKS are motivated by the desire to prove their rationalist credentials, or that they don’t appreciate that their decisions have real consequences?
Jeffrey, on one of the other threads, I volunteered to be the one tortured to save the others from the specks.
As for “Real decisions have real effects on real people,” that’s absolutely correct, and that’s the reason to prefer the torture. The utility function implied by preferring the specks would also prefer lowering all the speed limits in the world in order to save lives, and ultimately would ban the use of cars. It would promote raising taxes by a small amount in order to reduce the amount of violent crime (including crimes involving torture of real people), and ultimately would promote raising taxes on everyone until everyone could barely survive on what remains.
Yes, real decisions have real effects on real people. That’s why it’s necessary to consider the total effect, not merely the effect on each person considered as an isolated individual, as those who favor the specks are doing.
Following your heart and not your head—refusing to multiply—has also wrought plenty of havoc on the world, historically speaking. It’s a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture.
I think you’ve constructed your utility wrong in this instance. Without losing track of scope, we have 3^^^3 motes of dust in 3^^^3 eyes. And yes, that outweighs 50 years of torture, if and only if people have zero tolerance. But people don’t break down into sobbing messes at the (literally at least) slightest provocation. There is a small threshold of badness that can happen to someone without them caring, and as long as all 3^^^3 of them only get epsilon below that, the total suffering for all 3^^^3 of them summed is exactly 0. We have 3^^^3 people, and 3^^^3 motes of dust, but also 3^^^3 separate emotional shock absorbers that take that speck of dust without flinching.
It is non-linear. If you keep adding dust, eventually it starts breaking people’s shock absorbers. And once those 3^^^3 people start experiencing nonzero suffering, it would quickly add up to more than fifty man-years of torture. Then the equation stops favoring dust motes. And here I hope I have some other recourse, because “If you ever find yourself thinking that torture is the right thing to do,” is one of my warnings. I hope I can come out clever enough to take a third option where nobody gets tortured.
But Eliezer’s original description said this:
It’s an essential part of the setup that the disutility of a “dust speck” is not zero.
Let me change “noticing” to “caring” then. Thank you for the correction.
I wish I could upvote this 3^^^3 times.
“Following your heart and not your head—refusing to multiply—has also wrought plenty of havoc on the world, historically speaking. It’s a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture.”
I’m not really convinced that multiplication of the dust-speck effect is relevant. Subjective experience is restricted to individuals, not collectives. To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B?
If the goal is to be a utilitarian ethicist with the well-being of humanity as your highest priority, then something may be wrong with your model when the vast majority of humans would choose the option that you wouldn’t (as I suspect they would). Utility isn’t all that matters to most people. Is utilitarianism the only “real” ethics?
My criticisms can sometimes come across the wrong way. (And I know that you actually do care about humanity, Eli.) I don’t mean to judge here, just strongly disagree. Not that I retract what I wrote; I don’t.
Jeffrey wrote: To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B? Gosh. The only justification I can see for that equivalence would be some general belief that badness is simply independent of numbers. Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?
In any case: since you now appear to be conceding that it’s possible for someone to prefer TORTURE to SPECKS for reasons other than a childish desire to shock, are you retracting your original accusation and analysis of motives? … Oh, wait, I see you’ve explicitly said you aren’t. So, you know that one leading proponent of the TORTURE option actually does care about humanity; you agree (if I’ve understood you right) that utilitarian analysis can lead to the conclusion that TORTURE is the less-bad option; I assume you agree that reasonable people can be utilitarians; you’ve seen that one person explicitly said s/he’d be willing to be the one tortured; but in spite of all this, you don’t retract your characterization of that view as shocking; you don’t retract your implication that people who expressed a preference for TORTURE did so because they want to show how uncompromisingly rationalist they are; you don’t retract your implication that those people don’t appreciate that real decisions have real effects on real people. I find that … well, “fairly shocking”, actually.
(It shouldn’t matter, but: I was not one of those advocating TORTURE, nor one of those opposing it. If you care, you can find my opinions above.)
Jeffrey, do you really think serial killing is no worse than murdering a single individual, since “Subjective experience is restricted to individuals”?
In fact, if you kill someone fast enough, he may not subjectively experience it at all. In that case, is it no worse than a dust speck?
“Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?”
My criticism was for this specific initial example, which yes did seem “obvious” to me. Very few, if any, ethical opinions can be generalized over any situation and still seem reasonable. At least by my definition of “reasonable”.
Notice that I didn’t single anyone out as being “bad”. Morality is subjective and I don’t dispute that. “Every man is right by his own mind”. I cautioned that we shouldn’t allow a desire to stand-out factor into a decision such as this. I know well that theatrics isn’t an uncommon element on mailing lists/blogs. This example shocked me because toy decisions can become real decisions. I have a hunch that I wouldn’t be the only person shocked by this. If this specific example were put before all of humanity, I imagine that the people who were not shocked by it, would be the minority. I don’t think that I’m being unreasonable.
I can see myself spending too much time here, so I’m going to finish up and y’all can have the last word. I’ll admit that it’s possible that one or more of you actually would sacrifice yourself to save others from a dust speck. Needless to say, I think it would be a huge mistake on your part. I definitely wouldn’t want you to do it on my behalf, if for nothing more than selfish reasons: I don’t want it weighing on my conscience. Hopefully this is a moot point anyway, since it should be possible to avoid both unwanted dust specks and unwanted torture (e.g. via a Friendly AI). We should hope that torture dies away with the other tragedies of our past, and isn’t perpetuated into our not-yet-tarnished future.
I know you’re all getting a bit bored, but I’m curious what you think about a different scenario:
What if you have to choose between (a) for the next 3^^^3 days, you get one extra speck in your eye per day beyond the normal amount, and for 50 years of that period you’re placed in stasis, or (b) you get the normal amount of specks in your eyes, but during the next 3^^^3 days you’ll pass through 50 years of atrocious torture.
Everything else is considered equal in the other cases, including the fact that (i) your total lifespan will be the same in both cases (more than 3^^^3 days), (ii) the specks are guaranteed to not cause any physical effects other than those mentioned in the original post (i.e., you’re minimally annoyed and blink once more each day; there are no “tricks” about hidden consequences of specks), (iii) any other occurrence of specks in the eye (yours or others’) or torture (you or others) will happen exactly the same for either choice, (iv) the 50 years of either stasis or torture would happen at the same points and (v) after the end of the 3^^^3 days the state of the world is exactly the same except for you (e.g., the genie doesn’t come back with something tricky).
Also assume that during the 3^^^3 days you are human-shaped and human-minded, except that your memory (and ability to use it) is stretched to work over that duration as a typical human’s does during a typical life.
Does your answer change if either:
A) it’s guaranteed that everything else is perfectly equal (e.g., the two possible cases will magically be forbidden to interfere with any of your decisions during the 3^^^3 days, but afterwards you’ll remember them; in the case of torture, any remaining trauma will remain until healed “physically”. More succinctly, there are no side effects during the 3^^^3 days, and none other than the “normal” ones afterwards).
B) the 50 years of torture happen at the start, end, or distributed throughout the period.
C) we replace the life period with either (i) your entire lifespan or (ii) infinity, and/or the period of torture with (i) any constant length larger than one year or (ii) any constant fraction of the lifespan discussed.
D) you are magically justified to put absolute certain trust in the offer (i.e., you’re sure the genie isn’t tricking you).
E) replace “speck in the eye” by “one hair on your body grows by half the normal amount” for each day.
Of course, you don’t have to address every variation mentioned, just those that you think relevant.
OK, I see I got a bit long-winded. The interesting part of my question is if you’d take the same decision if it’s about you instead of others. The answer is obvious, of course ;-)
The other details/versions I mentioned are only intended to explore the “contour of the value space” of the other posters. (: I’m sure Eliezer has a term for this, but I forget it.)
Bogdan’s presented almost exactly the argument that I too came up with while reading this thread. I would choose the specks in that argument and also in the original scenario (as long as I am not committing to the same choice being repeated an arbitrary number of times, and I am not causing more people to crash their cars than I cause not to crash their cars; the latter seems like an unlikely assumption, but thought experiments are allowed to make unlikely assumptions, and I’m interested in the moral question posed when we accept the assumption). Based on the comments above, I expect that Eliezer is perfectly consistent and would choose torture, though (as in the scenario with 3^^^3 repeated lives).
Eliezer and Marcello do seem to be correct in that, in order to be consistent, I would have to choose a cut-off point such that n dust specks in 3^^^3 eyes would be less bad than one torture, but n+1 dust specks would be worse. I agree that it seems counterintuitive that adding just one speck could make the situation “infinitely” worse, especially since the speckists won’t be able to agree exactly where the cut-off point is.
But it’s only the infinity that’s unique to speckism. Suppose that you had to choose between inflicting one minute of torture on one person, or putting n dust specks into that person’s eye over the next fifty years. If you’re a consistent expected utility altruist, there must be some n such that you would choose n specks, but not n+1 specks. What makes the n+1st speck different? Nothing, it just happens to be the cut-off point you must choose if you don’t want to choose 10^57 specks over torture, nor torture over zero specks. If you make ten altruists consider the question independently, will they arrive at exactly the same value of n? Prolly not.
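The forced cut-off can be made concrete with exact arithmetic (the two disutility values below are invented, chosen so the cut-off lands at the 10^57 figure mentioned above):

```python
import math
from fractions import Fraction

# Invented disutilities; exact rationals avoid floating-point rounding.
speck = Fraction(1, 10**60)           # assumed disutility of one dust speck
torture_minute = Fraction(1, 10**3)   # assumed disutility of one minute of torture

# Smallest n for which n specks are at least as bad as the torture:
n = math.ceil(torture_minute / speck)
print(n == 10**57)  # True
assert n * speck >= torture_minute        # n specks: choose the torture instead
assert (n - 1) * speck < torture_minute   # n-1 specks: choose the specks
```

Nothing distinguishes the n-th speck from the (n−1)-th; the cut-off is simply forced on any consistent expected-utility altruist by the two endpoint preferences.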
The above argument does not destroy my faith in decision theory, so it doesn’t destroy my provisional acceptance of speckism, either.
I came across this post only today, because of the current comment in the “recent comments” column. Clearly, it was an exercise that drew an unusual amount of response. It further reinforces my impression of much of the OB blog, posted in August, and denied by email.
I think you should ask everyone until you have at least 3^^^3 people whether they would consent to having a dust speck fly into their eye to save someone from torture. When you have enough people just put dust specks into their eyes and save the others.
The question is, of course, silly. It is perfectly rational to decline to answer. I choose to try to answer.
It is also perfectly rational to say “it depends”. If you really think “a dust speck in 3^^^3 eyes” gives a uniquely defined probability distribution over different subsets of possibilityverse, you are being ridiculous. But let’s pretend it did: let’s pretend we had 3^^^^3 parallel Eliezers, standing on flat golden surfaces in 1G and one atmosphere, for just long enough to ask each other enough questions to define the problem properly. (I’m sorry, Eliezer, if by stating that possibility, I’ve increased the “true”ness of that part of the probabilityverse by ((3^^^3+1)/3^^^3) :) ).
You can also say “I’ve thought about it, but I don’t trust my thought processes”. That is not my position.
My position is that this question does not, in fact, have an answer. I think that that fact is very important.
It’s not that the numbers are meaningless. 3^^^3 is a very exact number, and you can prove any number of things about it. A different question using ridiculous numbers—say, would you rather torture 4^^^4 people for 5 minutes or 3^^^3 of them for 50 years—has a single correct answer which is very clear (of course, the 3^^^3 ones; 4^^^4 >>> (3^^^3)^2). (Unless there were very bizarre extra conditions on the problem.)
It’s just that there is no universal moral utility function which inputs a probability distribution over a finite subset of the possibilityverse and outputs a number. It’s more like relativistic causality—substitute “better” for “after”. A is after B and B is a spacelike distance from C, but C can also be spacelike from A. The dust specks and the torture are incomparable, a spacelike distance.
I think that, philosophically, that makes a big difference. If you philosophically can’t always go around morally comparing near-infinite sets, then it’s silly to try to approximate how you would behave if you could. Which means you consider the moral value of the consequences which you could possibly anticipate. So yeah, if you are working on AI, you are morally obligated to think about FAI, because that’s intentional action, and you would have to be a monster to say you didn’t care. But you don’t get to use FAI and the singularity to trump the here-and-now, because in many ways they’re just not comparable.
Which means, to me, for instance, that people can understand the singularity idea and believe it has a non-0 probability, and have abilities or resources that would be meaningful to the FAI effort, and still morally choose to simply live as “good people” in a more traditional sense (have a good life in which they make the people with whom they interact overall happier). It’s not just a lack of ability to trace the consequences; it’s also the possibility that the consequences of this or that outcome will be literally incomparable by any finite halting algorithm, whereas even our desperately-limited brains have decent approximations of algorithms for morally comparing the effect of, say, posting on OB versus washing the dishes.
Going to wash the dishes now.
Tim: You’re right—if you are a reasonably attractive and charismatic person. Otherwise, the question (from both sides) is worse than the dust speck.
(Asking people also puts you in the picture. You must like to spend eternity asking people a silly question, and learning all possible linguistic vocalizations in order to do so. There are many fewer vocalizations than possible languages, and many fewer possible human languages than 3^^^3. You will be spending more time going from one person of the SAME language to another, at 1 femtosecond per journey, than you would spend learning all possible human languages. That would be true even if the people were fully shuffled by language—just 1 femtosecond each for all the times when coincidence gives you two of the same language in a row. 3^^^3 is that big.)
Torture is not the obvious answer, because torture-based suffering and dust-speck-based suffering are not scalar quantities with the same units.
To be able to make a comparison between two quantities, the units must be the same. That’s why we can say that 3 people suffering torture for 49.99 years is worse than 1 person suffering torture for 50 years. Intensity × Duration × Number of People gives us units of PainIntensity-Person-Years, or something like that.
Yet torture-based suffering and dust-speck-based suffering are not measured in the same units. Consequently, we cannot solve this question as a simple math problem. For example, the correct units of torture-based suffering might involve Sanity-Destroying-Pain. There is no reason to believe that we can quantitatively compare Easily-Recoverable-Pain to Sanity-Destroying-Pain; at least, the comparison is not just a math problem.
To be able to do the math, we would have to convert both types of suffering to the same units of disutility. Some folks here seem to think that no matter what the conversion functions are, 3^^^3 is just so big that the converted disutility of 3^^^3 dust specks is greater than the converted disutility of 50 years of torture for one person. But determination of the correct disutility conversion functions is itself a philosophical problem that cannot be waved away, and it’s impossible to evaluate that claim until those conversion functions have at least been hinted at.
One way to get different types of suffering to have the same units would be to represent them as vectors, and find a way to get the magnitude of those vectors.
The torture position seems to do the math by using pain intensity as a scalar. Yet there is no reason to believe that suffering is a scalar quantity, or that the disutility accorded to suffering is a scalar quantity. Even pain intensity is a case where “quantity has a quality all of its own”: as you increase it, the suffering goes through qualitative changes. For example, if just a 10% increase in pain duration/intensity causes Post-Traumatic Stress Disorder, that pain is more than 10% worse, because it has become a qualitatively different type of suffering. The units change.
Suffering may well be better represented as a vector. Other dimensions in the vector might include variables such as chance of Post-Traumatic Stress Disorder (0 in the case of dust specks which are uncomfortable but not traumatic, and approaching 100% in the case of torture), non-recovery chance (0% in the case of dust specks, approaching 100% in the case of torture), recovery time (<1 second in the case of dust specks, approaching infinity in the case of 50 years of torture), insanity, human rights violation, career-destruction, mental-health destruction, life destruction...
Choice of pain intensity only over other variables relevant to suffering is begging the question. We could cherry-pick another dimension out of the vector to get a different result, such as life destruction. LifeDestructionChance(50YearsOfTorture) could be greater than LifeDestructionChance(DustSpeck) * 3^^^3 (I might be committing scope insensitivity saying this, but the point is that the answer isn’t self-evident). Of course, life destruction isn’t the only relevant variable to the calculation of suffering, but neither is pain intensity.
Now, if there is a way to take the magnitude of a suffering vector (another philosophical problem), it’s not at all self-evident that Magnitude( SpeckVector ) * 3^^^3 > Magnitude( 50YearsOfTortureVector), because the SpeckVector has virtually all its dimensions approaching 0 while the TortureVector has many dimensions approaching infinity or their max value (which I think reflects why people think torture is so bad). That would depend on what the dimensions of those vectors are and how the magnitude function works.
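This vector framing can be sketched concretely. The dimensions, the numbers, and the choice of Euclidean magnitude below are all invented for illustration; nothing in the thought experiment fixes them:

```python
import math

# Hypothetical suffering vectors: (pain intensity, PTSD chance,
# non-recovery chance, recovery time in seconds). All values are
# made up purely for illustration.
speck   = (0.001, 0.0,  0.0,  1.0)
torture = (0.9,   0.99, 0.95, math.inf)

def magnitude(v):
    """One possible magnitude function: the Euclidean norm."""
    return math.sqrt(sum(x * x for x in v))

# With even one unbounded component in the torture vector, no finite
# multiple of the speck vector's magnitude ever catches up.
N = 10**12  # stand-in multiplier; 3^^^3 wouldn't change the outcome
assert magnitude(speck) * N < magnitude(torture)
```

Whether any component of real suffering is genuinely unbounded, and whether the Euclidean norm is the right aggregation, are exactly the open philosophical questions at issue here.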
You seem to have gotten hung up on 3^^^3, which is really just a placeholder for “some finite number so large it boggles the mind”. If you accept that all types of pain can be measured on a common disutility scale, then all you need is a non-zero conversion factor, and the repugnant conclusion follows (for some mind-bogglingly large number of specks). I think that if a line of argument that rescues your rebuttal exists, it involves lexicographic preferences.
There is a false choice being offered, because every person in every lifetime is going to experience getting something in their eye. I get a bug flying into my eye regularly whenever I go running (three of them last time!), and it will probably have happened thousands of times to me by the end of my life. It’s pretty much a certainty of human experience (although I suppose it’s statistically possible for some people to go through life without ever getting anything in their eyes).
Is the choice being offered to make all of humanity’s eyes, for all eternity, immune to small inconveniences such as bugs, dust, or eyelashes? Otherwise we really aren’t being offered anything at all.
Although if we factor in consequences, say, being distracted by a dust speck in the eye while driving or doing some other critical activity, then statistically those trillions of dust specks have the potential to cause untold amounts of damage and suffering.
Doesn’t “harm”, to a consequentialist, consist of every circumstance in which things could be better, but aren’t? If a speck in the eye counts, then why not, for example, being insufficiently entertained?
If you accept consequentialism, isn’t it morally right to torture someone to death so long as enough people find it funny?
I’m picking on this comment because it prompted this thought, but really, this is a pervasive problem: consequentialism is a gigantic family of theories, not just one. They are all still wrong, but for any single counterexample, such as “it’s okay to torture people if lots of people would be thereby amused”, there is generally at least one theory or subfamily of theories that have that counterexample covered.
Isn’t it paradoxical to argue against consequentialism based on its consequences?
The reason you can’t torture people is that those members of your population who aren’t as dumb as bricks will realize that the same could happen to them. Such anxiety among the more intelligent members of your society should outweigh the fun experienced by the more easily amused.
I typically argue against consequentialism based on appeals to intuition and its implications, which are only “consequences” in the sense used by consequentialism if you do some fancy equivocating.
Pfft. It is trivially easy to come up with thought experiments where this isn’t the case. You can increase the ratio of bricks-to-brights until doing the arithmetic leads to the result that you should go ahead and torture folks. You can choose folks to torture on the basis of well-publicized, uncommon criteria, so that the vast majority of people rightly expect it won’t happen to them or anyone they care about. You can outright lie to the population, and say that the people you torture are all volunteers (possibly even masochists who are secretly enjoying themselves) contributing to the entertainment of society for altruistic reasons. Heck, after you’ve tortured them for a while, you can probably get them to deliver speeches about how thrilled they are to be making this sacrifice for the common morale, on the promise that you’ll kill them quicker if they make it convincing.
All that having been said, there are consequentialist theories that do not oblige or permit the torture of some people to amuse the others. Among them are things like side-constraints rights-based consequentialisms, certain judicious applications of deferred-hedon/dolor consequentialisms, and negative utilitarianism (depending on how the entertainment of the larger population cashes out in the math).
It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating—assuming facts not in evidence.
We don’t know anything about (1) the side-effects of picking SPECKS (such as car crashes); and definitely don’t know that (2) the torture victim can “acclimate”. (2) in particular seems like cheating in a big way—especially given the statement “without hope or rest”.
There’s nothing rational about posing a hypothetical and then adding in additional facts in your answer. However, that’s a great way to avoid the question presented.
I’ve received minus 2 points (that’s bad I guess?) with no replies, which is very illuminating… I suppose I’m just repeating the above points on lexicographic preferences.
Any answer to the question involves making value choices about the relative harms associated with torture and specks, I can’t see how there’s an “obvious” answer at all, unless one is arrogant enough to assume their value choices are universal and beyond challenge.
Unless you add facts and assumptions not stated, the question compares torture x 50 years to 1 dust speck in an infinite number people’s eyes, one time. Am I missing something? Because it seems it can’t be answered without reference to value choices—which to anyone who doesn’t share those values will naturally appear irrational.
“I’ve received minus 2 points (that’s bad I guess?) with no replies, which is very illuminating… ”
I think this is mainly because your comment seemed uninformed by the relevant background but was presented with a condescending and negative tone. Comments with both these characteristics tend to get downvoted, but if you cut back on one or the other you should get better responses.
“It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating—assuming facts not in evidence.”
http://lesswrong.com/lw/2k/the_least_convenient_possible_world/
“Any answer to the question involves making value choices”
Yes it does.
“compares torture x 50 years to 1 dust speck in an infinite number people’s eyes”
3^^^3 is a (very large) finite number.
“It can’t be answered without reference to value choices—which to anyone who doesn’t share those values will naturally appear irrational.”
Moral anti-realists don’t have to view differences in values as reflecting irrationality.
I’d just like to note that comments informed by the relevant background but condescending and negative are often voted down as well. Though Annoyance seems to have relatively high karma anyway.
I agree. See DS3618 for a crystal-clear example.
I don’t think that case is crystal-clear, could you explain this a bit more?
Looking at DS3618’s comments, he (I estimate gender based on writing style and the demographics of this forum and of the CMU PhD program he claims to have entered) had some good (although obvious) points regarding peer review and Flare. Those comments were upvoted.
The comments that were downvoted seem to have been very negative and low in informed content.
He claimed that calling intelligent design creationism “creationism” was “wrong” because ID is logically separable from young earth creationism and incorporates the idea of ‘irreducible complexity.’ However, arguments from design, including forms of ‘irreducible complexity’ argument, have been creationist standbys for centuries. Rudely chewing someone out for not defining creationism in a particular narrow fashion, the fashion advanced by the Discovery Institute as part of an organized campaign to evade court rulings, does deserve downvoting. Suggesting that the Discovery Institute, including Behe, isn’t a Christian front group is also pretty indefensible given the public info on it (e.g. the “wedge strategy” and numerous similar statements by DI members to Christian audiences revealing it as a two-faced organization).
This comment implicitly demanded that no one note limitations of the brain without first building AGI, and was lacking in content.
DS3618 also claims to have a stratospheric IQ, but makes numerous spelling and grammatical errors. Perhaps he is not a native English speaker, but this does shift probability mass to the hypothesis that he is a troll or sock puppet.
He says that he entered the CMU PhD program without a bachelor’s degree based on industry experience. This is possible, as CMU’s PhD program has no formal admissions requirements according to its documentation. However, given base rates and the context of the claim, it is suspiciously convenient and shifts further probability mass towards the troll hypothesis. I suppose one could go through the CMU Computer Science PhD student directory to find someone without a B.S. and with his stated work background to confirm his identity (only reporting whether there is such a person, not making the anonymous DS3618’s identity public without his consent).
I strongly doubt that person counts as “informed by the relevant background”.
I considered that, which is why I said that the responses would be “better.”
Fair enough, apologies for the tone.
But if answering the question involves making arbitrary value choices I don’t understand how there can possibly be an obvious answer.
There isn’t for agents in general, but most humans will in fact trade off probabilities of big bads (death, torture, etc) against minor harms, and so preferring SPECKS indicates a seeming incoherency of values.
Thanks for the patient explanation.
The obvious answer is that torture is preferable.
If you had to pick for yourself between a 1/3^^^3 chance of 50 years of torture and the dust speck, you would pick the chance of torture.
We actually do this every day: we eat foods that can poison us rather than be hungry, we cross the road rather than stay at home, etc.
Imagine there is a safety improvement to your car that will cost 0.0001 cents but will save you from an event that happens once in 1000 universe-lifetimes. Would you pay for it?
I don’t think it’s very controversial that TORTURE is the right choice if you’re maximizing overall net utility (or in your example, maximizing expected utility). But some of us would still choose SPECKS.
Very-Related Question: Typical homeopathic dilutions are 10^(-60). On average, this would require giving two billion doses per second to six billion people for 4 billion years to deliver a single molecule of the original material to any patient.
Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?
If anyone convinces me of that, I might accept being a Torturer. Otherwise, I assume that the negligibility of the speck, plus people’s resilience, would mean no lasting effects. The disutility would vanish in milliseconds. If they wouldn’t even notice or remember the specks after a while, it would equate to zero disutility.
It’s not that I can’t do the maths. It’s that the evil of the speck seems too diluted to do harm.
Just like homeopathy is too diluted to do good.
That’s not really the point. The “dust speck” just means the mildest possible harm that a person can suffer; if you don’t think a dust speck with no long-term consequences can be harmful, you should mentally substitute a stubbed toe (with no long-term consequences) or the like.
Easily. 3^^^3 = 3^^(3^^3) = 3^^7625597484987, a tower of 3s more than seven trillion layers tall, is so much larger than 10^60 that it is almost certain that many people will receive significant doses of vitamin C. Heck, a tower of just four 3s, 3^3^3^3 = 3^7625597484987, already has about 3.6 trillion digits, and that’s merely 3^^4. If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance (which I believe you imply for the purposes of the question), then a tremendous number of people will be protected from the flu.
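The tower arithmetic can be checked numerically. A sketch in Python, where the recursive `up` function is my own rendering of Knuth’s notation and anything beyond 3^^3 has to be compared through logarithms:

```python
import math

def up(a, b, n):
    """Knuth up-arrow: a (n arrows) b. n=1 is ordinary exponentiation;
    each extra arrow iterates the previous operation b-1 times."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up(a, result, n - 1)
    return result

assert up(3, 3, 1) == 27             # 3^3
assert up(3, 3, 2) == 7625597484987  # 3^^3, a tower of three 3s

# 3^^4 = 3^(3^^3) cannot be materialized, but its digit count is
# about 3^^3 * log10(3), roughly 3.6 trillion, dwarfing 10^60's
# sixty-one digits.
digits_in_3up4 = up(3, 3, 2) * math.log10(3)
assert digits_in_3up4 > 10**12
```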
Not what I said.
Each person will receive vitamin C diluted in the ratio of 10^(-60) (see reference here). The amount is the same for everyone, constant. Strictly one dose per person (as it was one speck per person).
But the number of persons are all people alive in the next 3^^^3 generations.
...which wouldn’t mean it is linear at all. Above a certain dose can be lethal; below, can have no effect.
Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would retard your death in precisely zero seconds?
No. You use energy at some finite rate (I’ll assume 2000 kilocalories/day, dunno how much starvation affects this). A nanogram of bread contains a nonzero amount of energy (~2.5 microcalories). So it increases your life expectancy by a nonzero time (~100 nanoseconds). A similar analysis can be performed for anything down to and including a single molecule.
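The back-of-envelope numbers in this comment can be verified; a sketch assuming roughly 2.5 kcal per gram of bread and a 2000 kcal/day energy budget (both round figures, not from the original):

```python
# Energy in one nanogram of bread, assuming ~2.5 kcal per gram.
bread_kcal_per_gram = 2.5
energy_kcal = bread_kcal_per_gram * 1e-9       # 2.5e-9 kcal
energy_microcalories = energy_kcal * 1e9       # 1 kcal = 1e9 microcal

# Time that energy sustains a body burning 2000 kcal/day.
seconds_gained = energy_kcal / 2000.0 * 86400  # 86400 seconds per day

assert abs(energy_microcalories - 2.5) < 1e-9  # ~2.5 microcalories
assert 1.0e-7 < seconds_gained < 1.2e-7        # ~100 nanoseconds
```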
But each patient receives less than 10^60 molecules—one must assume some probability distribution on the number of molecules if we are to suppose any medication is delivered at all. Assuming the dilutions are performed as prescribed in a typical homeopathic preparation, a minuscule fraction will randomly have significantly more than the expected concentration, but even so at least the logarithm of the fraction will be on an order of magnitude with the logarithm of 10^-60 -- and therefore will still multiply to a tremendous number in 3^^^3 cases.
That said, even if you assume that the distribution is exactly as even as possible—every patient receives either zero or one molecule of vitamin C—there will be a minuscule probability that the effect of that one molecule will be at the tipping point. Truly minuscule—probably on the order of 10^-20 to 10^-25, a few in one Avogadro’s number—but this still corresponds to aiding 1 in 10^80 to 10^85 people, which multiplies to a tremendous number in 3^^^3 cases.
Mathematically, I have to agree with your reply: you either have no molecules or at least one. And then, your calculations hold true. And I’m wrong.
Physiologically, though, my argument is that the “nanoutility” this molecule would add would have such a negligible effect that nothing would change in the person’s life by any practical measure. It will pass completely unnoticed (zero!) for each person in the 3^^^3 generations.
I assume a fuzzy scale of flu, so that no single molecule would turn sure-flu to sure-non-flu. As I assumed with the specks.
Even if you perform the more sophisticated analysis, the probability of the flu should shift slightly—and that slightly will be on the order of 10^-23, as before. And that times 3^^^3...
I doubt anybody’s going to read a comment this far down, but what the heck.
Perhaps going from nothing to a million dust specks isn’t a million times as bad as going from nothing to one dust speck. One thing is certain though: going from nothing to a million dust specks is exactly as bad as going from nothing to one dust speck plus going from one dust speck to two dust specks etc.
If going from nothing to one dust speck isn’t a millionth as bad as nothing to a million dust specks, it has to be made up somewhere else, like going from 999,999 to a million dust specks being more than a millionth as bad.
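The telescoping claim is pure algebra and holds for any badness function whatsoever. A sketch with a made-up concave badness function (an illustration only, not a proposed moral theory):

```python
import math

def badness(n):
    """Hypothetical badness of n dust specks; concave, so early
    specks hurt proportionally more than later ones."""
    return math.sqrt(n)

n = 1_000_000
increments = [badness(k + 1) - badness(k) for k in range(n)]

# The per-speck increments telescope exactly to the total badness.
assert abs(sum(increments) - badness(n)) < 1e-5

# If one increment is below the average badness per speck, some other
# increment must sit at or above the average; the shortfall "has to be
# made up somewhere else", as the comment says.
assert max(increments) >= badness(n) / n
```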
What if the 3^^^3 were also horribly tortured for fifty years? Would going from that to that plus a dust speck change everything? It’s now the worst dust speck you’re adding, right?
Ask yourself this to make the question easier: what would you prefer, getting 3^^^3 dust specks in your eye or being hit with a spiked whip for 50 years?
You must live long enough to feel the 3^^^3 specks in your eye, and each one lasts a fraction of a second. You can feel nothing else but that speck in your eye.
So, it boils down to this question: would you rather be whipped for 50 years, or get specks in your eye for over a googolplex of years?
If I could possibly put a marker on the disutility that a speck of dust in the eye is, and compare that to the negative utility of a year of depression, or being whipped once, or having arms broken, it seems impossible that the 50 years of torture could give a more negative value.
I asked this here.
In the real world the possibility of torture obviously hurts more people than just the person being tortured. By theorizing about the utility of torture you are actually subjecting possibly billions of people to periodic bouts of fear and pain.
Forgive me if this has been covered before. The internet here is flaking out and it makes it hard to search for answers.
What is the correct answer to the following scenario: Is it preferable to have one person be tortured if it gives 3^^^3 people a minuscule amount of pleasure?
The source of this question was me pondering the claim, “Pain is temporary; a good story lasts forever.”
Yes.
Great question, and if it has been covered before on this site, I haven’t seen it. Philosophers have discussed whether or not “sadistic” pleasure from others’ suffering should be included in utilitarian calculations, and in fact this is one of the classic arguments against (some types of) utilitarianism, along with the utility monster and the organ lottery.
One possible answer is that utilitarians should maximize other terminal values besides just pleasure, and that sadistic pleasures like this go against the total of our terminal values, so utilitarians shouldn’t allow these to cancel out torture.
So, I’m very late into this game, and not through all the sequences (where the answer might already be given), but still, I am very interested in your positions (probably nobody answers, but who knows):
Is there a natural number N for which you’d kill one person vs. giving each of N people a single dust speck? (I assume this depends on whether one expects an ever-lasting universe.)
Do you “integrate” utility over time (or “experience-moments”, as per timeless bla), or is it better to just maximize the “final” point, however one got there?
Does breaking up the utility function into several categories really allow dutch-booking, as is indicated in one of the comments? (I hope you understand what I mean with the categories; you’ve a total strict-order for them, with no two identical, elements within categories “add up”, but not even an infinite number of “bad” things in one category can add up to a single one in the next higher one)
If “no” for 3, then: For a (current) human we only have neurons, and a real break-point can probably not be determined; but a re-engineered person could implement such a thing. Is it then preferable?
I expect “yes” for 1, and I have to expect “yes” for 3 (I personally do not see this, but I’m bad at math, and have to trust the comments anyway). If “no” for 3, I still expect “no” for 4, per simplicity-argument, retold many times.
I’m very curious for an answer to question 2. Once Eliezer quoted “the end does not justify the means”, but this sentence is so very much re-interpretable that it’s worthless (even if he said otherwise). But as per updating: why should the order in which information is revealed change the final result? Whatever.
When the answers of these questions are somewhere in the sequences, just ignore this, I will sooner or later get to them.
I don’t think this question (or one discussed in the OP) admit meaningful answers. It seems a pity to just ‘pour cold water over them’ but I don’t know what else to say—whatever ‘moral truths’ there are in the world simply don’t reach as far as such absurd scenarios.
Depends what game you’re playing, surely. If you’re playing ‘Invest For Retirement’ and the utility function measures the size of your retirement fund, then naturally the ‘final’ point is what matters.
On the other hand, if you’re playing ‘Enjoy Your Retirement’ and the utility function measures how much money you have to spend on a monthly basis, then what’s important is the “integrated” utility.
Two points of interest here:
(1) Final utility in ‘Invest for retirement’ equals integrated utility in ‘Enjoy your retirement’ (modulo some faffing around with discount rates).
(2) The game of ‘Enjoy your retirement’ is notable insofar as it’s a game with a guaranteed final utility of zero (or -infinity if you prefer).
I’d gladly get a speck of dust in my eye as many times as I can, and I’m sure those 3^^^3 people would join me, to keep one guy from being tortured for 50 years.
Maybe you will indeed, but should you?
This seems to work nearly as well for any harm less than being tortured for 50 years — say, being tortured for 25 years.
I wouldn’t volunteer for 25 years of torture to save a random person from 50. A relative, maybe.
Suppose some fraction of the 3^^^3 dropped out. How many dust specks would you be willing to take? Two? Ten? A thousand? A million? A billion? That’s half a millimeter in diameter, now, and we’re only at 10^9. How about 10^12? 10^15? 10^18? We’re around half a meter in diameter now, approaching or exceeding the size of a football, and we’ve not even reached 3^^4 - and remember that 3^^^3 is 3^^3^^3 = 3^^7,625,597,484,987.
What, you think that all of the 3^^^3 will go for it? All of them, chipping in to save one person who was getting 50 years of torture? In a universe with 3^^^3 people in it, how many people do you think are being tortured? Our planet has had around 10^11 human beings in history. If we say that only one of those 10^11 people were ever tortured for 50 years in history—or even that there were a one-in-a-thousand chance of it, one in 10^14 - how many people would be tortured for 50 years among the more than 3^^^3 we are positing? And do you think that all 3^^^3 will choose the same one you did?
Would you consider that, perhaps, one dust speck is a bit much to pay to save one part in 3^^^3 of a victim?
When multiple agents coordinate, their decision delivers the whole outcome, not a part of it. Depending on what you decide, everyone who reasons similarly will decide. Thus, you have the absolute control over what outcome to bring about, even if you are only one of a gazillion like-minded voters.
Here, you decide whether to save one person, at the cost of harming 3^^^3 people. This is not equivalent to saving 1/3^^^3 of a person at the cost of harming one person, because the saving of 1/3^^^3 of a person is not something that actually could happen, it is at best utilitarian simplification, which you must make explicit and not confuse for a decision-theoretic construction.
If it were a one-shot deal with no cheaper alternative, I could see agreeing. But that still leaves the other 3^^^3/10^14 victims and this won’t scale to deal with those.
“Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
I think the answer is obvious. How about you?”
Yes, Eliezer, the answer is obvious. The answer is that this is a false dilemma, and that I should go searching for the third alternative, with neither 3^^^3 dust specks nor 50 years of torture. These are not optimal alternatives.
Construct a thought experiment in which every single one of those 3^^^3 is asked whether he would accept a dust speck in the eye to save someone from being tortured, take the answers as a vote. If the majority would deem it personally acceptable, then acceptable it is.
This doesn’t work at all. If you ask each of them to make that decision you are asking to compare their one dust speck, with somebody else’s one instance of torture. Comparing 1 dust speck with torture 3^^^3 times is not even remotely the same as comparing 3^^^3 dust specks with torture.
If you ask me whether 1 is greater than 3 I will say no. If you ask me 5 times I will say no every time. But if you ask me whether 5 is greater than 3 I will say yes.
The only way to make it fair would be to ask them to compare themselves and the other 3^^^3 − 1 people getting dust specks with torture, but I don’t see why asking them should get you a better answer than asking anyone else.
Compare two scenarios: in the first, the vote is on whether every one of the 3^^^3 people are dust-specked or not. In the second, only those who vote in favour are dust-specked, and then only if there’s a majority. But these are kind of the same scenario: what’s at stake in the second scenario is at least half of 3^^^3 dust-specks, which is about the same as 3^^^3 dust-specks. So the question “would you vote in favour of 3^^^3 people, including yourself, being dust-specked?” is the same as “would you be willing to pay one dust-speck in your eye to save a person from 50 years of torture, conditional on about 3^^^3 other people also being willing?”
Let me try and get this straight, you are presenting me with a number of moral dilemmas and asking me what I would do in them.
1) Me and 3^^^^3 − 1 other people all vote on whether we get dust specks in the eye or some other person gets tortured.
I vote for torture. It is astonishingly unlikely that my vote will decide, but if it doesn’t then it doesn’t matter what I vote, so the decision is just the same as if it was all up to me.
2) Me and 3^^^^3 − 1 other people all vote on whether everyone who voted for this option gets a dust speck in the eye or some other person gets tortured.
This is a different dilemma, since I have to weigh up three things instead of two: the chance that my vote will save about 3^^^^3 people from being dust-specked if I vote for torture, the chance that my vote will save one person from being tortured if I vote for dust specks, and the (much higher) chance that my vote will save me and only me from being dust-specked if I vote for torture.
I remember reading somewhere that the chance of my vote being decisive in such a situation is roughly inversely proportional to the square root of the number of voters (please correct me if this is wrong). Assuming this is the case, I still vote for torture, since the term for saving everyone else from dust specks still dwarfs the other two.
3) I have to choose whether I will receive a dust speck or whether someone else will be tortured, but my decision doesn’t matter unless at least half of 3^^^^3 − 1 other people would be willing to choose the dust speck.
Once again the dilemma has changed, this time I have lost my ability to save other people from dust specks and the probability of me successfully saving someone from torture has massively increased. I can safely ignore the case where the majority of others choose torture, since my decision doesn’t matter then. Given that the others choose dust specks, I am not so selfish as to save myself from a dust speck rather than someone else from torture.
You try to make it look like scenarios 2 and 3 are the same, but they are actually very, very different.
The bottom line is that no amount of clever wrangling you do with votes or conditionals can turn 3^^^^3 people into one person. If it could, I would be very worried, since it would imply that the number of people you harm doesn’t matter, only the amount of harm you do. In other words, if I’m offered the choice between one person dying and ten people dying, then it doesn’t matter which I pick.
Assuming a roughly 50-50 split the inverse square-root rule is right. Now my issue is why you incorporate that factor in scenario 2, but not scenario 3. I honestly thought I was just rephrasing the problem, but you seem to see it differently? I should clarify that this isn’t you unconditionally receiving a speck if you’re willing to, but only if half the remainder are also so willing.
The point of voting, for me, is not an attempt to induce scope insensitivity by personalizing the decision, but to incorporate the preferences of the vast majority (3^^^^3 out of 3^^^^3 + 1) of participants about the situation they find themselves in, into your calculation of what to do. The Torture vs. Specks problem in its standard form asks for you to decide on behalf of 3^^^^3 people what should happen to them; voting is a procedure by which they can decide.
[Edit: On second thought, I retract my assertion that scenario 1) and 2) have roughly the same stakes. That in scenario 1) huge numbers of people who prefer not to be dust-specked can get dust-specked, and in scenario 2) no one who prefers not to be dust-specked is dust-specked, makes much more of a difference than a simple doubling of the number of specks.]
By the way, the problem as stated involves 3^^^3, not 3^^^^3, people, but this can’t possibly matter, so never mind.
There are actually two differences between 2 and 3. The first is that in 2 my chance of affecting the torture is negligible, whereas in 3 it is quite high. The second difference is that in 2 I have the power to save huge numbers of others from dust specks, and it is this difference which is important to me, since when I have that power it dwarfs the other factors so much as to be the only deciding factor in my decision. In your ‘rephrasing’ of it you conveniently ignore the fact that I can still do this, so I assumed I no longer could, which made the two scenarios very different.
I think also, as a general principle, any argument of the type you are formulating which does not pay attention to the specific utilities of torture and dust-specks, instead just playing around with who makes the decision, can also be used to justify killing 3^^^^3 people to save one person from being killed in a slightly more painful manner.
How about each of those 3^^^3 is asked whether they would accept a dust speck in the eye to save someone from 1/3^^^3 of 50 years of torture, and everyone’s choice is granted? (i.e. the ones who say they’d accept a dust speck get a dust speck, and the person is tortured for an amount of time proportional to the number of people who refused.)
I’m not quite sure what I’d expect to have happen in that case. That’s harder than the moral question because we have to imagine a world that actually contains 3^^^3 different (i.e. not perfectly decision-theoretically correlated) people, and any kind of projection about that kind of world would pretty much be making stuff up. But as for the moral question of what a person in this situation should say, I’d say the reasoning is about the same — getting a dust speck in your eye is worse than 50/3^^^3 years of torture, so refuse the speck.
(That’s actually an interesting way of looking at it, because we could also put it in terms of each person choosing whether they get specked or they themselves get tortured for 50/3^^^3 years, in which case the choice is really obvious — but if you’re still working with 3^^^3 people, and they all go with the infinitesimal moment of torture, that still adds up to a total 50 years of torture.)
Edit: Actually, for that last scenario, forget 50/3^^^3 years, that’s way less than a Planck interval. So let’s instead multiply it by enough for it to be noticeable to a human mind, and reduce the intensity of the torture by the same factor.
The point of Torture vs. Dust Specks is that our moral intuition dramatically conflicts with strict utilitarianism.
Your thought experiment helps express your moral intuition, but it doesn’t do anything to resolve the conflict.
Although, come to think of it, I think there’s an argument to be made that the majority would answer no. If we interpret 3^^^3 people to mean qualitatively distinct individuals, there’s not enough room in humanspace for all of those people to be human—the vast majority will be nonhumans. It can be argued, at least, that if you pick a random nonhuman individual, that individual will not be altruistic towards humans.
Interesting question. I think a similar real-world situation is when people cut in line.
Suppose there is a line of 100 people, and the line is moving at a rate of 1 person per minute.
Is it ok for a new person to cut to the front of the line, because it only costs each person 1 extra minute, or should the new person stand at the back of the line and endure a full 100 minute wait?
Of course, not everyone in line endures the same wait duration; a person near the front will have a significantly shorter wait than a person near the back. To address that issue one could average the wait times of everyone in line and say that there is an average wait time of 49.5 minutes per person [the total wait satisfies Total(n) = (n−1) + Total(n−1), giving Avg(n) = (n−1)/2].
Is it ok for a second person to also cut to the front of the line? How many people should be allowed to cut to the front, and which people of those who could possibly cut to the front should be allowed to do so?
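The first-order utilitarian bookkeeping here is easy to sketch (a toy calculation, using the numbers from the comment above):

```python
# Line of n people, served one per minute. A person who cuts to the
# front saves himself a full wait, but adds one minute to each of the
# n people already in line -- the same total, spread thin.
n = 100                      # people in line
cutter_saves = n             # minutes saved by cutting vs. joining the back
cost_to_others = 1 * n       # one extra minute for each person in line

print(cutter_saves, cost_to_others)   # 100 100 -> a wash in person-minutes
```

On this naive accounting, cutting is a net wash; it only looks worse once fairness, queue norms, or the incentive for everyone else to cut too are priced in, which is arguably why the first-order calculation feels inadequate.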
This is one of the reasons why utilitarianism makes me cringe. “We can do first-order calculations and come up with a good answer! What could go wrong?”
I would prefer the dust motes, and strongly. Pain trumps inconvenience.
And yet...we accept automobiles, which kill tens of thousands of people per year, to avoid inconvenience. (That is, automobiles in the hands of regular people, not just trained professionals like ambulance drivers.) But it’s hard to calculate the benefits of having a vehicle.
Reducing the national speed limit to 30mph would probably save thousands of lives. I would find it unconscionable to keep the speed limit high if everyone were immortal. At present, such a measure would trade lives for parts of lives, and it’s a matter of math to say which is better...though we could easily rearrange our lives to obviate most travel.
I had to read that twice before I realised you meant “immortal like an elf” rather than “immortal like Jack Harkness and Connor MacLeod”.
Idea 1: dust specks, because on a linear scale (which seems to be always assumed in discussions of utility here) I think 50 years of torture is more than 3^^^3 times worse than a dust speck in one’s eye.
Idea 2: dust specks, because most people arbitrarily place bad things into incomparable categories. The death of your loved one is deemed infinitely worse than being stuck in an airport for an hour. It is incomparable; any number of one-hour waits is less bad than a single loved one dying.
How much would you have to decrease the amount of torture, or increase the number of dust specks, before the dust specks would be worse?
I don’t know. I don’t suppose you claim to know at which point the number of dust specks is small enough that they are preferable to 50 years of torture?
(which is why I think that Idea 2 is a better way to reason about this)
I think it might be interesting to reflect on the possibility that among the 3^^^3 dust speck victims there might be a smaller-but-still-vast number of people being subjected to varying lengths of “constantly-having-dust-thrown-in-their-eyes torture”. Throwing one more dust speck at each of them is, up to permuting the victims, like giving a smaller-but-still-vast number of people 50 years of dust speck torture instead of leaving them alone.
(Don’t know if anyone else has already made this point—I haven’t read all the comments.)
These ethical questions become relevant if we’re implementing a Friendly AI, and they are only of academic interest if I interpret them literally as a question about me.
If it’s a question about me, I’d probably go with the dust specks. A small fraction of those people will have time to get to me, and none of them is likely to bother me over just a dust speck. If I were to advocate the torture, the victim or someone who knows him might find me and try to get revenge. I just gave you a data point about the psychology of one unmodified human, which is relatively useless, so I don’t think that’s the question you really wanted answered.
Perhaps the question is really what a non-buggy omnipotent Friendly AI would do. If it has been constructed to care equally about that absurd number of people, IMO it should choose torture. If it’s not omnipotent, then it has to consider revenge of the victim, so the correct answer depends on the details of how omnipotent it isn’t.
I wonder if some people’s aversion to “just answering the question” as Eliezer notes in the comments many times has to do with the perceived cost of signalling agreement with the premises.
It’s straightforward to me that answering should take the question at face value; it’s a thought experiment, you’re not being asked to commit to a course of action. And going by the question as asked the answer for any utilitarian is “torture”, since even a very small increment of suffering multiplied by a large enough number of people (or an infinite number) will outweigh a great amount of suffering by one person.
Signalling that would be highly problematic for some people because of what might be read into our answer—does Eliezer expect that signalling assent here means signalling assent to other, as-yet-unknown conclusions he’s made about (whatever issue where that bears some resemblance)? Does Eliezer intend to codify the terms of this premise into the basis for a decision theory underlying the cognitive architecture of a putative Friendly AI? Does Eliezer think that the real world, in short, maps to his gedankenexperiment sufficiently well that the terms of this scenario can meaningfully stand in for decisions made in that domain by real actors (human or otherwise)?
For my own part I’d be very, very hesitant to signal any of that. Hence I find it difficult to answer the question as asked. It’s analogous to my discomfort with the Ticking Time Bomb scenario—by a straight reading of the premise you should trade a finite chance of finding and disabling the bomb, thereby saving a million lives, for the act of torturing the person who planted it. The logic is internally-consistent, but it doesn’t map to any real-world situation I can plausibly imagine (where torture is not terribly effective in soliciting confessions, and the scenario of a “ticking time bomb with a single suspect unwilling to talk mere minutes beforehand” has AFAIK never happened as presented, and would be extremely difficult to set up).
I recognize the internal consistency, yet I’m troubled by my uncertainty about what the author thinks I’m signing up for when I reply.
I choose the specks. My utility function u(what happens to person 1, what happens to person 2, …, what happens to person N) doesn’t equal f_1(what happens to person 1) + f_2(what happens to person 2) + … + f_N(what happens to person N) for any choice of f_1, …, f_N, not even allowing them to be different; in particular, u(each of n people gets one speck in their eye) approaches a finite limit as n approaches infinity, and this limit is less negative than u(one person gets tortured for 50 years).
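A minimal sketch of such a non-additive utility function, with a speck term bounded below (the constants L, tau, and U_TORTURE are mine, purely illustrative, not from the comment):

```python
import math

# Speck disutility that approaches a finite limit -L as n grows, so no
# finite number of specks ever reaches the disutility of torture.
L, tau = 100.0, 1e6          # limit and scale of speck disutility (illustrative)
U_TORTURE = -1e6             # disutility of 50 years of torture (illustrative)

def u_specks(n):
    """Utility of n people each getting one dust speck: bounded below by -L."""
    return -L * (1 - math.exp(-n / tau))

print(u_specks(10**30) > U_TORTURE)   # True: bounded below by -100 > -1e6
```

However large n gets, u_specks(n) never drops below −L, so the specks side can stay preferable to torture for every n, exactly as the comment claims.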
I spent quite a while thinking about this one, and here is my “answer”.
My first line of questioning is “can we just multiply and compare the sufferings?” Well, no. Our utility functions are complicated. We don’t even fully know them. We don’t exactly know which are terminal values and which are intermediate values. But it’s not just “maximize total happiness” (with suffering being negative happiness). My utility function also values things like fairness (it may be because I’m a primate, but still, I value it). The “happiness” part of my utility function will be higher for torture, the “fairness” part of it lower. Since I don’t know the exact coefficients of those two parts, I can’t really “shut up and multiply”.
But… well… 3^^^3 is really a lot. I can’t get out that way: even adding correcting terms, even if it’s not totally linear, even taking fairness into account, 3^^^3 is still going to trump 1.
So for any realistic computation I would make of my utility function, it seems that “torture” will score higher than “dust specks”. So I should choose torture? Well, not so fast, for I have ethical rules. What is an ethical rule? It’s an internal law (somehow a cached thought, from my own computation or from outside) that says “don’t ever do that”. It includes “do not torture!”; it includes “nothing can ever justify torturing someone for 50 years”. What are those rules for? They are there to protect me from making mistakes, because I can’t fully trust myself. I have biases. I don’t have full knowledge. I have a limited amount of time to make decisions, and I only run at 100Hz. So I need safeguards. I need rules that I’ll follow even when my computation tells me I shouldn’t. Those rules can be overridden, but only by something with almost absolute certitude, and of the same (or higher) level. No amount of dust specks can trigger an override of the “no torture” rule. I know my history well enough to know that when you allow yourself to torture, because you’re “sure” that if you don’t, something worse will happen, you end up becoming the worse thing yourself. I have high ideals and the will to change the world for the better, and therefore I need rules to prevent me from becoming Stalin or the Holy Inquisition. And this is typically such a case. 3^^^3 persons will receive dust specks? Well, too bad. Sure, it’ll be a less optimal outcome than allowing just one person to be tortured. But I don’t trust myself to sentence that person to torture. So I’ll choose dust specks, for me and everyone.
If you allow me to argue from fictional evidence, well, that reminds me of the end of Asimov’s Robot cycle (Robots and Empire, mostly). Warning: spoilers coming. If you didn’t read it, go read it, and skip the rest of this paragraph ;) So… when the two robots, Daneel and Giskard, realize the limitations of the First Law: « A robot may not injure a human being or, through inaction, allow a human being to come to harm. », and try to craft the Zeroth Law: « A robot may not harm humanity, or, by inaction, allow humanity to come to harm. », they end up facing a very difficult problem, one they’ll need psychohistory to solve, and even then only partially. It’s relatively easy to know that a human being is in danger or suffering, and how to help them. It’s much, much harder to know that humanity is in danger and how to help it. That’s a deep reason behind ethical rules: torturing someone is just plain wrong. I may think it’s good in a given situation, because it’ll prevent a terror attack, or help win the war against that horrible Enemy, or deter crime, or save 3^^^3 persons from a dust speck. But I just don’t trust myself enough to go as far as torturing someone because I computed it would do good overall.
And the last important point on the issue is social rules. There is, in 21st-century Western societies at least, a strong taboo on torture. That taboo is a shield. It means that when a president of the USA uses torture, he loses the election (of course, it’s much more complicated, but I think it did play a role). It makes using torture a very, very costly strategy. We have the same with political violence. When the cops attacked an anti-war protest at the Charonne metro station on Feb 8, 1962, killing 9 demonstrators including a 16-year-old boy, that was the end of the Algerian War. Of course, it wasn’t just that. De Gaulle was already trying to stop the war; it was lost. But the uproar (nearly half a million people attended the victims’ burial) was so great that the political cost of still supporting the war became much higher, and the end of the war was hastened.
I won’t take the responsibility of weakening those taboos (against torture, against political violence, …) by breaking them myself. The consequences for society, and for further people using more torture later on, are too scary.
So, to conclude, I’ll choose dust specks. Not because my utility function scores higher on dust specks, but because I can’t trust myself enough to wield something as horrible as torture (I have ethical rules, and I’ll follow them even when my computations tell me otherwise, for they are the only safeguard I know against becoming Stalin), and because I value the societal taboo against torture far too much to take the responsibility of lowering it.
Now… I have a feeling of discontent about reaching that conclusion, because it coincides with my initial gut-level reaction to the post. It somehow feels like I wrote the bottom line and then the rationalization. But… I did my best; I did overcome the first “excuse” (non-linearity and valuing fairness) my mind gave me, and I don’t find flaws in the other two. And, well, reversed stupidity is not intelligence. Reaching the same conclusion I had intuitively doesn’t always mean it’s a wrong conclusion.
Let me attempt to shut up and multiply.
Let’s make the assumption that a single second of torture is equivalent to 1 billion dust specks to the eye. Since that many dust specks is enough to sandblast your eye, this seems a reasonable approximation.
This means that 50 years of this torture is equivalent to giving 1 single person (50 × 365.25 × 24 × 60 × 60 × 1,000,000,000) dust specks to the eye.
According to Google’s calculator,
(50 × 365.25 × 24 × 60 × 60 × 1,000,000,000)/(3^39) = 0.389354356
(50 × 365.25 × 24 × 60 × 60 × 1,000,000,000)/(3^38) = 1.16806307
Ergo, if someone offers you the choice of 50 years of Torture, or 3^^3 (= 3^27 ≈ 7.6 × 10^12) people getting Specks, pick Specks.
But if the choice is 50 years of Torture, or 3^50 people getting Specks, pick Torture.
This appears to be a fair attempt to shut up and multiply.
However, 3^^^3 is incomprehensibly bigger than any of that.
You could turn every atom in the observable universe into a speck of dust. At Wikipedia’s almost 10^80 atoms, that is still not enough dust. http://en.wikipedia.org/wiki/Observable_universe
You could turn every Planck volume in the observable universe into a speck of dust. At Answerbag’s 2.5 × 10^184 cubic Planck lengths, that’s still not enough dust. http://www.answerbag.com/q_view/33135
At this point, I imagined another universe made of 10^80 computronium atoms, running universes like ours as simulations on individual atoms. That means 10^80 × 2.5 × 10^184 cubic Planck lengths of dust. But that’s still not enough dust: 2.5 × 10^264 specks of dust is still WAY less than 3^^^3.
At this point, I considered checking whether I could get enough dust specks if I literally converted everything in all Everett branches since the big bang into dust, but my math abilities fail me. I’ll try coming back to this later.
Edit: My multiplication symbols were getting turned into Italics. Should be fixed now.
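The arithmetic in this comment checks out; a few lines of Python reproduce it (the 1-second ≈ 10^9-specks equivalence is the commenter’s assumption):

```python
import math

seconds = 50 * 31_557_600            # 50 Julian years (365.25 days) in seconds
torture_specks = seconds * 10**9     # torture as ~1.578e18 speck-equivalents

print(torture_specks / 3**38)   # ~1.168: torture outweighs 3^38 specks, pick Specks
print(torture_specks / 3**39)   # ~0.389: 3^39 specks outweigh torture, pick Torture

# But 3^^^3 dwarfs any such count. Even 3^^4 = 3^(3^27), only the fourth
# rung of a tower 7,625,597,484,987 rungs tall, already has trillions of
# digits, versus 265 digits for 2.5e264.
digits_of_3_tetra_4 = int(3**27 * math.log10(3)) + 1
print(digits_of_3_tetra_4 > 3 * 10**12)   # True
```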
I tentatively like to measure human experience with logarithms and exponentials. Our hearing is logarithmic in loudness, hence the unit dB. Human experiences are rarely linear, thus it is almost never true that f(x·a) = f(x)·a.
In the above hypothetical, we can imagine the dust specks and the torture. If we stipulate that NO dust speck ever does anything other than cause mild annoyance (none ever enters the eye of a driver who blinks at an inopportune time and crashes), then I would propose we can say: awfulness(pain) = k^pain.
A dust speck causes approximately Dust = epsilon Dols (a unit of pain; think the opposite of hedons), while intense, effective torture causes possibly several kiloDols per second. Now it is simply a matter of saying Torture = W kDol/s × 50 years for some reasonable W, and lastly comparing k^Dust × 3^^^3 vs. k^Torture.
My utility function has non-zero terms for preferences of other people. If I asked each one of the 3^^^3 people whether they would prefer a dust speck if it would save someone a horrible fifty-year torture, they (my simulation of them) would say YES in 20*3^^^3-feet letters.
If I asked each of a million people if they would give up a dollar’s worth of value if it would give an arbitrarily selected person ten thousand dollars’ worth, and they each said yes, would it follow that destroying a million dollar’s worth of value in exchange for ten thousand dollars’ worth was a good idea?
If, additionally, my utility function had non-zero terms for the preferences of other people, would it follow then?
It wouldn’t follow that it is a good idea, or an efficient one. But it would follow that it is the preferred idea, as calculated by my utility function, which has non-zero terms for the preferences of other people.
Fortunately, my simulation of other people doesn’t suddenly wish to help an arbitrary person by donating a dollar with 99% transaction cost.
Hm. As with Maelin’s comment above, I seem to agree with every part of this comment, but I don’t understand where it’s going. Perhaps I missed your original point altogether.
My point was that the “SPECKS!!” answer to the original problem, which is intuitively obvious to (I think) most people here, is not necessarily wrong. It can directly follow from expected utility maximization, if the utility function values the choice of people, even if the choice is “economically” suboptimal.
A substantial part of talking about utility functions is to assert that we are trying to maximize something about utility (total, average, or whatnot). It seems very strange to say that we can maximize utility by being inefficient in our conversion of other resources into utility. If your goal is to avoid certain “efficient” conversions for other reasons, then it doesn’t make a lot of sense to say that you are really trying to implement a utility function.
In other words, Walzer’s Spheres of Justice concept, which states that some trade-offs are morally impermissible, is not really implementable in a utility function. To the extent that he (or I) might be modeled by a utility function, there are inevitably going to be errors in what the function predicts I would want or very strange discontinuities in the function.
But I am trying to maximize the total utility, just a different one.
Ok, let me put it this way. I will drop the terms for other people’s preferences from my utility function. It is now entirely self-centered. But it still values the good feeling I get if I’m allowed to participate in saving someone from fifty years of torture. The value of this feeling is much more than the minuscule negative utility of a dust speck. Now, assume some reasonable percentage of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!
Now an objection can be stated that by the conditions of the problem, I cannot change the utilities of the 3^^^3 people. They are given and are equal to a minuscule negative value corresponding to the small speck of dust. Evil forces give me the sadistic choice and don’t allow me to share the good news with everyone. Ok. But I can still imagine what the people would have preferred if given a choice. So I add a term for their preference to my utility function. I’m behaving like a representative of the people in a government. Or like a Friendly AI trying to implement their CEV.
My arguments have nothing to do with Walzer’s Spheres of Justice concept, AFAICT.
The point of picking a number the size of 3^^^3 is that it is so large that this statement is false. Even if 99% are like you, I can keep adding ^ and falsify the statement. If utility is additive at all, torture is the better choice.
My reference to Walzer was simply to note that many interesting moral theories exist that do not accept that utility is additive. I don’t accept that utility is additive.
Why would it ever be false, no matter how large the number?
Let S = the negated disutility of a speck, a small positive number. Let F = the utility of the good feeling of protecting someone from torture. Let P = the fraction of people who are like me (for whom F is positive), 0 < P ≤ 1. Then the total utility for N people, no matter what N, is N(P·F − S), which is > 0 as long as P·F > S.
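gRR’s algebra is easy to check mechanically (a direct transcription of the formula above; the sample values for P, F, and S are made up for illustration):

```python
def total_utility(n, p, f, s):
    """N * (P*F - S): n people, a fraction p of whom feel good-feeling f,
    while everyone pays speck disutility s."""
    return n * (p * f - s)

# Positive for ANY n, however large, whenever p*f > s:
print(total_utility(10**100, 0.01, 1.0, 0.001) > 0)   # True
# ...and negative for any n when p*f < s:
print(total_utility(10**100, 0.001, 1.0, 0.01) < 0)   # True
```

The sign of the total depends only on whether P·F exceeds S, not on N, which is exactly why the claim is insensitive to how many ^ symbols are added.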
Well, we can agree that utility is complicated. I think it’s possible to keep it additive by shifting complexities to the details of its calculation.
This knowledge among the participants is an addition to the thought experiment. The original question:
You are asking:
Notice how your formulation has 3^^^3 in both options, while the original question does not.
Yes, I stated and answered this exact objection two comments ago.
I have come to believe that—like a metaphorical Groundhog Day—every conversation on this topic is the same lines from the same play, with different actors.
This is the part of the play where I repeat more forcefully that you are fighting the hypo, but don’t seem to be realizing that you are fighting the hypo.
In the end, the lesson of the problem is not about the badness of torture or what things count as positive utility, but about learning what commitments you make with various assertions about the way moral decisions should be made.
I don’t realize it either; I’m not sure that it’s true. Forgive me if I’m missing something obvious, but:
gRR wants to include the preferences of the people getting dust-specked in his utility function.
But as you point out, he can’t; the hypothetical doesn’t allow it.
So instead, he includes his extrapolation of what their preferences would be if they were informed, and attempts to act on their behalf.
You can argue that that’s a silly way to construct a utility function (you seem to be heading that way in your third paragraph), but that’s a different objection.
If you want to answer a question that isn’t asked by the hypothetical, you are fighting the hypo. That’s basically the paradigmatic example of “fighting the hypo.”
I think gRR has the right answer to the question he is asking. But it is a different question than Eliezer was asking, and teaches different lessons. To the extent that gRR thinks he has rebutted the lessons from Eliezer’s question, he’s incorrect.
I’m not sure why you think I’m asking a different question. Do you mean to say that in Eliezer’s original problem all of the utilities are fixed, including mine? But then the question appears entirely without content:
“Here are two numbers, this one is bigger than that one, your task is to always choose the biggest number. Now which number do you choose?”
Besides, if this is indeed what Eliezer meant, then his choice of “torture” for one of the numbers is inconsistent. Torture always has utility implications for other people, not just the person being tortured. I hypothesize that this is what makes it different (non-additive, non-commeasurable, etc) for some moral philosophers.
As fubarobfusco pointed out, your argument includes the implication that discovering or publicizing unpleasant truths can be morally wrong (because the participants were ignorant in the original formulation). It’s not obvious to me that any moral theory is committed to that position.
And without that moral conclusion, I think Eliezer is correct that a total utilitarian is committed to believing that choosing TORTURE over SPECKS maximizes total utility. The repugnant conclusion really is that repugnant. All of that was not an obvious result to me.
Any utility function that does not give an explicit overwhelmingly positive value to truth, and does give an explicit positive value to “pleasure” would obviously include the implication that discovering or publicizing unpleasant truths can be morally wrong. I don’t see why it is relevant.
If all the utilities are specified by the problem text completely, then TORTURE maximizes the total utility by definition. There’s nothing to be committed about. But in this case, “torture” is just a label. It cannot refer to a real torture, because a real torture would produce different utility changes for people.
It sounds to me as if you’re asserting that the ignorance of the 3^^^3 people to the fact that their specklessness depends on torture, makes a positive moral difference in the matter.
That doesn’t seem unreasonable. That knowledge is probably worse than the speck.
Sure, it does have the odd implication that discovering or publicizing unpleasant truths can be morally wrong, though.
That’s a really good point. Does the “repugnant conclusion” problem for total utilitarians imply that they think informing others of bad news can be morally wrong in ordinary circumstances? Or just the product of a poor definition of utility?
I take it as fairly uncontroversial that a benevolent lie when no changes in decision by the listener are possible is morally acceptable. That is, falsely saying “Your son survived the plane crash” to the father who is literally moments from dying seems morally acceptable because the father isn’t going to decide anything differently based on that statement. But that’s an unusual circumstance, so I don’t think it should trouble us.
Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths-conundrum. Right?
Agreed. Lying to others to manipulate them deprives them of the ability to make their own choices — which is part of complex human values — but in this case the father doesn’t have any relevant choice to deprive him of.
Not that I can tell.
I suppose another way of looking at this is a collective-action or extrapolated-volition problem. Each individual in the SPECKS case might prefer a momentary dust speck over the knowledge that their momentary comfort implied someone else’s 50 years of torture. However, a consequentialist agent choosing TORTURE over SPECKS is doing so in the belief that SPECKS is actually worse. Can that agent be implementing the extrapolated volition of the individuals?
Well, OK, sure, but… can’t anything follow from expected utility maximization, the way you’re approaching it? For all (X, Y), if someone chooses X over Y, that can directly follow from expected utility maximization, if the utility function values X more than Y.
If that means the choice of X over Y is not necessarily wrong, OK, but it seems therefore to follow that no choice is necessarily wrong.
I suspect I’m still missing your point.
Given: a paradoxical (to everybody except some moral philosophers) answer “TORTURE” appears to follow from expected utility maximization.
Possibility 1: the theory is right, everybody is wrong.
But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human, we wouldn’t want to lose them, even if sometimes they give “inefficient” answer from the point of view of simplest greedy utility function.
These biases are probably reflexively consistent—even if we knew more, we would still wish to have them. At least, I can hypothesize that they are so, until proven otherwise. Simply showing me the inefficiency doesn’t make me wish not to have the bias. I value efficiency, but I value my humanity more.
Possibility 2: the theory (expected utility maximization) is wrong.
But the theory is rather nice and elegant, I wouldn’t wish to throw it away. So, maybe there’s another way to fix the paradox? Maybe, something wrong with the problem definition? And lo and behold—yes, there is.
Possibility 3: the problem is wrong
As the problem is stated, the preferences of 3^^^3 people are not taken into account. It is assumed that the people don’t know and will never know about the situation—because their total utility change regarding the whole is either nothing or a single small negative value.
If people were aware of the situation, their utility changes would be different—a large negative value from knowing about the tortured person’s plight and being forcibly forbidden to help, or a positive value from knowing they helped. Well, there would also be a negative value from moral philosophers who would know and worry about inefficiency, but I think it would be a relatively small value, after all.
Unfortunately, in the context of the problem, the people are unaware. The choice for the whole of humanity is given to me alone. What should I do? Should I play dictator and make a choice that would be repudiated by everyone, if only they knew? This seems wrong, somehow. Oh! I can simulate them, ask what they would prefer, and give their preference a positive term within my own utility function. I would be the representative of the people in a government, or an AI trying to implement their CEV.
Result: SPECKS!! Hurray! :)
OK. I think I understand you now. Thanks for clarifying.
I feel like this is misinterpreting gRR’s comment. gRR is not claiming that nonutilitarian choices are preferable because the utility function has non-zero terms for preferences of other people. It is a necessary condition, but not a sufficient one.
My model of other people says that a significantly smaller percentage of people would accept losing a dollar in order to grant one person ten grand, than would accept a dust speck in order to save one person 50 years of torture.
As does mine.
That’s consistent with my understanding of their claim as well.
Can you expand further on why you feel like this?
Sure, although updating upon reading your response, I now suspect that I have misinterpreted your comment. But I’ll explain how I saw things when I first commented.
Basically it looked like you were perceiving gRR’s argument as a specific instance of the following general argument:
You were then trying to reveal the fault in gRR's general argument by presenting a different example ($1m → $10k) and asking if the same argument would still hold there (which you presume it wouldn't). Then you suggested throwing in another premise, (1b) I have nonzero terms for others' preferences, presumably replacing (2a) by (2b) which adds the requirement of (1b), and asking if that would make the argument hold.
But gRR was not asserting that general argument—in particular, not premise (2a)/(2b). So it seemed like you were trying to tear down an argument that gRR was not constructing.
Conversely, if you asked somebody if they'd be willing to be tortured for 50 years in order to save 3^^^3 people from each getting a dust speck in the eye, they'd likely say NO FREAKIN' WAY!!!.
BTW, welcome to Less Wrong—you can introduce yourself in the welcome thread.
The mathematical object used for moral calculations need not be homologous to the real numbers.
My way of seeing it is that a barely noticeable speck of dust will be strictly smaller than torture no matter how many instances of the speck occur. That's just how my 'moral numbers' operate. The speck of dust equals A>0, the torture equals B>0, and A*N<B holds for any finite N. I forbid infinities (the number of distinct beings is finite).
If you think that’s necessarily irrational you have a lot of mathematics to learn. You can start with ordinal numbers.
edit: note, I am ignoring consequences of the specks in the eyes, as I think they are not the point of the exercise and only obfuscate everything; plus one has to make assumptions, like specks ending up in the eyes of people who are driving.
If I understand correctly, then I agree with you. But this viewpoint has consequences.
The linked post still assumes that discomfort space is one-dimensional, which it need not be. The decision outcomes do need to behave like comparison does (if a>b and b>c, it must follow that a>c), but that's about it.
Bottom line is, we can't very well reflect on how we think about this issue, so it's hard to come up with a model that works the same as your head, and which you can reflect on, calculate with a computer, etc.
By the way, consider a being made of 10^30 parts with 10^30 states each. That's quite a big being, way bigger than a human. The number of distinct states of such a being is (10^30)^(10^30) = 10^(30*10^30), which is unimaginably smaller than 3^^^3. You can pick beings that are to humans as humans are to amoebas, repeat that many times over, and still be waaay short of 3^^^3. The guys who chose torture: congrats on also having a demonstrable reasoning failure when reasoning about huge numbers.
edit: embarrassing math glitch of my own. It is difficult to reason about huge numbers and easy to miss something, such as number of ‘people’ exceeding number of possible human mind states by unimaginably far.
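A quick sanity check of the magnitude claim above, working only with logarithms (the numbers themselves cannot be held in memory); the figures follow the comment's assumptions:

```python
import math

# log10 of the being's state count: (10^30)^(10^30) = 10^(30 * 10^30)
log10_states = 30 * 10**30

# 3^^3 = 7,625,597,484,987, so log10(3^^4) = 3^^3 * log10(3) ~ 3.6e12.
# 3^^4 is therefore still far smaller than the state count...
log10_3up4 = 7625597484987 * math.log10(3)
print(log10_3up4 < log10_states)   # True

# ...but one more level of the tower wins decisively:
# log10(log10(3^^5)) = log10(3^^4 * log10(3)) ~ 3.6e12,
# while log10(log10(states)) ~ 31.5.
loglog_3up5 = log10_3up4 + math.log10(math.log10(3))
print(loglog_3up5 > math.log10(log10_states))   # True
# And 3^^^3 is a tower 7,625,597,484,987 levels tall, not 5.
```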
Choosing TORTURE is making a decision to condemn someone to fifty years of torture, while knowing that 3^^^3 people would not want you to do so, would beg you not to, would react with horror and revulsion if/when they knew you did it. And you must do it for the sake of some global principle or something. I'd say it puts one at least into the Well-intentioned Extremist / Knight Templar category, if not outright villain.
If an AI had made a choice like that, against known wishes of practically everyone, I’d say it was rather unfriendly.
ADDED: Detailed
People who chose torture: if the question were instead framed as the following, would you still choose torture?
“Assuming you know your lifespan will be at least 3^^^3 days, would you choose to experience 50 years worth of torture, inflicted a day at a time at intervals spread evenly across your life span starting tomorrow, or one dust speck a day for the next 3^^^3 days of your life?”
Clever, but not, I think, very illuminating -- 3^^^3 is just as fantastically, intuition-breakingly huge as it ever was, and using the word “tomorrow” adds a nasty hyperbolic discounting exploit on top of that. All the basic logic of the original still seems to apply, and so does the conclusion: if a dust speck is in any way commensurate with torture (a condition assumed by the OP, but denied by enough objections that I think it’s worth pointing out explicitly), pick Torture, otherwise pick Specks.
One of the frustrating things about the OP is that most of the objections to it are based on more or less clever intuition pumps, while the post itself is essentially making a utilitarian case for ignoring your intuitions. Tends to lead to a lot of people talking past each other.
I've heard this rephrasing before, but it means less than you might think. Human instinct tells us to postpone the bad as much as possible. Put aside the dust-speck issue for the moment: let's compare torture to torture. I'd be tempted to choose 1000 years of torture over a single year of torture, if the 1000 years were a few million years in the future but the single year had to start now.
Does this fact mean I need concede 1000 years of torture are less bad than a single year? Surely not. It just illustrates human hyperbolic discounting.
I would almost undoubtedly choose a dust speck a day for the rest of my life. So would most people.
The question remains whether that would be the right choice… and, if so, how to capture the principles underlying that choice in a generalizable way.
For example, in terms of human intuition, it’s clear that the difference between suffering for a day and suffering for five years plus one day is not the same as the difference between suffering for fifty years and suffering for fifty-five years, nor between zero days and five years. The numbers matter.
But it’s not clear to me how to project the principles underlying that intuition onto numbers that my intuition chokes on.
Could it be that the 50 years' worth of torture would also amount to more than a dust speck of daily discomfort, caused by having been psychologically traumatized by the torture, for the remaining 3^^^3 days?
What if the 50 years of torture come at the end of the lifespan?
I still would rather just take the dust speck now and then, though. Nothing forbids me from having a function more nonlinear than 3^^^^[n] 3; as a messily wired neural network, I can easily implement imprecise algebra on numbers that are far beyond any up-arrow notation, or even numbers x, y, z… such that any finite multiple of x is < y, any finite multiple of y is < z, and so on. Infinities are not hard to implement at all. Consider comparisons on arrays decided first by a[1], with later elements consulted only on ties. I'm using strings when I need that property in software, so that I can always make some value that will have precedence.
edit: Note that one could think of the comparison between real values in the above example as a comparison between a[1]*bignumber + a[2] and b[1]*bignumber + b[2], which may seem sensible—and then learn of the up-arrows, get mind-boggled, and reason that the up-arrows in a[2] will be larger than bignumber. But they never change the outcome of the comparison under the actual logic, where a[1] always matters more than a[2].
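The array/string trick described here can be sketched concretely. Python tuples already compare lexicographically, so the first element gets absolute precedence; the pairs below are illustrative stand-ins, not anyone's actual proposal:

```python
# Lexicographic "moral numbers": disutility is a pair
# (torture_units, speck_units). The first component has absolute
# precedence, so no number of specks ever adds up to one torture.
def worse(a, b):
    """True if outcome a is strictly worse (more disutility) than b."""
    return a > b  # Python compares tuples element-by-element, left to right

torture = (1, 0)               # one person tortured, no specks
specks = (0, 3**27)            # stand-in for an unimaginably large speck count

print(worse(specks, torture))  # False: specks never outweigh the torture
print(worse((0, 2), (0, 1)))   # True: but more specks are still worse than fewer
```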
Sure, if I factor in the knock-on effects of 50 years of torture (or otherwise ignore the original thought experiment and substitute my own) I might come to different results.
Leaving that aside, though, I agree that the nature of my utility function in suffering is absolutely relevant here, and it’s entirely possible for that function to be such that BIGNUMBER x SMALLSUFFERING is worth less than SMALLNUMBER x BIGSUFFERING even if BIGNUMBER >>>>>> SMALLNUMBER.
The key word here is possible though. I don’t really know that it is.
Common sense tells me the torture is worse. Common sense is what tells me the earth is flat. Mathematics tells me the dust specks scenario is worse. I trust mathematics and will damn one person to torture.
This "moral dilemma" only has force if you accept strict Bentham-style utilitarianism, which treats all benefits and harms as points on a one-dimensional scale and cares about nothing except the net total of benefits and harms. That was the state of the art of moral philosophy in the year 1800, but it's 2012 now.
There are published moral philosophies which handle the speck/torture scenario without undue problems. For example if you accepted Rawls-style, risk-averse choice from a position where you are unaware whether you will be one of the speck-victims or the torture victim, you would immediately choose the specks. Choosing the specks maximises the welfare of the least well off (they are subject to a speck, not torture) and, if you don’t know which role you will play, eliminates the risk you might be the torture victim.
(Bentham-style utility calculations are completely risk-neutral and care only about expected return on investment. However nothing about the universe I’m aware of requires you to be this way, as opposed to being risk-averse).
Or for that matter if you held a modified version of utilitarianism that subscribed to some notion of “justice” or “what people deserve”, and cared about how utility was distributed between persons instead of being solely concerned with the strict mathematical sum of all utility and disutility, you could just say that you don’t care how many dust specks you pile up, the degree of unfairness in a distribution where 3^^^3 people get out of a dust speck and one person gets tortured makes the torture scenario a less preferable distribution.
I know Eliezer’s on record as advising people not to read philosophy, but I think this is a case where that advice is misguided.
Rawls’s Wager: the least well-off person lives in a different part of the multiverse than we do, so we should spend all our resources researching trans-multiverse travel in a hopeless attempt to rescue that person. Nobody else matters anyway.
If this is a problem for Rawls, then Bentham has exactly the same problem given that you can hypothesise the existence of a gizmo that creates 3^^^3 units of positive utility which is hidden in a different part of the multiverse. Or for that matter a gizmo which will inflict 3^^^3 dust specks on the eyes of the multiverse if we don’t find it and stop it. Tell me that you think that’s an unlikely hypothesis and I’ll just raise the relevant utility or disutility to the power of 3^^^3 again as often as it takes to overcome the degree of improbability you place on the hypothesis.
However I think it takes a mischievous reading of Rawls to make this a problem. Given that the risk of the trans-multiverse travel project being hopeless (as you stipulate) is substantial and these hypothetical choosers are meant to be risk-averse, not altruistic, I think you could consistently argue that the genuinely risk-averse choice is not to pursue the project since they don’t know this worse-off person exists nor that they could do anything about it if that person did exist.
That said, diachronous (cross-time) moral obligations are a very deep philosophical problem. Given that the number of potential future people is unboundedly large, and those people are at least potentially very badly off, if you try to use moral philosophies developed to handle current-time problems and apply them to far-future diachronous problems it’s very hard to avoid the conclusion that we should dedicate 100% of the world’s surplus resources and all our free time to doing all sorts of strange and potentially contradictory things to benefit far-future people or protect them from possible harms.
This isn’t a problem that Bentham’s hedonistic utilitarianism, nor Eliezer’s gloss on it, handles any more satisfactorily than any other theory as far as I can tell.
The dust speck is a slight irritation. Hearing about someone being tortured is a bigger irritation. Also, pain depends greatly on concentration: something that hurts "twice as much" is actually much worse; let's say it is a hundred times worse. Of course this levels off at some point (it is a curve), but in this case that is no problem, as we can say the torture is very close to the physical maximum of pain and the specks are very close to the physical minimum. The difference between the speck and the torture is immense: a factor of about 1.5 million in duration and perhaps 2 million in intensity, compounding into some astronomically huge number. Even if this does not add up to our number of specks, one can see that one can define the parameters to make either side the better choice. In the end it is just a moral question.
At first, I picked the dust specks as being the preferable answer, and it seemed obvious. What eventually turned me around was when I considered the opposite situation—with GOOD things happening, rather than BAD things. Would I prefer that one person experience 50 years of the most happiness realistic in today’s world, or that 3^^^3 people experience the least good, good thing?
The question has been posed.
Why do you think that there has to be a symmetry between positive and negative utility?
I was very surprised to find that a supporter of the Complexity of Value hypothesis and the author who warns against simple utility functions advocates torture using simple pseudo-scientific utility calculus.
My utility function has constraints that prevent me from doing awful things to people, unless it would prevent equally awful things done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely-shared preferences are presumably valid.
In fact, the mental discomfort caused by people who heard of the torture would swamp the disutility from the dust specks. Which brings us to an interesting question—is morality carried by events or by information about events? If nobody else knew of my choice, would that make it better?
For a utilitarian, the answer is clearly that the information about morally significant events is what matters. I imagine so-called friendly AI bots built on utilitarian principles doing lots of awful things in secret to achieve their ends.
Also, I’m interested to hear how many torturers would change their mind if we kill the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?
There’s something really odd about characterizing “torture is preferable to this utterly unrealizable thing” as “advocating torture.”
It’s not obviously wrong… I mean, someone who wanted to advocate torture could start out from that kind of position, and then once they’d brought their audience along swap it out for simply “torture is preferable to alternatives”, using the same kind of rhetorical techniques you use here… but it doesn’t seem especially justified in this case. Mostly, it seems like you want to argue that torture is bad whether or not anyone disagrees with you.
Anyway, to answer your question: to a total utilitarian, what matters is total utility-change. That includes knock-on effects, including mental discomfort due to hearing about the torture, and the way torturing increases the likelihood of future torture of others, and all kinds of other stuff. So transmitting information about events is itself an event with moral consequences, to be evaluated by its consequences. It’s possible that keeping the torture a secret would have net positive utility; it’s possible it would have net negative utility.
All of which is why the original thought experiment explicitly left the knock-on effects out, although many people are unwilling or unable to follow the rules of that thought experiment and end up discussing more real-world plausible variants of it instead (as you do here).
Well, in some bizarre sense that’s true. I mean, if I’m being tortured right now, but nobody has any information from which the fact of that torture can be deduced (not even me) a utilitarian presumably concludes that this is not an event of moral significance. (It’s decidedly unclear in what sense it’s an event at all.)
Sure, that seems likely.
I endorse killing someone over allowing a greater amount of bad stuff to happen, if those are my choices. Does that answer your question? (I also reject your implication that killing someone is necessarily worse than torturing them for 50 years, incidentally. Sometimes it is, sometimes it isn’t. Given that choice, I would prefer to die… and in many scenarios I endorse that choice.)
You know, in natural language “x is better than y” often has the connotation “x is good”, and people go at lengths to avoid such wordings if they don’t want that connotation. For example, “‘light’ cigarettes are no safer than regular ones” is logically equivalent to “regular cigarettes are at least as safe as ‘light’ ones”, but I can’t imagine an anti-smoking campaign saying the latter.
Fair enough. For maximal precision I suppose I ought to have said “I reject your characterization of...” rather than “There’s something really odd about characterizing...,” but I felt some polite indirection was called for.
Well, assuming the torture is artificially bounded to absolute impactlessness, then yes, it is irrelevant (in fact, it arguably doesn’t even exist). However, a good rationalist utilitarian will retroactively consider future effects of the torture, supposing it is not so bounded, and once the fact of the torture can then be deduced, it does retroactively become a morally significant event in a timeless perspective, if I understand the theory properly.
The point was not necessarily to advocate torture. It’s to take the math seriously.
Just how many people do you expect to hear about the torture? Have you taken seriously how big a number 3^^^3 is? By how many utilons do you expect their disutility to exceed the disutility from the dust specks?
First, I don’t buy the process of summing utilons across people as a valid one. Lots of philosophers have objected to it. This is a bullet-biting club, and I get that. I’m just not biting those bullets. I don’t think 400 years of criticism of Utilitarianism can be solved by biting all the bullets. And in Eliezer’s recent writings, it appears he is beginning to understand this. Which is great. It is reducing the odds he becomes a moral monster.
Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that from the Complex Values post and posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!
I get what you’re trying to do here. You’re trying to demonstrate that ordinary people are innumerate, and you all are getting a utility spike from imagining you’re more rational than them by choosing the “right” (naive hyper-rational utilitarian-algebraist) answer. But I don’t think it’s that simple when we’re talking about morality. If it were, the philosophical project that’s lasted 2500 years would finally be over!
You were the one who claimed that the mental discomfort from hearing about torture would swamp the disutility from the dust specks—I assumed from that, that you thought they were commensurable. I thought it was odd that you thought they were commensurable but thought the math worked out in the opposite direction.
I believe Eliezer’s post was not so much directed at folks who disagree with utilitarianism—rather, it’s supposed to be about taking the math seriously, for those who are. If you’re not a utilitarian, you can freely regard it as another reductio.
You don’t have to be any sort of simple or naive utilitarian to encounter this problem. As long as goods are in any way commensurable, you need to actually do the math. And it’s hard to make a case for a utilitarianism in which goods are not commensurable—in practice, we can spend money towards any sort of good, and we don’t favor only spending money on the highest-order ones, so that strongly suggests commensurability.
No. One of those actions, or something different, happens if I take no action. Assuming that neither the one person nor the 3^^^3 people have consented to allow me to harm them, I must choose the course of action by which I harm nobody, and the abstract force harms people.
If you instead offer me the choice where I prevent the harm (and that the 3^^^3+1 people all consent to allow me to do so), then I choose to prevent the torture.
My maximal expected utility is one in which there is a universe in which I have taken zero additional actions without the consent of every other party involved. With that satisfied, I seek to maximize my own happiness. It would make me happier to prevent a significant harm than to prevent an insignificant harm, and both would be preferable to preventing no harm, all other things being equal.
If the people in question consented to the treatment, then the decision is amoral, and I would choose to inflict the insignificant harm.
From a strict utility perspective, if you describe the value of the torture as −1, do you describe the value of the speck of dust in one person's eye as less than −1/(3^^^3)? There is some epsilon for which it is preferable to have harm of epsilon done to any real number of people than to have a harm of −1 done to one person. Admitting that does not prohibit you from comparing epsilons, either.
How bad is the torture option?
Let's say a human brain can have ten thoughts per second; or the rate of human awareness is ten perceptions per second. Fifty years of torture then means nearly sixteen billion tortured thoughts, or perceptions of torture.
Let’s say a human brain can distinguish twenty logarithmic degrees of discomfort, with the lowest being “no discomfort at all”, the second-lowest being a dust speck, and the highest being torture. In other words, a single moment of torture is 2^19 = 524288 times worse than a dust speck; and a dust speck is the smallest discomfort possible. Let’s call a unit of discomfort a “dol” (from Latin dolor).
In other words, the torture option means about 1.6 × 10^10 moments × 2^19 dols; whereas the dust-specks option means 3^^^3 moments × 1 dol.
The assumptions going into this argument are the speed of human thought or perception, and the scale of human discomfort or pain. These are not accurately known today, but there must exist finite limits — humans do not think or perceive infinitely fast; and the worst unpleasantness we can experience is not infinitely bad. I have assumed a log scale for discomfort because we use log scales for other senses, e.g. brightness of light and volume of sound. However, all these assumptions can be empirically corrected based on facts about human neurology.
Torture is really, really bad. But it is not infinitely bad.
That said, there may be other factors in the moral calculation of which to prefer. For instance, the moral badness of causing a particular level of discomfort may not be linear in the amount of discomfort: causing three dols once may be worse than causing one dol three times. However, this seems difficult to justify. Discomfort is subjective, which is to say, it is measured by the beholder — and the beholder only has so much brain to measure it with.
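Under the comment's own assumptions (ten perceptions per second, a twenty-step log scale of discomfort), the torture side of the ledger works out as follows; all figures here are the comment's stipulations, not established facts:

```python
# Back-of-envelope total for the torture option, in "dols".
SECONDS_PER_YEAR = 365.25 * 24 * 3600
moments = 50 * SECONDS_PER_YEAR * 10   # 10 perceptions/sec for 50 years
torture_dols = moments * 2**19         # each moment at the top of the 20-step scale

print(f"{moments:.2e} moments")        # ~1.6e10
print(f"{torture_dols:.2e} dols")      # ~8.3e15
# The specks option is 3^^^3 moments at 1 dol each; even 3^^3 = 7.6e12
# is already within sight of 8.3e15, and 3^^^3 is a tower of 3s some
# 7.6e12 layers tall -- so on these assumptions the specks total is
# incomparably larger.
```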
I suspect that I would prefer the false memory of having been tortured for five minutes to the false memory of having been tortured for a year, assuming the memories are close replicas of what memories of the actual event would be like. I would relatedly prefer that someone else experience the former rather than the latter, even if I’m perfectly aware the memory is false. This suggests to me that whatever I’m doing to make my moral judgments that torture is bad, it’s not just summing the number of perception-moments… there are an equal number of perception-moments in those two cases, after all. (Specifically, none at all.)
That said, this line of thinking quickly runs aground on the “no knock-on effects” condition of the initial thought experiment.
Actually, from what I read about related research in “Thinking, Fast and Slow”, it’s not clear that you would (or that the difference would be as large as you might expect, at least). It seems that memories of pain depend largely on the most intense moment of pain and on the final moment of pain, not necessarily on duration.
For example, in one experiment (I read the book a week ago and write from memory), subjects were asked to put their hand in a bowl of cold water (a painful experience) for two minutes, then they were asked to put their hands in cold water for two minutes, followed by the water being warmed gradually over another 5 minutes. (There were reasonable controls, obviously.) Then they were asked which experience to repeat. The majority chose experience two, even though intuitively it is strictly worse than experience one.
Of course, you’d have to find the actual related paper(s), check how high the correlation/ignoring-duration effect is, check if there’s significant inter-individual variation (whether maybe you’re an unusual person who cares about duration), but, regardless, there are significant reasons to doubt your intuitions in this scenario.
… huh.
I wonder if we might actually value experiences this way?
Daniel Kahneman suggests that we do. We remember things imperfectly and optimize for the way we remember them. Wiki has a quick summary.
True — we need a term for moments of discomfort caused by contemplation, not just ones caused by perception.
It seems to me, though, that your brain can only perceive a finite number of gradations of unpleasant contemplation, too. The memory of being tortured for five minutes, the memory of being tortured for a year, and the memory of having gotten a dust speck in your eye could occupy points on this scale of unpleasantness.
I think I have to go with the dust specks. Tomorrow, all 3^^^3 of those people will have forgotten entirely about the speck of dust. It is an event nearly indistinguishable from thermal noise. People, all of them everywhere, get dust specks in their eyes just going about their daily lives with no ill effect.
The torture actually hurts someone. And in a way that’s rather non-recoverable. Recoverability plays a large part in my moral calculations.
But there’s a limit to how many times I can make that trade. 3^^^3 people is a LOT of people, and it doesn’t take a significant fraction of THAT at all before I have to stop saving torture victims, lest everyone everywhere’s lives consist of nothing but a sandblaster to the face.
What you’re doing there is positing a “qualitative threshold” of sorts where the anti-hedons from the dust specks cause absolutely zero disutility whatsoever. This can be an acceptable real-world evaluation within loaded subjective context.
However, the problem states that the dust specks have non-zero disutility. This means that they do have some sort of predicted net negative impact somewhere. If that impact is merely to slow down the brain's visual recognition of one word by even 0.03 seconds, in a manner that is directly causal and where avoiding the dust speck would have avoided this delay, then over 3^^^3 people that is still more man-hours of work lost than the sum of all lifetimes of all humans on Earth to date. If that is not a tragic loss far more dire than one person being tortured, I don't see what could be. And I'm obviously being generous with that "0.03 seconds" estimate.
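A rough check of that claim; the population and lifespan figures below are my own stand-in assumptions for illustration:

```python
# How many 0.03-second delays does it take to exceed every human
# lifetime ever lived?
humans_ever = 110e9                        # ~110 billion humans have ever lived
avg_lifespan_s = 50 * 365.25 * 24 * 3600   # assume a ~50-year average lifespan
all_lifetimes_s = humans_ever * avg_lifespan_s   # ~1.7e20 seconds

delay_s = 0.03
people_needed = all_lifetimes_s / delay_s  # ~5.8e21 people
print(f"{people_needed:.1e}")
# A mere ~5.8e21 speck victims already lose more time than all human
# lives combined -- and 3^^^3 dwarfs 5.8e21 beyond comprehension.
```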
Theoretically, all this accumulated lost time could mean the difference between the extinction or survival of the human race in a pan-galactic super-cataclysmic event, simply by throwing us off the particular, Planck-level-exactly-timed course of events that would have allowed us to find a way to survive just barely, by a few (total, relatively absolute) seconds too close for comfort.
That last is assuming the deciding agent has the superintelligence power to actually compute this. If calculating from unknown future causal utilities, and the expected utility of a dust speck is still negative non-zero, then it is simple abstraction of the above example and the rational choice is still simply the torture.
If you ask me the slightly different question, where I choose between 50 years of torture applied to one man, or 3^^^3 specks of dust falling one each into 3^^^3 people's eyes plus all humanity being destroyed, I will give a different answer. In particular, I will abstain, because my moral calculation would then favor the torture over the destruction of the human race, but I have a built-in failure mode where I refuse to torture someone even if I somehow think it is the right thing to do.
But that is not the question I was asked. We could also have the man tortured for fifty years and then the human race gets wiped out BECAUSE the pan-galactic cataclysm favors civilizations who don’t make the choice to torture people rather than face trivial inconveniences.
Consider this alternate proposal:
Hello Sir and/or Madam:
I am trying to collect 3^^^3 signatures in order to prevent a man from being tortured for 50 years. Would you be willing to accept a single speck of dust into your eye towards this goal? Perhaps more? You may sign as many times as you are comfortable with. I eagerly await your response.
Sincerely,
rkyeun
PS: Do you know any masochists who might enjoy 50 years of torture?
BCC: 3^^^3-1 other people.
We did specify no long-term consequences—otherwise the argument instantly passes, just because at least 3^^7625597484986 people would certainly die in car accidents due to blinking. (3^^^3 is 3 to the power of that.)
If you still use “^” to refer to Knuth’s up-arrow notation, then 3^^^3 != 3^(3^^26).
3^^^3 = 3^^(3^^3) = 3^^(3^27) != 3^(3^^27)
Fixed.
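For concreteness, the up-arrow recursion behind this correction can be sketched as below; only tiny arguments are evaluated, since anything like 3^^^3 is wildly uncomputable:

```python
def up(a, n, b):
    """Knuth's a ^^...^ b with n up-arrows: one arrow is ordinary
    exponentiation, and otherwise a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)),
    with a ↑^n 0 = 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))   # 3^3 = 27
print(up(3, 2, 3))   # 3^^3 = 7625597484987, matching the post
print(up(2, 3, 3))   # 2^^^3 = 2^^4 = 65536
# up(3, 3, 3) would be 3^^(3^^3) = 3^^7625597484987 -- do not attempt.
```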
I admit the argument of long-term “side effects” like extinction of the human race was gratuitous on my part. I’m just intuitively convinced that such possibilities would count towards the expected disutility of the dust motes in a superintelligent perfect rationalist’s calculations. They might even be the only reason there is any expected disutility at all, for all I know.
Otherwise, my puny tall-monkey brain wiring has a hard time imagining how a micro-fractional anti-hedon would actually count for anything other than absolute zero expected utility in the calculations of any agent with imperfect knowledge.
Sure. Admittedly, when there are 3^^^3 humans around, torturing me for fifty years is also such a negligible amount of suffering relative to the current lived human experience that it, too, has an expected cost that rounds to zero in the calculations of any agent with imperfect knowledge, unless they have some particular reason to care about me, which in that world is vanishingly unlikely.
Heh.
When put like that, my original post / arguments sure seem not to have been thought through as much as I thought I had.
Now, rather than thinking the solution obvious, I’m leaning more towards the idea that this eventually reduces to the problem of building a good utility function, one that also assigns the right utility value to the expected utility calculated by other beings based on unknown (or known?) other utility functions that may or may not irrationally assign disproportionate disutility to respective hedon-values.
Otherwise, it’s rather obvious that a perfect superintelligence might find a way to make the tortured victim enjoy the torture and become enhanced by it, while also remaining a productive member of society during all fifty years of torture (or some other completely ideal solution we can’t even remotely imagine) - though this might be in direct contradiction with the implicit premise of torture being inherently bad, depending on interpretation/definition/etc.
EDIT: Which, upon reading up a bit more of the old comments on the issue, seems fairly close to the general consensus back in late 2007.
If asked independently whether or not I would take a dust speck in the eye to spare a stranger 50 years of torture, I would say "sure". I suspect most people would if asked independently. It should make no difference to each of those 3^^^3 dust-speck victims that there are another (3^^^3)−1 people who would also take the dust speck if asked.
It seems then that there are thresholds in human value. Human value might be better modeled by surreals than by reals. In such a system we could represent the utility of 50 years of torture as −Ω and the utility of a dust speck in one's eye as −1. This way, no matter how many dust specks end up in eyes, they don't add up to torturing someone for 50 years. However, we would still minimize torture, and minimize dust specks.
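One concrete way to play with this surreal-style ordering (my own sketch, not anything from the thread) is lexicographic comparison: represent each cost as a (torture, specks) pair, so that specks add up among themselves but never trade off against any amount of torture.

```python
# Sketch of a lexicographic ("surreal-flavored") cost ordering: costs are
# (torture, specks) pairs, compared component by component, so no finite
# number of specks ever sums to any amount of torture.
from typing import NamedTuple

class Cost(NamedTuple):
    torture: int  # the "-Omega"-scale component
    specks: int   # the "-1"-scale component

    def plus(self, other: "Cost") -> "Cost":
        """Costs add component-wise; specks never spill into torture."""
        return Cost(self.torture + other.torture, self.specks + other.specks)

FIFTY_YEARS = Cost(torture=1, specks=0)

def specks(n: int) -> Cost:
    return Cost(torture=0, specks=n)

# Python compares tuples lexicographically, so for ANY n the specks cost less:
assert specks(10 ** 100) < FIFTY_YEARS
```

Minimizing under this order still minimizes torture first and specks second, matching the comment's "we would still minimize torture, and minimize dust specks."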
The greater problem is to exhibit a general procedure for when we should treat one fate as being infinitely worse than another, vs. treating it as merely being some finite amount worse.
That’s a fairly manipulative way of asking you to make that decision, though. If I were asked whether or not I would take a hard punch in the arm to spare a stranger a broken bone, I would answer “sure”, and I suspect most people would, as well. However, it is pretty much clear to me that 3^^^3 people getting punched is much much worse than one person breaking a bone.
That rests on the assumption that each person only cares about their own dust speck and the possible torture victim. If people are allowed to care about the aggregate quantity of suffering, then this choice might represent an Abilene paradox.
Here’s a suggestion: if someone going through a fate A, is incapable of noticing whether or not they’re going through fate B, then fate A is infinitely worse than fate B.
The other day, I got some dirt in my eye, and I thought “That selfish bastard, wouldn’t go and get tortured and now we all have to put up with this s#@$”.
I don’t see that it’s necessary—or possible, for that matter—for me to assign dust specks and torture to a single, continuous utility function. On a scale of disutility that includes such events as “being horribly tortured,” the disutility of a momentary irritation such as a dust speck in the eye has a value of precisely zero—not 0.000...0001, but just plain 0, and of course, 0 x 3^^^3 = 0.
Furthermore, I think the “minor irritations” scale on which dust specks fall might increase linearly with the time of exposure, and would certainly increase linearly with number of individuals exposed to it. On the other hand, the disutility of torture, given my understanding of how memory and anticipation affect people’s experience of pain, would increase exponentially over time from a range of a few microseconds to a few days, then level off to something less than a linear increase with acclimatization over the range of days to years. It would increase linearly with the number of people suffering a given degree of pain for a given amount of time. (All other things being equal, of course. People’s pain tolerance varies with age, experience, and genetics; it would be much worse to inflict any given amount of pain on a young child than on an adult who’s already gone through, say, Navy S.E.A.L. training, and thus demonstrated a far higher-than-average pain tolerance.)
Thus, it would be enormously worse to inflict X amount of pain on one individual for sixty minutes than on 60 individuals for one minute each, which in turn would be much worse than inflicting the same pain on 3600 individuals for one second each—and if we could spread it out to a microsecond each for 3,600,000,000 people, the disutility might vanish altogether as the "experience" becomes too brief for the human nervous system to register at all, and thus ceases to be an experience. However, once we get past where acclimatization inflects the curve, it would be much worse to torture 52 people for one week each than to torture one person for an entire year. It might even be worse to torture ten people for one week each than one for an entire year—I'm not sure of the precise values involved in this utility function, and happily, at the fine scale, I'll probably never need to work them out (the empirical test is possible in principle, of course, but could only be performed in practice by a fiend like Josef Mengele).
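The curve described above can be sketched numerically; every constant below is an invented assumption, chosen only to reproduce the qualitative shape (exponential early growth, perception floor, sublinear tail after acclimatization).

```python
# A rough numerical sketch of the disutility-vs-duration curve described
# above: exponential growth up to a few days, then sublinear growth after
# acclimatization. All constants here are invented assumptions.
import math

RATE = 100.0              # steepness of early growth, per day (assumed)
ACCLIMATIZATION = 3.0     # days at which the curve levels off (assumed)
PERCEPTION_FLOOR = 1e-11  # days (~a microsecond): too brief to register

def pain_disutility(days: float) -> float:
    """Disutility of one person suffering a fixed pain for `days` days."""
    if days < PERCEPTION_FLOOR:
        return 0.0  # the nervous system never registers it
    if days <= ACCLIMATIZATION:
        return math.exp(RATE * days) - 1.0  # steep early growth
    base = math.exp(RATE * ACCLIMATIZATION) - 1.0
    return base * (1.0 + math.log(days / ACCLIMATIZATION))  # sublinear tail

# One person for an hour is far worse than 60 people for a minute each...
assert pain_disutility(1 / 24) > 60 * pain_disutility(1 / (24 * 60))
# ...but past the inflection, 52 person-weeks beat one person-year:
assert 52 * pain_disutility(7.0) > pain_disutility(365.0)
```

The two assertions mirror the comment's own claims: concentration is worse on short timescales, but after the acclimatization inflection, spreading the same total duration across more people becomes worse.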
There’s also the fact that knowing many people can and have endured a particular pain seems to make it more endurable for others who are aware of that fact. As Spider Robinson says, “Shared joy is increased, shared pain is lessened”—I don’t know if that really “refutes entropy,” but both of those clauses are true individually. That’s part of the reason egalitarianism, as other commenters have pointed out, has positive utility value.
If dust specks have a value of 0, then what’s the smallest amount of discomfort that has a nonzero value instead? Use that as your replacement dust speck.
And of course, the disutility of torture certainly increases in nonlinear ways with time. The 3^^^3 is there to make up for that. 50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people. This in turn is probably not as bad as 12.5 years of torture for a trillion trillion people (sorry my large number vocabulary is lacking). If we keep doing this (halving the torture length, multiplying the number of people by a trillion) then are we always going from bad to worse? And do we ever get to the point where each individual person tortured experiences about as much discomfort as our replacement dust speck?
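The halving game above can be sketched numerically under one illustrative assumption: that per-person disutility grows superlinearly with duration, here as duration squared.

```python
# Sketch of the halving game: halve the torture duration, multiply the
# victims by a trillion, repeat. The per-person disutility model
# (duration ** 2, i.e. superlinear in time) is an invented assumption.
def total_disutility(people: float, years: float, exponent: float = 2.0) -> float:
    return people * years ** exponent

people, years = 1.0, 50.0
prev = total_disutility(people, years)
for _ in range(20):
    people, years = people * 1e12, years / 2.0
    cur = total_disutility(people, years)
    assert cur > prev  # under this model, every halving step is worse
    prev = cur
# Each step multiplies the total by 1e12 / 2**exponent, so totals keep
# growing as long as 2**exponent < 1e12 (any exponent below ~39.9).
```

So under any such polynomial model, the answer to "are we always going from bad to worse?" is yes, all the way down to durations near the perception threshold.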
If dust specks have a value of 0, then what’s the smallest amount of discomfort that has a nonzero value instead?
I don’t know exactly where I’d make the qualitative jump from the “discomfort” scale to the “pain” scale. There are so many different kinds of unpleasant stimuli, and it’s difficult to compare them. For electric shock, say, there’s probably a particular curve of voltage, amperage and duration below which the shock would qualify as discomfort, with a zero value on the pain scale, and above which it becomes pain (I’ll even go so far as to say that for short periods of contact, the voltage and amperage values lie between those of a violet wand and those of a stun gun). For localized heat, I think it would have to be at least enough to cause a small first-degree burn; for localized cold, enough to cause the beginnings of frostbite (i.e. a few living cells lysed by the formation of ice crystals in their cytoplasm). For heat and cold over the whole body, it would have to be enough to overcome the body’s natural thermostat, initiating hypothermia or heatstroke.
It occurs to me that I’ve purposefully endured levels of discomfort I would probably regard as pain with a non-zero value on the torture scale if it was inflicted on me involuntarily, as a result of working out at the gym (which has an expected payoff in health and appearance, of course), and from wearing an IV for two 36-hour periods in a pharmacokinetic study for which I’d volunteered (it paid $500); I would certainly do so again, for the same inducements. Choice makes a big difference in our subjective experience of an unpleasant stimulus.
50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people.
Of course not; by the scale I posited above, 50 years for one person isn’t even as bad as 25 years for two people.
If we keep doing this (halving the torture length, multiplying the number of people by a trillion) then are we always going from bad to worse?
No, but the length has to get pretty tiny (probably somewhere between a millisecond and a microsecond) before we reverse the direction.
And do we ever get to the point where each individual person tortured experiences about as much discomfort as our replacement dust speck?
Yes, we do; in fact, we eventually get to a point where each person “tortured” experiences no discomfort at all, because the nervous system is not infinitely fast nor infinitely sensitive. If you’re using temperature for your torture, heat transfer happens at a finite speed; no matter how hot or cold the material that touches your skin, there’s a possible time of contact short enough that it wouldn’t change your skin temperature enough to cause any discomfort at all. Even an electric shock could be brief enough not to register.
The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don’t use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That’s perfectly OK—but then you have to choose where the jumps are.
I think that’s probably more practical than trying to make it continuous, considering that our nervous systems are incapable of perceiving infinitesimal changes.
Yes, we are running on corrupted hardware at about 100 Hz, and I agree that defining broad categories to make first-cut decisions is necessary.
But if we were designing a morality program for a super-intelligent AI, we would want to be as mathematically consistent as possible. As shminux implies, we can construct pathological situations that exploit the particular choice of discontinuities to yield unwanted or inconsistent results.
It could be worse than that: there might not be a way to choose the jumps consistently, say, to include different kinds of discomfort, some related to physical pain and others not (tickling? itching? anguish? ennui?)
In other words, it follows that 1 person being tortured for 50 years is better than 3^^^3 people being tortured for a millisecond.
You’re well on your way to the dark side.
I might have to bring it up to a minute or two before I’d give you that—I perceive the exponential growth in disutility for extreme pain over time during the first few minutes/hours/days as very, very steep. Now, if we posit that the people involved are immortal, that would change the equation quite a bit, because fifty years isn’t proportionally that much more than fifty seconds in a life that lasts for billions of years; but assuming the present human lifespan, fifty years is the bulk of a person’s life. What duration of torture qualifies as a literal fate worse than (immediate) death, for a human with a life expectancy of eighty years? I’ll posit that it’s more than five years and less than fifty, but beyond that I wouldn’t care to try to choose.
Let’s step away from outright torture and look at something different: solitary confinement. How long does a person have to be locked in a room against his or her will before it rises to a level that would have a non-zero disutility you could multiply by 3^^^3 to get a higher disutility than that of a single person (with a typical, present-day human lifespan) locked up that way for fifty years? I’m thinking, off the top of my head, that non-zero disutility on that scale would arise somewhere between 12 and 24 hours.
If getting hit by a dust speck has u = 0, then air pressure great enough to crush you has u = 0.
Nope, that doesn’t follow; multiplication isn’t the only possible operation that can be applied to this scale.
Incidentally, I think that if you pick “dust specks,” you’re asserting that you would walk away from Omelas; if you pick torture, you’re asserting that you wouldn’t.
The kind of person who chooses an individual suffering torture in order to spare a large enough number of other people lesser discomfort endorses Omelas. The kind of individual who doesn’t make that choice not only walks away from Omelas, but wants it not to exist at all.
This is exactly what bothered me about the story, actually. You can choose to help the child and possibly doom Omelas, or you can choose not to, for whatever reason. But walking away doesn’t solve the problem!
True. On reflection, it’s patently obvious that the Less Wrong way to deal with Omelas is not to accept that the child’s suffering is necessary to the city’s welfare, and dedicate oneself to finding the third alternative. “Some of them understand why,” so it’s obviously possible to know what the connection is between the child and the city; knowing that, one can seek some other way of providing whatever factor the tormented child provides. That does mean allowing the suffering to go on until you find the solution, though—if you free the child and ruin Omelas, it’s likely too late at that point to achieve the goal of saving both.
Well, it depends on the nature of the problem I’ve identified. If I endorse Omelas, but don’t wish to partake of it myself, walking away solves that problem. (I endorse lots of relationships I don’t want to participate in.)
That’s not a moral objection, that’s a personal preference.
Yes, that’s true. It’s hard to have a moral objection to something I endorse.
It certainly doesn’t. However, it shows more moral perceptiveness than most people have.
Bravo, Eliezer. Anyone who says the answer to this is obvious is either WAY smarter than I am, or isn’t thinking through the implications.
Suppose we want to define Utility as a function of pain/discomfort on the continuum of [dust speck, torture] and including the number of people afflicted. We can choose whatever desiderata we want (e.g. positive real valued, monotonic, commutative under addition).
But what if we choose as one desideratum, “There is no number n large enough such that Utility(n dust specks) > Utility(50 yrs torture)”? What does that imply about the function? It can’t be additive in n (even if n were continuous), since any fixed per-speck disutility summed over enough specks exceeds any bound; that rules out multiplicative functions trivially. Whatever satisfies the desideratum has to be bounded in n.
Would it have singularities? If so, how would we combine utility functions at singular values? Take limits? How, exactly?
Or must dust specks and torture live in different spaces, and is there no basis that can be used to map one to the other?
The bottom line: is it possible to consistently define utility using the above desideratum? It seems like it must be so, since the answer is obvious. It seems like it must not be so, because of the implications for the utility function as the arguments change.
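For what it's worth, one family of functions that satisfies that desideratum (at the price of giving up additivity across specks) is a bounded, saturating disutility. A quick sketch, with every constant invented for illustration:

```python
# Sketch: a bounded ("saturating") speck disutility, so that no number of
# specks (not even 3^^^3, approximated here by infinity) ever exceeds the
# disutility of 50 years of torture. All constants are invented.
import math

TORTURE = 1_000_000.0   # disutility of 50 years of torture (assumed units)
SPECK_CAP = 1_000.0     # least upper bound on total speck disutility
SCALE = 1e9             # how many specks it takes to approach the cap

def speck_disutility(n: float) -> float:
    """Total disutility of n dust specks: monotonic in n, bounded by SPECK_CAP."""
    return SPECK_CAP * (1.0 - math.exp(-n / SCALE))

for n in (1.0, 1e12, 1e100, float("inf")):  # inf stands in for 3^^^3
    assert speck_disutility(n) < TORTURE
```

The function is still monotonic (more specks are always worse), but it asymptotes below the torture value, which is exactly the threshold behavior the desideratum demands.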
Edit: After discussing with my local meetup, this is somewhat resolved. The above desiderata require the