But still, WHY is torture better? What is even the problem with the dust specks? Will some of the people who get a dust speck in their eye die in accidents caused by the dust particles? Is this why the dust specks are so bad? But then, have we considered the fact that the dust specks may save an equal number of people who would otherwise die? I really don't get it and it bothers me a lot.
It’s not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously notice it or not), the cost to you of having it fly into your eye is not zero.
Now, something nonzero multiplied by a sufficiently large number will necessarily be larger than the cost of one human being’s life in torture.
Now you are getting it completely wrong. You can't add up the harm from dust specks if it is happening to different people. Every individual has the capability to recover from it. Think about it. By that logic it would be worse to rip a hair from every living being in the universe than to nuke New York. If the people in charge reasoned that way, we might have Armageddon in no time.
Each human death has only finite cost. We certainly act this way in our everyday lives, exchanging human lives for the convenience of driving cars and so on.
If by “our universe” you do not mean only the observable universe, but include the Level I multiverse, then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number eventually becomes larger than the fixed cost of nuking New York.
Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don’t see why I should change it. “I don’t like the conclusions!!!” is not a valid objection.
If the people in charge reasoned that way, we might have Armageddon in no time.
If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we’ll have larger problems than the potential nuking of New York.
Okay, I was trying to learn from this post, but now I see that I have to try to explain things myself in order for this communication to become useful. When it comes to pain, it is hard to explain why one person's great suffering is worse than many people suffering very, very little if you don't understand it yourself. So let us change the currency from pain to money.
Let's say that you and I need to fund a large algae plantation in order to let the Earth's population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however… Should we:
a) Take one dollar from every person around the world earning at least a minimum wage, who can still afford housing, food, etc. even if we take that one dollar?
or should we
b) Take all the money (instantly) from Denmark and watch it collapse into bankruptcy?
If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people to have to sacrifice one dollar.
If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people to have to sacrifice one dollar.
The trouble is that there is a continuous sequence from
Take $1 from everyone
Take $1.01 from almost everyone
Take $1.02 from almost almost everyone
...
Take a lot of money from very few people (Denmark)
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, that taking $20 each from 1⁄20 of the population of the world is good, but taking $20.01 each from slightly less than 1⁄20 of the population of the world is bad. Can you say that?
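The sequence above can be sketched numerically. This is a toy model only: it assumes, purely for illustration, a world population of 7 billion and a fixed funding target equal to $1 from everyone; neither number appears in the original argument.

```python
# A toy sketch of the "continuous sequence" between the two options.
# Assumption (not from the thread): population of 7 billion, and a fixed
# funding target equal to $1 from each person. Each step raises the
# per-person levy by one cent and shrinks the group that pays, so
# adjacent steps differ only slightly.

def payers_needed(target, amount):
    """How many people must pay `amount` each to raise `target`."""
    return int(target // amount)

POPULATION = 7_000_000_000
TARGET = POPULATION * 1  # the total raised by taking $1 from everyone

for cents in range(100, 106):
    amount = cents / 100
    print(f"take ${amount:.2f} each from {payers_needed(TARGET, amount):,} people")
```

At $20.00 per person the paying group has shrunk to 1⁄20 of the population, and continuing the loop far enough reaches "take everything from a small group", which is the Denmark endpoint.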
You will have to say, for instance, that taking $20 each from 1⁄20 of the population of the world is good, but taking $20.01 each from slightly less than 1⁄20 of the population of the world is bad. (emphasis mine)
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
I think my last response starting with YES got lost somehow, so I will clarify here. I don't follow the sequence because I don't know where the critical limit is. Why? Because the critical limit depends on other factors which I can't foresee. Read up on basic global economics. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out, but if I take a lot of money from one person I make him poor. That is how economics works: you can recover from small losses easily, while some losses are too big to ever recover from, hence why some banks sometimes go bankrupt. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.
I don't follow the sequence because I don't know where the critical limit is.
You may not know exactly where the limit is, but the point isn’t that the limit is at some exact number, the point is that there is a limit. There’s some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?
So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don’t know it).
But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.
No, I don't have to make the same conclusion about $20.00 versus $20.01. I left a safety margin when I said one dollar, since I don't want to follow the sequence but am very, very sure that one dollar is a safe number. I don't know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.
No, I don't have to make the same conclusion about $20.00 versus $20.01
Yes, you do. You just admitted it, although the number might not be 20. And whether you admit it or not it logically follows from what you said up above.
You will have to say, for instance, that taking $20 each from 1⁄20 of the population of the world is good, but taking $20.01 each from slightly less than 1⁄20 of the population of the world is bad. Can you say that?
To answer that: well, yes, it MIGHT be the case, I don't know, therefore I only ask for one dollar. Is that making it any clearer?
Your belief about $1 versus bankruptcy logically implies a similar belief about $20.00 versus $20.01 (or whatever the actual numbers are). You can’t just answer that that “might” be the case—if your original belief is as described, that is the case. You have to be willing to defend the logical consequence of what you said, not just defend the exact words that you said.
What do you mean by “whatever the actual numbers are”? Numbers for what? For the amount it takes to ruin someone? As long as the individual donations don't ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?
I just wrote 20 because I have to write something, but there is a number. This number has a value, even if you don’t know it. Pretend I put the real number there instead of 20.
Yes, but still, what number? IF it is as I already suggested, the number for the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.
Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don’t know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.
Yes, in my last comment I agreed to it. There is such a number. I don’t think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can't know exactly what that number is, since the global and local economy isn't that stable. Tapping out.
the number for the amount of money that can be taken without ruining anyone
So you’re saying there exists such a number, such that taking that amount of money from someone wouldn’t ruin them, but taking that amount plus a tiny bit more (say, 1 cent) would?
YES, because that is how economics works! You can't take a lot of money from ONE person without him becoming poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like with pain, you can recover from small losses after a time.
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
If you think that 100C water is hot and 0C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.
No, because temperature is (very close to) a continuum, whereas good/bad is a binary. To see this more clearly, you can replace the question “Is this action good or bad?” with “Would an omniscient, moral person choose to take this action?”, and you can instantly see that the answer can only be “yes” (good) or “no” (bad).
(Of course, it’s not always clear which choice the answer is—hence why so many argue over it—but the answer has to be, in principle, either “yes” or “no”.)
No, because temperature is (very close to) a continuum, whereas good/bad is a binary.
First, I’m not talking about temperature, but about categories “hot” and “cold”.
Second, why in the world would good/bad be binary?
“Would an omniscient, moral person choose to take this action?”
I have no idea—I don’t know what an omniscient person (a.k.a. God) would do, and in any case the answer is likely to be “depends on which morality we are talking about”.
Oh, and would an omniscient being call that water hot or cold?
First, I’m not talking about temperature, but about categories “hot” and “cold”.
You’ll need to define your terms for that, then. (And for the record, I don’t use the words “hot” and “cold” exclusively; I also use terms like “warm” or “cool” or “this might be a great temperature for a swimming pool, but it’s horrible for tea”.)
Also, if you weren’t talking about temperature, why bother mentioning degrees Celsius when talking about “hotness” and “coldness”? Clearly temperature has something to do with it, or else you wouldn’t have mentioned it, right?
Second, why in the world would good/bad be binary?
Because you can always replace a question of goodness with the question “Would an omniscient, moral person choose to take this action?”.
I have no idea—I don’t know what an omniscient person (aka God) will do,
Just because you have no idea what the answer could be doesn’t mean the true answer can fall outside the possible space of answers. For instance, you can’t answer the question “Would an omniscient moral reasoner choose to take this action?” with something like “fish”, because that falls outside of the answer space. In fact, there are only two possible answers: “yes” or “no”. It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either “yes” or “no”, and that holds true even if you don’t know what the answer is.
the answer is likely to be “depends on which morality we are talking about”
There is only one “morality” as far as this discussion is concerned. There might be other “moralities” held by aliens or whatever, but the human CEV is just that: the human CEV. I don’t care about what the Babyeaters think is “moral”, or the Pebblesorters, or any other alien species you care to substitute—I am human, and so are the other participants in this discussion. The answer to the question “which morality are we talking about?” is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I’d rather talk game theory with Clippy than morality—it’s far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.
In fact, there are only two possible answers: “yes” or “no”
I don’t think so.
You’re making a circular argument—good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.
There is only one “morality” for the participants of this discussion.
Really? Either I’m not a participant in this discussion or you’re wrong. See: a binary outcome :-D
but the human CEV is just that: the human CEV
I have no idea what the human CEV is, or even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
You’re making a circular argument—good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.
Name a third alternative that is actually an answer, as opposed to some sort of evasion (“it depends”), and I’ll concede the point.
Also, I’m aware that this isn’t your main point, but… how is the argument circular? I’m not saying something like, “It’s binary, therefore there are two possible states, therefore it’s binary”; I’m just saying “There are two possible states, therefore it’s binary.”
Either I’m not a participant in this discussion or you’re wrong. See: a binary outcome :-D
Are you human? (y/n)
I have no idea what the human CEV is, or even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
Which part do you object to? The “coherent” part, the “extrapolated” part, or the “volition” part?
Name a third alternative that is actually an answer
“Doesn’t matter”.
First of all you’re ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we’re in trouble now… X-D
Second, you’re assuming atomicity of actions and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
Which part do you object to?
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part while also having great doubts about the “extrapolated” part as well.
would an omniscient perfectly moral being scratch his/her/its butt?
(Side note: this conversation is taking a rather strange turn, but whatever.)
If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)… well, it’s increasing its own utility by scratching its butt, isn’t it? If it increases its own utility by doing so and doesn’t decrease net utility elsewhere, then that’s a net increase in utility. Scratch away, I say.
Second, you’re assuming atomicity of actions and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.
Sure. I agree I did just handwave a lot of stuff with respect to what an “action” is… but would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally? (Moral by human standards, of course, not Pebblesorter standards.)
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
Again, I agree… but then, knowledge of the Banach-Tarski paradox isn’t of much use to most people.
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part while also having great doubts about the “extrapolated” part as well.
Fair enough. I don’t have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
The assumption that morality boils down to utility is a rather huge assumption :-)
would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally?
Conditional on having a good definition of “action” and on having a good definition of “morally”.
you can generalize to uncertain situations simply by applying probability theory
I don’t think so, at least not “simply”. An omniscient being has no risk and no risk aversion, for example.
isn’t of much use to most people
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
The assumption that morality boils down to utility is a rather huge assumption :-)
It’s not an assumption; it’s a normative statement I choose to endorse. If you have some other system, feel free to endorse that… but then we’ll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro’s distinction between good and bad.
on having a good definition of “morally”
Agree.
An omniscient being has no risk and no risk aversion, for example.
Well, it could have risk aversion. It’s just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
I disagree. Take the following two statements:
Morality, properly formalized, would be useful for practical purposes.
Morality is not currently properly formalized.
There is no contradiction in these two statements.
To see this more clearly, you can replace the question “Is this action good or bad?” with “Would an omniscient, moral person choose to take this action?”, and you can instantly see that the answer can only be “yes” (good) or “no” (bad).
True. I’m not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don’t select actions randomly, so that doesn’t seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don’t give you a car doesn’t prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?
Also, why the heck do you think there exist words for “better” and “worse”?
Those are relative terms, meant to compare one action to another. That doesn’t mean you can’t classify an action as “good” or “bad”; for instance, if I decided to randomly select and kill 10 people today, that would be an unambiguously bad action, even if it would theoretically be “worse” if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking “Is this number bigger than that number?” and “Is this number positive or negative?”.
In this case I do not disagree with you. The number of people on earth is simply not large enough.
But if you asked me whether to take money from 3^^^3 people compared to throwing Denmark into bankruptcy, I would choose the latter.
Math should override intuition. So unless you give me a model that you can convince me of that is more reasonable than adding up costs/utilities, I don’t think you will change my mind.
Now I see what is fundamentally wrong with the article and your reasoning, from MY perspective. You don't seem to understand the difference between a permanent sacrifice and a temporary one.
If we substitute index fingers for the dust specks, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don't need 3^^^3 for this one) people lose their index fingers. Because that is a permanent sacrifice. At least for now, we can't have fingers grow back just like that. Getting dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something that you will never fully heal from; it will ruin a person's life and cause permanent damage.
That’s ridiculous. So mild pains don’t count if they’re done to many different people?
Let’s give a more obvious example. It’s better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.
Scaling down, we can say that it’s better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.
Keep repeating this in your head (see how consistent it feels, how it makes sense).
Now just extrapolate to the instance that it’s better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn’t good enough, because (people on Earth) × (pain from a hair rip) < (people in New York) × (pain of being nuked). The math doesn’t add up in your straw-man example, unlike in the actual example given.
As a side note, you are also appealing to consequences.
(people on Earth) × (pain from a hair rip) < (people in New York) × (pain of being nuked)
I think Okeymaker was actually referring to all the people in the universe. While the number of “people” in the universe (defining a “person” as a conscious mind) isn’t a known number, let’s do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn’t nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker’s modus tollens and reply that I would prefer to nuke New York.)
Now, do you have any actual argument as to why the ‘badness’ function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)?
I don’t think you do. This is why this stuff strikes me as pseudomath. You don’t even state your premises let alone justify them.
You’re right, I don’t. And I do not really need it in this case.
What I need is a cost function C(e, n), where e is some event and n is the number of people being subjected to said event (i.e. each person gets their own), such that for some fixed ε > 0: for every n there is an m with C(e, n+m) > C(e, n) + ε. I guess we can limit e to “torture for 50 years” and “dust specks” so that this makes sense at all.
The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don’t think there should ever be a point where you can go “Meh, not much of a big deal, no matter how many more people suffer.”
If however the number of possible distinct people should be finite—even after taking into account level II and level III multiverses—due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.
Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head so as to be computing something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).
I don’t think that e.g. I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it was running in one instance; it doesn’t feel any stronger because there’s more ‘copies’ of it running in perfect unison, it can’t even tell the difference. It won’t affect the subjective experience if the CPUs running the same computation are slightly physically different).
edit: also, again, pseudomath, because you could have C(dustspeck, n) = 1 − 1/(n+1): your property holds, but the function is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.
Seriously, you people (the LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitive that with your epsilon it’s going to keep growing without limit, but that’s simply not true.
I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.
edit: also, again, pseudomath, because you could have C(dustspeck, n) = 1 − 1/(n+1): your property holds, but the function is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.
No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n and m. So I find numbers m_1, m_2, … such that C(dustspeck, m_j) > jε.
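The disagreement over quantifier order can be made concrete with a small sketch. The ε value and search limit below are arbitrary illustrations; under the reading defended here (ε fixed first, then for every n there must exist an m with C(n+m) > C(n) + ε), any function with the property is unbounded, so the bounded counterexample cannot satisfy it.

```python
# Sketch of the epsilon property under the reading "fix epsilon FIRST,
# then for every n some m must give a step larger than epsilon".
# A function with that property grows without bound, so the bounded
# counterexample C(n) = 1 - 1/(n + 1) cannot satisfy it.

EPS = 0.01  # arbitrary fixed epsilon, chosen for illustration

def C_bounded(n):
    """The proposed counterexample, bounded above by 1."""
    return 1 - 1 / (n + 1)

def has_eps_step(C, n, search_limit=10**6):
    """Is there an m (up to search_limit) with C(n+m) > C(n) + EPS?"""
    return any(C(n + m) > C(n) + EPS for m in range(1, search_limit))

# For small n the bounded function can still climb by more than EPS...
print(has_eps_step(C_bounded, 1))
# ...but once C(n) + EPS exceeds the bound 1, no m can ever work:
print(has_eps_step(C_bounded, 1000))
```

So the bounded function only satisfies the weaker property where ε may shrink as n grows, which is exactly the reading being rejected here.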
Besides which, even if I had somehow messed up, you’re not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
Well, in my view, some details of implementation of a computation are totally indiscernible ‘from the inside’ and thus make no difference to the subjective experiences, qualia, and the like.
I definitely don’t care if there’s 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where the copies think and perceive everything exactly the same over their lifetimes. I’m not sure how counting copies as distinct would cope with an infinity of copies anyway. You have torture of infinitely many persons vs. dust specks in inf × 3^^^3 persons; then what?
Albeit it would be quite hilarious to see if someone here picks up the idea and starts arguing that because they’re ‘important’, there must be a lot of copies of them in the future, and thus they are rightfully an utility monster.
Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn’t make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.
If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or the other. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.
Okay, here’s a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The “planet” part is obviously not possible, and the “distinct” part may or may not be possible, but for the purposes of a discussion about morality, it’s fine to assume these.)
Now let’s suppose that you are given a choice: (a) everyone on the planet can get a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who “wins” (or “loses”, more accurately) will be tortured for 50 years. Which would you choose?
If you are against torture (as you seem to be, from your comment), you will presumably choose (a). But now let’s suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a).
However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only offers a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).
Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a).
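The preference chain can be checked with a toy expected-utility model. All numbers below are made up for illustration: 1/3^^^3 is far too small to represent, so a large stand-in N is used, and the utilities are arbitrary; the only stipulation carried over from the argument is that the blink's extra capture risk exceeds the lottery odds.

```python
# A toy expected-utility version of the (a)/(b)/(c) argument.
# Assumptions (not from the thread): arbitrary utilities, and a
# stand-in N for 3^^^3, which cannot be represented directly.

U_SPECK = -1.0            # disutility of a dust speck in the eye
U_TORTURE = -1e9          # disutility of 50 years of torture

N = 1e15                  # toy stand-in for 3^^^3
p_lottery = 1 / N         # (b): chance of losing the planet-wide lottery
p_blink_extra = 2 / N     # (c): assumed extra capture risk from blinking,
                          #      stipulated to exceed the lottery odds

u_a = U_SPECK                    # (a): take the dust speck
u_c = p_blink_extra * U_TORTURE  # (c): blink, accept the extra risk
u_b = p_lottery * U_TORTURE      # (b): hold the lottery

print(u_c > u_a)  # prefer (c) to (a): blinking beats the speck
print(u_b > u_c)  # prefer (b) to (c): the lottery's risk is smaller
print(u_b > u_a)  # transitivity: (b) beats (a)
```

The conclusion is forced by transitivity of `>` on expected utilities; the substantive premises are the two probability comparisons, not the arithmetic.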
However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.
And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can’t stop time, so there’s no reason to prefer (c) to (b).
In fact, the probability goes up anyway in that fraction of a second, whether you blink or not.
Ah, sorry; I wasn’t clear. What I meant was that blinking increases your probability of being tortured beyond the normal “baseline” probability of torture. Obviously, even if you don’t blink, there’s still a probability of you being tortured. My claim is that blinking affects the probability of being tortured so that the probability is higher than it would be if you hadn’t blinked (since you can’t see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it would increase by more than 1/3^^^3. So basically what I’m saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.
The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery.
It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone’s eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you’d prefer to be in the present instead of the future.
The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.
However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3.
Both numbers seem basically arbitrarily small (probability 0).
Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.
Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.
Well, I mean, obviously a single person can’t be kidnapped more than once every 50 years (assuming that’s how long each torture session lasts), and certainly not several times a day, since he/she wouldn’t have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I’d say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.
Well, now I know you’re underestimating how big 3^^^3 is (and 5^^^5, too). But let’s say somehow you’re right, and the probability really is 1/5^^^5. All I have to do is modify the thought experiment so that the planet has 5^^^5 people instead of 3^^^3. There, problem solved.
So, new question: would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 5^^^5 people get dust specks in their eyes?
Agreed. Having lived in chronic pain supposedly worse than untrained childbirth, I’d say that even an hour has a seriously different capacity for suffering than a day, and a day different from a week. For me it breaks down somewhere, even when multiplying between the 10^15 for one day and the 10^21 for one minute. You can’t really feel THAT much pain in a minute that is comparable to a day, even across orders of magnitude? It’s just qualitatively different. Interested to hear pushback on this.
We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.
I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn’t mean we can’t find exponential factors that dominate it at every point at least along the “less than 50 years” range.
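As a toy illustration of “exponential factors dominate the nonlinearity”: suppose the badness of torturing one person grows nonlinearly with duration (the cubic function and the 10^100-per-second multiplier below are both invented for the example; everything is done in log10 to avoid overflow):

```python
import math

# Toy model (all numbers invented): badness of torturing one person for
# t seconds grows nonlinearly in duration, here as t**3.
def log10_badness(log10_people, t_seconds):
    """log10 of total badness = (number of people) * (t_seconds ** 3)."""
    return log10_people + 3 * math.log10(t_seconds)

DAY, MINUTE = 86_400, 60

# One person tortured for a day...
one_for_a_day = log10_badness(0, DAY)

# ...versus multiplying the number of people by 10^100 for every second
# shaved off the duration, all the way down from a day to a minute:
many_for_a_minute = log10_badness(100 * (DAY - MINUTE), MINUTE)

# The per-second multiplier utterly swamps the cubic nonlinearity.
print(one_for_a_day < many_for_a_minute)  # True
```

The point, in line with the comment above, is that any badness function whose step-to-step growth ratio stays bounded can be out-paced by a large enough per-step multiplier on the number of people.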
Absolutely. We’re bad at anything that we can’t easily imagine. Probably, for many people, intuition for “torture vs. dust specks” imagines a guy with a broken arm on one side, and a hundred people saying ‘ow’ on the other.
The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn’t take the number of people saved by an intervention into account; we just picture the typical effect on a single person.
What, I wonder, are the consequence of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don’t know how bad being in prison is, but it probably becomes much worse than I imagine if you’re there for 50 years, and we don’t think about that at all when arguing (or voting) about prison sentences.
My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter’s Law: however bad you imagine it to be, it’s worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I’ve yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.
My feeling is that situations like being caught for doing something horrendous might or might not be subject to psychological adjustment—that many situations of suffering are subject to psychological adjustment and so might actually not be as bad as we thought. But chronic intense pain is literally unadjustable to some degree—you can adjust to being in intense suffering, but that doesn’t make the intense suffering go away. That’s why I think it’s a special class of states of being—one that invokes action. What do people think?
That strikes me as a deliberate set up for a continuum fallacy.
Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?
I’d much prefer to have a [large number of exact copies of me] experience 1 second of headache than for one me to suffer it for a whole day, because those copies don’t have any mechanism that could compound their suffering. They aren’t even different subjectivities. I don’t see any reason why a hypothetical mind upload of me running on multiple redundant pieces of hardware should be a utility monster, if it can’t even tell subjectively how redundant its hardware is.
Some anaesthetics do something similar, preventing any new long-term memories, and people have no problem taking those for surgery. Something is still experiencing pain, but it’s not compounding into anything really bad (unless the drugs fail to work, or unless some form of long-term memory still works). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.
It’s not a continuum fallacy because I would accept “There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T” as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.
Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?
I’m not sure what you mean by this. I don’t believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that’s ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.
Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?
I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.
This is where the argument for choosing torture falls apart for me, really. I don’t think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the amount of disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years.
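One way to make this “asymptotic utility function over other people” precise is a saturating aggregation function. All the constants below are invented for illustration:

```python
import math

TORTURE = 1.0        # disutility of one 50-year torture (units arbitrary)
SPECK = 1e-12        # disutility of one dust speck (assumed tiny but nonzero)
CAP = 0.5 * TORTURE  # assumed asymptotic bound on total speck disutility

def total_speck_disutility(n):
    """Approximately SPECK * n for small n, but saturates below CAP,
    so no number of specks ever adds up to one torture."""
    return CAP * (1.0 - math.exp(-SPECK * n / CAP))

# Nearly additive for small n:
print(total_speck_disutility(10) / (10 * SPECK))  # ≈ 1.0
# Bounded even for absurdly many people:
print(total_speck_disutility(1e300) < TORTURE)    # True
```

Note that this is exactly the “comparability” that the parent comment asserts: a saturating aggregator refuses the transitive chain of huge-multiplier trades somewhere along the line.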
I’m not sure what you mean by this. I don’t believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that’s ridiculous.
I think he’s questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he’s referring to.
Personally, I think the problem stems from dust specks being such a minor inconvenience that it’s basically below the noise threshold. I’d almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don’t cause crashes or anything).
There’s the question of linearity, but if you use big enough numbers you can brute-force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly’s statement:
“There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T”
We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that
“For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds.”
Sure, the value of A may be larger than 10^100… But then, 3^^^3 is already vastly larger than 10^100. And if it weren’t big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post…
Well, you basically have to concede that “torture” wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.
The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky’s original post: All forms of suffering and inconvenience are represented by some real number quantity, with commensurate units to all other forms of suffering and inconvenience.
In other words, the “torture one person rather than allow 3^^^3 dust specks” answer wins, quite predictably, if and only if it is true that the ‘pain’ component of the utility function is measured in one and only one dimension.
So the question is, basically, do you measure your utility function in terms of a single input variable?
If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity… or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.
If you don’t, it raises a large complex of additional questions- but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.
It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.
One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose “utilons,” and one for… call them “red flags.” As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you’ve accumulated.
The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)… but the overall weighted average of all human moral reasoning suggests that people who think they’ve done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.
Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.
The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.
Take as a thought-experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to sudden population decrease. The equilibrium level is assumed to be stable in and of itself.
Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.
The world population circa 1000 was roughly 300 million, so we estimate that this process would kill 600 million people.
Now consider as an alternative, said aliens simply killing everyone, all at once. 300 million dead.
Which outcome is worse?
If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...
Well, the “linear harm” theory smacks into a wall. Because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site. The first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.
We are forced to conclude that there is a “total extinction” term in our calculation of harm, one that rises very rapidly in an ‘inflationary’ way. And it would do this as the destruction wrought upon humanity reaches and passes a level beyond which the species could not recover: the aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population less than a complete breeding population; but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.
Now, again, in itself this does not strictly invalidate the Torture/Specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some “big enough” number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.
But I can’t recall a similar argument for nonlinear harm measurement being presented in any of the comments I’ve sampled, and I thought it was interesting, so I wanted to mention it.
I mentioned duplication: that among 3^^^3 people, most have to be exact duplicates of one another from birth to death.
In your extinction example, once you have substantially more than the breeding population, extra people duplicate some aspects of your population (ability to breed) which causes you to find it less bad.
The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.
Not every non-linear relationship can be thwacked with bigger and bigger numbers...
For one thing N=1 T=1 trivially satisfies your condition…
I’m not sure what you mean by this.
I mean, suppose that you got yourself a function that takes in a description of what’s going on in a region of spacetime, and it spits out a real number of how bad it is.
Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people; for example, it could be counting distinct subjective experiences in there (otherwise a mind upload of me running on multiply redundant hardware is a utility monster, despite having an identical subjective experience to the same upload running once. That’s much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).
One thing that function can’t do is have the general property that f(a ∪ b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which is feeling anything.
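A toy version of a non-additive badness function along these lines counts each exact duplicate experience only once (the representation of “experiences” as labeled tuples is of course invented):

```python
def badness(region):
    """Toy non-additive badness: each *distinct* subjective experience in a
    region counts once, however many identical copies the region contains."""
    distinct = set(region)  # deduplicate exact copies
    return sum(pain for _label, pain in distinct)

a = [("upload-of-me, 1s headache", 5.0)]
b = [("upload-of-me, 1s headache", 5.0)]  # exact duplicate of a

# Additivity fails by design: f(a ∪ b) < f(a) + f(b).
print(badness(a + b), badness(a) + badness(b))  # 5.0 10.0
```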
Yes, if this is the case (would be nice if Eliezer confirmed it) I can see where the logic halts from my perspective :)
Explanatory example, if anyone cares:
Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.
I disagree. From my moral standpoint AND from my utility function, where I am a bystander who perceives all humans as a cooperating system and wants to minimize the damage to it, I think that it is better for 10^30 persons to put up with 1 second of intense pain than for a single one to have to survive a whole minute. It is much, much easier to recover from one second of pain than from being tortured for a minute.
And a dust speck is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.
Kind of a paradox of the heap. How many seconds of torture are still torture?
And 10^30 is really a lot of people. That’s what Eliezer meant with “scope insensitivity”. And all of them would be really grateful if you spared them their second of pain. Could be worth a minute of pain?
The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped and scratched his eye.
That’s fighting the hypothetical. Assume that the speck is such that the harm caused by the speck slightly outweighs the benefits.
You have to treat this option as a net win of 0 then, because you have no more info to go on, so the probabilities are 50⁄50.
Option A: Torture. Net win is negative. Option B: Dust specks. Net win is zero. Make your choice.
In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...).
I thought the original point was to focus just on the inconvenience of the dust, rather than simply propositioning that out of 3^^^3 people who were dustspecked, one person would’ve gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma, it’s merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.
If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well… you’re weird.
Consider the flip side of the argument: would you rather get a dust speck in your eye or have a 1 in 3^^^3 chance of being tortured for 50 years?
We take much greater risks without a moment’s thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause incredible pain to you for the rest of your life may be very small; but it’s probably not smaller than 1 in 10^100, let alone than 1 in 3^^^3.
But still, WHY is torture better? What is even the problem with the dust specks? Some of the people who get dust specks in their eyes will die in accidents caused by the dust particles? Is that why the dust specks are so bad? But then, have we considered the fact that dust specks may save an equal number of people, who would otherwise die? I really don’t get it and it bothers me a lot.
It’s not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), the cost you have from having it fly into your eye is not zero.
Now something not zero multiplied by a sufficiently large number will necessarily be larger than the cost of one human being’s life in torture.
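In symbols (my notation): if a single speck costs some ε > 0 and fifty years of torture has a finite cost C, then

```latex
N \cdot \varepsilon > C \qquad \text{for any } N > C / \varepsilon ,
```

and 3^^^3 exceeds C/ε for any remotely plausible values of ε and C.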
Now you are getting it completely wrong. You can’t add up harm from dust specks if it is happening to different people. Every individual has the capability to recover from it. Think about it. With that logic it is worse to rip a hair from every living being in the universe than to nuke New York. If people in charge reasoned that way we might have Armageddon in no time.
If each human death has only finite cost (we certainly act this way in our everyday lives, exchanging human lives for the convenience of driving around in cars, etc.), and if by “our universe” you do not mean only the observable universe but include the Level I multiverse, then yes, that is the whole point: a tiny amount of suffering multiplied by a sufficiently large number obviously becomes larger than the fixed cost of nuking New York.
Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don’t see why I should change it. “I don’t like the conclusions!!!” is not a valid objection.
If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we’ll have larger problems than the potential nuking of New York.
Okay, I was trying to learn from this post, but now I see that I have to try to explain stuff myself in order for this communication to become useful. When it comes to pain, it is hard to explain why one person’s great suffering is worse than many suffering very, very little if you don’t understand it by yourself. So let us change the currency from pain to money.
Let’s say that you and I need to fund a large plantation of algae in order to let the Earth’s population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damage from our actions. This project is really expensive, however… Should we:
a) Take one dollar from every person around the world earning a minimum wage who can still afford housing, food, etc. even if we take that one dollar?
or should we
b) Take all the money (instantly) from Denmark and watch it break down in bankruptcy?
If you ask me, it is obvious that we don’t want Denmark to go bankrupt just because it may annoy some people that they have to sacrifice 1 dollar.
The trouble is that there is a continuous sequence from
Take $1 from everyone
Take $1.01 from almost everyone
Take $1.02 from almost almost everyone
...
Take a lot of money from very few people (Denmark)
If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, that taking $20 each from 1⁄20 of the population of the world is good, but taking $20.01 each from slightly less than 1⁄20 of the population of the world is bad. Can you say that?
I think my last response starting with YES got lost somehow, so I will clarify here. I don’t follow the sequence because I don’t know where the critical limit is. Why? Because the critical limit depends on other factors which I can’t foresee. Read up on basic global economics. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out; but if I take a lot of money from one person, I make him poor. That is how economics works: you can recover from small losses easily, while some are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.
You may not know exactly where the limit is, but the point isn’t that the limit is at some exact number, the point is that there is a limit. There’s some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?
Yes I do.
So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don’t know it).
But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.
No, I don’t have to make the same conclusion about $20.00 versus $20.01. I left a safety margin when I said 1 dollar, since I don’t want to follow the sequence but am very, very sure that 1 dollar is a safe number. I don’t know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.
Yes, you do. You just admitted it, although the number might not be 20. And whether you admit it or not it logically follows from what you said up above.
Maybe I didn´t understand you the first time.
Your belief about $1 versus bankruptcy logically implies a similar belief about $20.00 versus $20.01 (or whatever the actual numbers are). You can’t just answer that that “might” be the case—if your original belief is as described, that is the case. You have to be willing to defend the logical consequence of what you said, not just defend the exact words that you said.
What do you mean by “whatever the actual numbers are”? Numbers for what? For the amount that it takes to ruin someone? As long as the individual donations don’t ruin the donors, I accept a higher donation from a smaller population. Is that what you mean?
I just wrote 20 because I have to write something, but there is a number. This number has a value, even if you don’t know it. Pretend I put the real number there instead of 20.
Yes, but still, what number? IF it is as I already suggested, the number for the amount of money that can be taken without ruining anyone, then I agree that we could take that amount of money instead of 1 dollar.
I don’t think you understand.
Your original statement about $1 versus bankruptcy logically implies that there is a number such that it is okay to take exactly that amount of money from a certain number of people, but wrong to take a very tiny amount more. Even though you don’t know exactly what this number is, you know that it exists. Because this number is a logical consequence of what you said, you must be able to justify having such a number.
Yes, in my last comment I agreed to it. There is such a number. I don’t think you understand my reasons why, which I already explained. It is wrong to take a tiny amount more, since that will ruin them. I can’t know exactly what that number is, since global and local economies aren’t that stable. Tapping out.
So you’re saying there exists such a number, such that taking that amount of money from someone wouldn’t ruin them, but taking that amount plus a tiny bit more (say, 1 cent) would?
YES, because that is how economics works! You can’t take a lot of money from ONE person without him getting poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain, you can recover from small losses after a time.
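The continuum sequence a few comments up (take $1 from everyone, $1.01 from almost everyone, ...) can be made concrete by holding the total amount raised fixed while shrinking the donor pool. The population figure below is a placeholder:

```python
POPULATION = 7_000_000_000  # placeholder world population
TOTAL = POPULATION * 1.0    # total raised by taking $1 from everyone

# Each step takes 1% more per person from proportionally fewer people,
# so every step in the sequence raises exactly the same total.
per_head, steps = 1.0, []
while per_head < 40.0:
    per_head *= 1.01
    steps.append((per_head, TOTAL / per_head))

# Adjacent steps differ by only 1%; if "$1 from everyone" is acceptable and
# "all of Denmark's money" is not, the acceptable/unacceptable boundary
# must fall between two of these nearly identical steps.
print(len(steps))
```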
If you think that 100C water is hot and 0C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.
My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.
No, because temperature is (very close to) a continuum, whereas good/bad is a binary. To see this more clearly, you can replace the question, “Is this action good or bad?” to “Would an omniscient, moral person choose to take this action?”, and you can instantly see the answer can only be “yes” (good) or “no” (bad).
(Of course, it’s not always clear which choice the answer is—hence why so many argue over it—but the answer has to be, in principle, either “yes” or “no”.)
First, I’m not talking about temperature, but about categories “hot” and “cold”.
Second, why in the world would good/bad be binary?
I have no idea—I don’t know what an omniscient person (aka God) will do, and in any case the answer is likely to be “depends on which morality we are talking about”.
Oh, and would an omniscient being call that water hot or cold?
You’ll need to define your terms for that, then. (And for the record, I don’t use the words “hot” and “cold” exclusively; I also use terms like “warm” or “cool” or “this might be a great temperature for a swimming pool, but it’s horrible for tea”.)
Also, if you weren’t talking about temperature, why bother mentioning degrees Celsius when talking about “hotness” and “coldness”? Clearly temperature has something to do with it, or else you wouldn’t have mentioned it, right?
Because you can always replace a question of goodness with the question “Would an omniscient, moral person choose to take this action?”.
Just because you have no idea what the answer could be doesn’t mean the true answer can fall outside the possible space of answers. For instance, you can’t answer the question of “Would an omniscient moral reasoner choose to take this action?” with something like “fish”, because that falls outside of the answer space. In fact, there are only two possible answers: “yes” or “no”. It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either “yes” or “no”, and that holds true even if you don’t know what the answer is.
There is only one “morality” as far as this discussion is concerned. There might be other “moralities” held by aliens or whatever, but the human CEV is just that: the human CEV. I don’t care about what the Babyeaters think is “moral”, or the Pebblesorters, or any other alien species you care to substitute—I am human, and so are the other participants in this discussion. The answer to the question “which morality are we talking about?” is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I’d rather talk game theory with Clippy than morality—it’s far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.
I don’t think so.
You’re making a circular argument—good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.
Really? Either I’m not a participant in this discussion or you’re wrong. See: a binary outcome :-D
I have no idea what the human CEV is and even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.
Name a third alternative that is actually an answer, as opposed to some sort of evasion (“it depends”), and I’ll concede the point.
Also, I’m aware that this isn’t your main point, but… how is the argument circular? I’m not saying something like, “It’s binary, therefore there are two possible states, therefore it’s binary”; I’m just saying “There are two possible states, therefore it’s binary.”
Are you human? (y/n)
Which part do you object to? The “coherent” part, the “extrapolated” part, or the “volition” part?
“Doesn’t matter”.
First of all you’re ignoring the existence of morally neutral questions. Should I scratch my butt? Lessee, would an omniscient perfectly moral being scratch his/her/its butt? Oh dear, I think we’re in trouble now… X-D
Second, you’re assuming atomicity of actions and that’s a bad assumption. In your world actions are very limited—they can be done or not done, but they cannot be done partially, they cannot be slightly modified or just done in a few different ways.
Third, you’re assuming away the uncertainty of the future and that also is a bad assumption. Proper actions for an omniscient being can very well be different from proper actions for someone who has to face uncertainty with respect to consequences.
Fourth, for the great majority of dilemmas in life (e.g. “Should I take this job?”, “Should I marry him/her?”, “Should I buy a new phone?”) the answer “what an omniscient moral being would choose” is perfectly useless.
The concept of CEV seems to me to be the direct equivalent of “God’s will”—handwaveable in any direction you wish while retaining enough vagueness to make specific discussions difficult or pretty much impossible. I think my biggest objection is to the “coherent” part, while also having great doubts about the “extrapolated” part.
(Side note: this conversation is taking a rather strange turn, but whatever.)
If its butt feels itchy, and it would prefer for its butt to not feel itchy, and the best way to make its butt not feel itchy is to scratch it, and there are no external moral consequences to its decision (like, say, someone threatening to kill 3^^^3 people iff it scratches its butt)… well, it’s increasing its own utility by scratching its butt, isn’t it? If it increases its own utility by doing so and doesn’t decrease net utility elsewhere, then that’s a net increase in utility. Scratch away, I say.
Sure. I agree I did just handwave a lot of stuff with respect to what an “action” is… but would you agree that, conditional on having a good definition of “action”, we can evaluate “actions” morally? (Moral by human standards, of course, not Pebblesorter standards.)
Agreed, but if you come up with a way to make good/moral decisions in the idealized situation of omniscience, you can generalize to uncertain situations simply by applying probability theory.
Again, I agree… but then, knowledge of the Banach-Tarski paradox isn’t of much use to most people.
Fair enough. I don’t have enough domain expertise to really analyze your position in depth, but at a glance, it seems reasonable.
The assumption that morality boils down to utility is a rather huge assumption :-)
Conditional on having a good definition of “action” and on having a good definition of “morally”.
I don’t think so, at least not “simply”. An omniscient being has no risk and no risk aversion, for example.
Morality is supposed to be useful for practical purposes. Heated discussions over how many angels can dance on the head of a pin got a pretty bad rap over the last few centuries… :-)
It’s not an assumption; it’s a normative statement I choose to endorse. If you have some other system, feel free to endorse that… but then we’ll be discussing morality, and not meta-morality or whatever system originally produced your objection to Jiro’s distinction between good and bad.
Agree.
Well, it could have risk aversion. It’s just that risk aversion never comes into play during its decision-making process due to its omniscience. Strip away that omniscience, and risk aversion very well might rear its head.
I disagree. Take the following two statements:
Morality, properly formalized, would be useful for practical purposes.
Morality is not currently properly formalized.
There is no contradiction in these two statements.
But they have a consequence: Morality currently is not useful for practical purposes.
That’s… an interesting position. Are you willing to live with it? X-)
You can, of course define morality in this particular way, but why would you do that?
By that definition, almost all actions are bad.
Also, why the heck do you think there exist words for “better” and “worse”?
True. I’m not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don’t select actions randomly, so that doesn’t seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don’t give you a car doesn’t prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?
Those are relative terms, meant to compare one action to another. That doesn’t mean you can’t classify an action as “good” or “bad”; for instance, if I decided to randomly select and kill 10 people today, that would be an unambiguously bad action, even if it would theoretically be “worse” if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking “Is this number bigger than that number?” and “Is this number positive or negative?”.
In this case I do not disagree with you. The number of people on earth is simply not large enough.
But if you asked me whether to take money from 3^^^3 people compared to throwing Denmark into bankruptcy, I would choose the latter.
Math should override intuition. So unless you give me a model that you can convince me of that is more reasonable than adding up costs/utilities, I don’t think you will change my mind.
Now I see what is fundamentally wrong with the article and your reasoning from MY perspective. You don’t seem to understand the difference between a permanent sacrifice and a temporary one.
If we substitute the dust specks with index fingers, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don’t need 3^^^3 for this one) persons lose their index fingers, because that is a permanent sacrifice. At least for now, we can’t have fingers grow back just like that. To get dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something you will never fully heal from; it will ruin a person’s life and cause permanent damage.
That’s ridiculous. So mild pains don’t count if they’re done to many different people?
Let’s give a more obvious example. It’s better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.
Scaling down the severity, we can say that it’s better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.
Keep repeating this in your head (see how consistent it feels, how it makes sense).
Now just extrapolate to the instance that it’s better to have 3^^^3 people have dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn’t good enough, because (people on Earth) × (pain from a hair rip) < (people in New York) × (pain of being nuked). The math doesn’t add up in your straw-man example, unlike with the actual example given.
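To make that inequality explicit, here is a minimal sketch with made-up magnitudes (none of these pain numbers come from the original post; they only illustrate the shape of the comparison):

```python
# Hypothetical magnitudes in shared "pain units" (all made up for illustration).
people_on_earth = 7 * 10**9
pain_per_hair_rip = 1              # tiny per-person harm
people_in_new_york = 8 * 10**6
pain_of_being_nuked = 10**12       # vastly larger harm, same units

total_hair = people_on_earth * pain_per_hair_rip       # 7 * 10**9
total_nuke = people_in_new_york * pain_of_being_nuked  # 8 * 10**18
# The straw man fails on its own terms: the summed hair-rip pain is far smaller.
assert total_hair < total_nuke
```

With Earth-scale populations the small harm never catches up; that only changes once the multiplier reaches 3^^^3-scale numbers.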
As a side note, you are also appealing to consequences.
I think Okeymaker was actually referring to all the people in the universe. While the number of “people” in the universe (defining a “person” as a conscious mind) isn’t a known number, let’s do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn’t nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker’s modus tollens and reply that I would prefer to nuke New York.)
Now, do you have any actual argument as to why the ‘badness’ function computed over a box containing two persons with a dust speck, is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people) ?
I don’t think you do. This is why this stuff strikes me as pseudomath. You don’t even state your premises let alone justify them.
You’re right, I don’t. And I do not really need it in this case.
What I need is a cost function C(e, n) - e is some event and n is the number of people being subjected to said event, i.e. everyone gets their own copy of it - where, for a fixed ε > 0: C(e, n+m) > C(e, n) + ε for some m. I guess we can limit e to “torture for 50 years” and “dust specks” so this generally makes sense at all.
The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don’t think there should ever be a point where you can go “Meh, not much of a big deal, no matter how many more people suffer.”
If however the number of possible distinct people should be finite—even after taking into account level II and level III multiverses—due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.
Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there’s only that many ways you can arrange the atoms in the volume of a human head, sufficiently distinct as to be computing something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).
I don’t think that e.g. I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it was running in one instance; it doesn’t feel any stronger because there’s more ‘copies’ of it running in perfect unison, it can’t even tell the difference. It won’t affect the subjective experience if the CPUs running the same computation are slightly physically different).
edit: also, again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds, but it is bounded, so if C(torture, 1) = 2 then you’ll never exceed it with dust specks.
Seriously, you people (LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitively that with your epsilon it’s going to keep growing without a limit, but that’s simply not true.
I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.
No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n,m. So I find numbers m_1,m_2,… such that C(dustspeck,m_j) > jε.
Besides which, even if I had somehow messed up, you’re not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
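The disagreement in this exchange is really about quantifier order, and a small sketch makes it concrete (both cost functions here are made up purely for illustration):

```python
# Two candidate cost functions for n people getting dust specks (illustrative only).
def bounded_cost(n):
    # The counterexample: strictly increasing but bounded above by 1.
    return 1 - 1 / (n + 1)

def unbounded_cost(n):
    # A cost satisfying the "fix eps first" reading: grows without bound.
    return 0.001 * n

eps = 0.5

# Reading 1 ("for every n there is m with C(n+m) > C(n) + eps"):
# fails for the bounded function once C(n) exceeds 1 - eps.
n = 10  # bounded_cost(10) ≈ 0.909, and no m can add another 0.5
assert all(bounded_cost(n + m) <= bounded_cost(n) + eps for m in range(1, 10**4))

# Reading 2 ("eps fixed in advance; for every j some m_j has C(m_j) > j*eps"):
# the unbounded function satisfies it, the bounded one does not.
for j in range(1, 10):
    m_j = next(m for m in range(1, 10**4) if unbounded_cost(m) > j * eps)
    assert unbounded_cost(m_j) > j * eps
# The bounded function never exceeds 4*eps = 2, since it is bounded by 1.
assert not any(bounded_cost(m) > 4 * eps for m in range(1, 10**4))
```

Under the first reading the bounded counterexample bites; under the second (fixing ε before n and m, as stated above) it does not.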
Well, in my view, some details of implementation of a computation are totally indiscernible ‘from the inside’ and thus make no difference to the subjective experiences, qualia, and the like.
I definitely don’t care if there’s 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3 , or the actual infinity (as the physics of our universe would suggest), where the copies are what thinks and perceives everything exactly the same over the lifetime. I’m not sure how counting copies as distinct would cope with an infinity of copies anyway. You have a torture of inf persons vs dust specks in inf*3^^^3 persons, then what?
Albeit it would be quite hilarious to see if someone here picks up the idea and starts arguing that because they’re ‘important’, there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.
Okeymaker, I think the argument is this:
Torturing one person for 50 years is better than torturing 10 persons for 40 years.
Torturing 10 persons for 40 years is better than torturing 1000 persons for 10 year.
Torturing 1000 persons for 10 years is better than torturing 1000000 persons for 1 year.
Torturing 10^6 persons for 1 year is better than torturing 10^9 persons for 1 month.
Torturing 10^9 persons for 1 month is better than torturing 10^12 persons for 1 week.
Torturing 10^12 persons for 1 week is better than torturing 10^15 persons for 1 day.
Torturing 10^15 persons for 1 day is better than torturing 10^18 persons for 1 hour.
Torturing 10^18 persons for 1 hour is better than torturing 10^21 persons for 1 minute.
Torturing 10^21 persons for 1 minute is better than torturing 10^30 persons for 1 second.
Torturing 10^30 persons for 1 second is better than torturing 10^100 persons for 1 millisecond.
Torturing for 1 millisecond is exactly what a dust speck does.
And if you disagree with the numbers, you can add a few millions. There is still plenty of space between 10^100 and 3^^^3.
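For anyone who wants to check the arithmetic, the chain above can be written out directly (durations converted to seconds, approximately; 3^^^3 itself is far too large to compute, so the last comparison is done with logarithms):

```python
import math

# The escalation chain from the comment: (number of people, duration in seconds).
chain = [
    (1,       50 * 3.15e7),   # 1 person, 50 years (1 year ≈ 3.15e7 s)
    (10,      40 * 3.15e7),
    (10**3,   10 * 3.15e7),
    (10**6,    1 * 3.15e7),
    (10**9,    2.6e6),        # 1 month
    (10**12,   6.05e5),       # 1 week
    (10**15,   8.64e4),       # 1 day
    (10**18,   3.6e3),        # 1 hour
    (10**21,   60),           # 1 minute
    (10**30,   1),            # 1 second
    (10**100,  1e-3),         # 1 millisecond
]

# At each step the population grows while the duration shrinks.
for (n0, t0), (n1, t1) in zip(chain, chain[1:]):
    assert n1 > n0 and t1 < t0

# Even the final 10^100 people is vanishingly small next to 3^^^3, which is a
# power tower of 3s of height 3^^3 = 7,625,597,484,987; log10(10^100) is just 100.
assert abs(math.log10(chain[-1][0]) - 100) < 1e-9
```

The whole chain only multiplies the population by 10^100, so there is indeed “plenty of space” left below 3^^^3 to absorb any quibbles about the individual steps.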
Torturing a person for 1 millisecond is not necessarily even a possibility. It doesn’t make any sense whatsoever; in 1 millisecond no interesting feedback loops can even close.
If we accept that torture is some class of computational processes that we wish to avoid, the badness definitely could be eating up your 3^^^3s in one way or the other. We have absolutely zero reason to expect linearity when some (however unknown) properties of a set of computations are involved. And the computational processes are not infinitely divisible into smaller lengths of time.
Okay, here’s a new argument for you (originally proposed by James Miller, and which I have yet to see adequately addressed): assume that you live on a planet with a population of 3^^^3 distinct people. (The “planet” part is obviously not possible, and the “distinct” part may or may not be possible, but for the purposes of a discussion about morality, it’s fine to assume these.)
Now let’s suppose that you are given a choice: (a) everyone on the planet can get a dust speck in the eye right now, or (b) the entire planet holds a lottery, and the one person who “wins” (or “loses”, more accurately) will be tortured for 50 years. Which would you choose?
If you are against torture (as you seem to be, from your comment), you will presumably choose (a). But now let’s suppose you are allowed to blink just before the dust speck enters your eye. Call this choice (c). Seeing as you probably prefer not having a dust speck in your eye to having one in your eye, you will most likely prefer (c) to (a).
However, 3^^^3 is just so unimaginably enormous that blinking for even the tiniest fraction of a second increases the probability that you will be captured by a madman during that blink and tortured for 50 years by more than 1/3^^^3. But since the lottery proposed in (b) only offers a 1/3^^^3 probability of being picked for the torture, (b) is preferable to (c).
Then, by the transitivity axiom, if you prefer (c) to (a) and (b) to (c), you must prefer (b) to (a).
Q.E.D.
And the time spent setting up a lottery and carrying out the drawing also increases the probability that someone else gets captured and tortured in the intervening time, far more than blinking would. In fact, the probability goes up anyway in that fraction of a second, whether you blink or not. You can’t stop time, so there’s no reason to prefer (c) to (b).
Ah, sorry; I wasn’t clear. What I meant was that blinking increases your probability of being tortured beyond the normal “baseline” probability of torture. Obviously, even if you don’t blink, there’s still a probability of you being tortured. My claim is that blinking affects the probability of being tortured so that the probability is higher than it would be if you hadn’t blinked (since you can’t see for a fraction of a second while blinking, leaving you ever-so-slightly more vulnerable than you would be with your eyes open), and moreover that it would increase by more than 1/3^^^3. So basically what I’m saying is that P(torture|blink) > P(torture|~blink) + 1/3^^^3.
Let me see if I get this straight:
The choice comes down to dust specks at time T or dust specks at time T + dT, where the interval dT allows you time to blink. The argument is that in the interval dT, the probability of being captured and tortured increases by an amount greater than your odds in the lottery.
It seems to me that the blinking is immaterial. If the question were whether to hold the lottery today or put dust in everyone’s eyes tomorrow, the argument should be unchanged. It appears to hinge on the notion that as time increases, so do the odds of something bad happening, and therefore you’d prefer to be in the present instead of the future.
The problem I have is that the future is going to happen anyway. Once the interval dT passes, the odds of someone being captured in that time will go up regardless of whether you chose the lottery or not.
This seems pretty unlikely to be true.
I think you underestimate the magnitude of 3^^^3 (and thereby overestimate the magnitude of 1/3^^^3).
Both numbers seem basically arbitrarily small (probability 0).
Since the planet has so many distinct people, and they blink more than once a day, you are essentially asserting that on that planet, multiple people are kidnapped and tortured for more than 50 years several times a day.
Well, I mean, obviously a single person can’t be kidnapped more than once every 50 years (assuming that’s how long each torture session lasts), and certainly not several times a day, since he/she wouldn’t have finished being tortured quickly enough to be kidnapped again. But yes, the general sentiment of your comment is correct, I’d say. The prospect of a planet with daily kidnappings and 50-year-long torture sessions may seem strange, but that sort of thing is just what you get when you have a population count of 3^^^3.
I worked it out back of the envelope, and the probability of being kidnapped when you blink is only 1/5^^^5.
Well, now I know you’re underestimating how big 3^^^3 is (and 5^^^5, too). But let’s say somehow you’re right, and the probability really is 1/5^^^5. All I have to do is modify the thought experiment so that the planet has 5^^^5 people instead of 3^^^3. There, problem solved.
So, new question: would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 5^^^5 people get dust specks in their eyes?
Agreed. Having lived in chronic pain supposedly worse than untrained childbirth, I’d say that even an hour has a seriously different capacity for suffering than a day, and a day than a week. For me it breaks down somewhere between the 10^15 for 1 day and the 10^21 for one minute, even with the multiplying. You can’t really feel THAT much pain in a minute that it becomes comparable to a day, even orders of magnitude apart? It’s just qualitatively different. Interested to hear pushback on this.
We could go from a day to a minute more slowly; for example, by increasing the number of people by a factor of a googolplex every time the torture time decreases by 1 second.
I absolutely agree that the length of torture increases how bad it is in nonlinear ways, but this doesn’t mean we can’t find exponential factors that dominate it at every point at least along the “less than 50 years” range.
Obviously. Just important to remember that extremity of suffering is something we frequently fail to think well about.
Absolutely. We’re bad at anything that we can’t easily imagine. Probably, for many people, intuition for “torture vs. dust specks” imagines a guy with a broken arm on one side, and a hundred people saying ‘ow’ on the other.
The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn’t take the number of people saved by an intervention into account; we just picture the typical effect on a single person.
What, I wonder, are the consequences of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don’t know how bad being in prison is, but it probably becomes much worse than I imagine if you’re there for 50 years, and we don’t think about that at all when arguing (or voting) about prison sentences.
My heuristic for dealing with such situations is somewhat reminiscent of Hofstadter’s Law: however bad you imagine it to be, it’s worse than that, even when you take the preceding statement into account. In principle, this recursion should go on forever and lead to you regarding any sufficiently unimaginably bad situation as infinitely bad, but in practice, I’ve yet to have it overflow, probably because your judgment spontaneously regresses back to your original (inaccurate) representation of the situation unless consciously corrected for.
Obligatory xkcd.
That would have been a better comic without the commentary in the last panel.
But the alt text is great X-)
My feeling is that situations like being caught doing something horrendous might or might not be subject to psychological adjustment—that many situations of suffering are subject to psychological adjustment and so might actually be not as bad as we thought. But chronic intense pain is literally unadjustable to some degree—you can adjust to being in intense suffering, but that doesn’t make the intense suffering go away. That’s why I think it’s a special class of states of being—one that invokes action. What do people think?
That strikes me as a deliberate set up for a continuum fallacy.
Also, why are you so sure that the number of people increases suffering in a linear way for even very large numbers? What is a number of people anyway?
I’d much prefer to have a [large number of exact copies of me] experience 1 second of headache than for one me to suffer it for a whole day, because those copies don’t have any mechanism which could compound their suffering. They aren’t even different subjectivities. I don’t see any reason why a hypothetical mind upload of me running on multiple redundant hardware should be a utility monster, if it can’t even tell subjectively how redundant its hardware is.
Some anaesthetics do something similar, preventing any new long term memories, people have no problem with taking those for surgery. Something’s still experiencing pain but it’s not compounding into anything really bad (unless the drugs fail to work, or unless some form of long term memory still works). A real example of a very strong preference for N independent experiences of 30 seconds of pain over 1 experience of 30*N seconds of pain.
It’s not a continuum fallacy because I would accept “There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T” as an answer. If, on the other hand, the comparison goes the other way for any values of N and T, then you have to accept the transitive closure of those comparisons as well.
I’m not sure what you mean by this. I don’t believe in linearity of suffering: that would be the claim that 2 people tortured for 1 day is the same as 1 person tortured for 2 days, and that’s ridiculous. I believe in comparability of suffering, which is the claim that for some value of N, N people tortured for 1 day is worse than 1 person tortured for 2 days.
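A sketch of how comparability plus transitivity yields a finite answer, assuming (hypothetically) that each one-second reduction in duration can be offset by some multiplier A on the number of people (A = 10^100 is just the figure from the statement quoted earlier):

```python
# Hypothetical: each step "N people for T seconds is worse than A*N people
# for T-1 seconds" chains transitively down to 1-second episodes.
A = 10**100          # assumed per-step population multiplier
T = 60               # start: 1 person suffering for 60 seconds

people, seconds = 1, T
comparisons = []
while seconds > 1:
    comparisons.append(((people, seconds), (people * A, seconds - 1)))
    people, seconds = people * A, seconds - 1

# Transitive closure: the head of the chain is worse than its tail.
assert comparisons[0][0] == (1, 60)
assert (people, seconds) == (A ** 59, 1)
# The final population is finite, though astronomical: 10^(100*59) = 10^5900.
assert people == 10 ** (100 * 59)
```

This is exactly "comparability without linearity": no claim that harms add, only that each pairwise comparison goes one way, which transitivity then composes into a finite bound.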
Regarding anaesthetics: I would prefer a memory inhibitor for a painful surgery to the absence of one, but I would still strongly prefer to feel less pain during the surgery even if I know I will not remember it one way or the other. Is this preference unusual?
This is where the argument for choosing torture falls apart for me, really. I don’t think there is any number of people getting dust specks in their eyes that would be worse than torturing one person for fifty years. I have to assume my utility function over other people is asymptotic; the amount of disutility of choosing to let even an infinity of people get dust specks in their eyes is still less than the disutility of one person getting tortured for fifty years.
I think he’s questioning the idea that two people getting dust specks in their eyes is twice the disutility of one person getting dust specks, and that is the linearity he’s referring to.
Personally, I think the problem stems from dust specks being such a minor inconvenience that it’s basically below the noise threshold. I’d almost be indifferent between choosing for nothing to happen or choosing for everyone on Earth to get dust specks (assuming they don’t cause crashes or anything).
There’s the question of linearity, but if you use big enough numbers you can brute-force any nonlinear relationship, as Yudkowsky correctly pointed out some years ago. Take Kindly’s statement:
“There is some pair (N,T) such that (N people tortured for T seconds) is worse than (10^100 N people tortured for T-1 seconds), but I don’t know the exact values of N and T”
We can imagine a world where this statement is true (probably for a value of T really close to 1). And we can imagine knowing the correct values of N and T in that world. But even then, if a critical condition is met, it will be true that
“For all values of N, and for all T>1, there exists a value of A such that torturing N people for T seconds is better than torturing A*N people for T-1 seconds.”
Sure, the value of A may be larger than 10^100… But then, 3^^^3 is already vastly larger than 10^100. And if it weren’t big enough we could just throw a bigger number at the problem; there is no upper bound on the size of conceivable real numbers. So if we grant the critical condition in question, as Yudkowsky does/did in the original post…
Well, you basically have to concede that “torture” wins the argument, because even if you say that [hugenumber] of dust specks does not equate to a half-century of torture, that is NOT you winning the argument. That is just you trying to bid up the price of half a century of torture.
The critical condition that must be met here is simple, and is an underlying assumption of Yudkowsky’s original post: All forms of suffering and inconvenience are represented by some real number quantity, with commensurate units to all other forms of suffering and inconvenience.
In other words, “torture one person rather than allow 3^^^3 dust specks” wins, quite predictably, if and only if it is true that the ‘pain’ component of the utility function is measured in one and only one dimension.
So the question is, basically, do you measure your utility function in terms of a single input variable?
If you do, then either you bury your head in the sand and develop a severe case of scope insensitivity… or you conclude that there has to be some number of dust specks worse than a single lifetime of torture.
If you don’t, it raises a large complex of additional questions- but so far as I know, there may well be space to construct coherent, rational systems of ethics in that realm of ideas.
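The reason the one-dimensional assumption forces this conclusion is the Archimedean property of the reals: any positive per-speck badness, multiplied by a large enough finite N, exceeds any fixed finite badness. A minimal sketch with made-up integer badness values:

```python
# Made-up badness values in shared units (this shared scale IS the assumption).
b_speck = 1            # badness of a single dust speck
B_torture = 10**21     # badness of 50 years of torture

# Archimedean property: some finite number of specks exceeds any finite badness.
N = B_torture // b_speck + 1
assert N * b_speck > B_torture
# And 3^^^3 is incomparably larger than any such finite N, so under the
# one-dimensional assumption "torture" wins automatically.
```

Reject the shared scale (make specks and torture incommensurable dimensions) and this step simply cannot be taken.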
It occurred to me to add something to my previous comments about the idea of harm being nonlinear, or something that we compute in multiple dimensions that are not commensurate.
One is that any deontological system of ethics automatically has at least two dimensions. One for general-purpose “utilons,” and one for… call them “red flags.” As soon as you accumulate even one red flag you are doing something capital-w Wrong in that system of ethics, regardless of the number of utilons you’ve accumulated.
The main argument justifying this is, of course, that you may think you have found a clever way to accumulate 3^^^3 utilons in exchange for a trivial amount of harm (torture ONLY one scapegoat!)… but the overall weighted average of all human moral reasoning suggests that people who think they’ve done this are usually wrong. Therefore, best to red-flag such methods, because they usually only sound clever.
Obviously, one may need to take this argument with a grain of salt, or 3^^^3 grains of salt. It depends on how strongly you feel bound to honor conclusions drawn by looking at the weighted average of past human decision-making.
The other observation that occurred to me is unrelated. It is about the idea of harm being nonlinear, which as I noted above is just plain not enough to invalidate the torture/specks argument by itself due to the ability to keep thwacking a nonlinear relationship with bigger numbers until it collapses.
Take as a thought-experiment an alternate Earth where, in the year 1000, population growth has stabilized at an equilibrium level, and will rise back to that equilibrium level in response to sudden population decrease. The equilibrium level is assumed to be stable in and of itself.
Imagine aliens arriving and killing 50% of all humans, chosen apparently at random. Then they wait until the population has returned to equilibrium (say, 150 years) and do it again. Then they repeat the process twice more.
The world population circa 1000 was about 300 million (roughly,) so we estimate that this process would kill 600 million people.
Now consider as an alternative, said aliens simply killing everyone, all at once. 300 million dead.
Which outcome is worse?
If harm is strictly linear, we would expect that one death plus one death is exactly as bad as two deaths. By the same logic, 300 megadeaths is only half as bad as 600 megadeaths, and if we inoculate ourselves against hyperbolic discounting...
Well, the “linear harm” theory smacks into a wall. Because it is very credible to claim that the extinction of the human species is much worse than merely twice as bad as the extinction of exactly half the human species. Many arguments can be presented, and no doubt have been presented on this very site. The first that comes to mind is that human extinction means the loss of all potential future value associated with humans, not just the loss of present value, or even the loss of some portion of the potential future.
We are forced to conclude that there is a “total extinction” term in our calculation of harm, one that rises very rapidly in an ‘inflationary’ way. And it would do this as the destruction wrought upon humanity reaches and passes a level beyond which the species could not recover- the aliens killing all humans except one is not noticeably better than killing all of them, nor is sparing any population less than a complete breeding population, but once a breeding population is spared, there is a fairly sudden drop in the total quantity of harm.
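That threshold structure can be sketched as a toy harm function (every number here is hypothetical; the point is only the sudden jump once the survivors fall below a breeding population):

```python
# Toy harm function with an extinction threshold (all numbers hypothetical).
POPULATION = 300_000_000        # world population circa year 1000
BREEDING_MIN = 1_000            # assumed minimum viable breeding population
EXTINCTION_PREMIUM = 10**12     # loss of all future value, in "death units"

def harm(deaths):
    survivors = POPULATION - deaths
    base = deaths                    # linear term: each death counts once
    if survivors < BREEDING_MIN:
        base += EXTINCTION_PREMIUM   # the species cannot recover
    return base

# Four 50% culls with recovery in between: 600M total deaths, no extinction.
assert 4 * harm(POPULATION // 2) < harm(POPULATION)
# Killing everyone is far worse than twice killing half, despite fewer deaths.
assert harm(POPULATION) > 2 * harm(POPULATION // 2)
```

The function is linear almost everywhere; the single discontinuity at the breeding threshold is what makes 300 megadeaths worse than 600.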
Now, again, in itself this does not strictly invalidate the Torture/Specks argument. Assuming that the harm associated with human extinction (or torturing one person) is any finite amount that could conceivably be equalled by adding up a finite number of specks in eyes, then by definition there is some “big enough” number of specks that the aliens would rationally decide to wipe out humanity rather than accept that many specks in that many eyes.
But I can’t recall a similar argument for nonlinear harm measurement being presented in any of the comments I’ve sampled, and it seemed interesting, so I wanted to mention it.
I mentioned duplication: among 3^^^3 people, most have to be exact duplicates of one another, birth to death.
In your extinction example, once you have substantially more than the breeding population, extra people duplicate some aspects of your population (the ability to breed), which causes you to find it less bad.
Not every non-linear relationship can be thwacked with bigger and bigger numbers...
For one thing, N=1, T=1 trivially satisfies your condition…
I mean, suppose that you got yourself a function that takes in a description of what’s going on in a region of spacetime, and it spits out a real number of how bad it is.
Now, that function can do all sorts of perfectly reasonable things that could make it asymptotic for large numbers of people. For example, it could be counting distinct subjective experiences in there (otherwise a mind upload running on multiply redundant hardware is a utility monster, despite having a subjective experience identical to the same upload running once; that’s much sillier than the usual utility monster, which at least feels much stronger feelings). This would impose a finite limit (for brains of finite complexity).
One thing that function can’t do is have the general property that f(a ∪ b) = f(a) + f(b), because then we could just subdivide our space into individual atoms, none of which are feeling anything.
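A quick sketch of the “count distinct subjective experiences” idea, using a plain string as a stand-in for a full description of what’s going on in a region. Note that such a function is automatically subadditive rather than additive over overlapping regions:

```python
# Sketch: if "badness" counts distinct subjective experiences rather than
# physical instantiations, redundant copies of the same mind add nothing.
# Strings stand in for full descriptions of experiences (toy assumption).
def badness(experiences):
    return len(set(experiences))   # one unit per *distinct* experience

single_upload = ["upload-1 suffering"]
redundant_upload = ["upload-1 suffering"] * 1000   # same mind, 1000 servers
assert badness(redundant_upload) == badness(single_upload)  # no utility monster

# And it is not additive: regions sharing an experience don't double-count.
a = ["alice suffering", "bob suffering"]
b = ["bob suffering", "carol suffering"]
assert badness(a + b) < badness(a) + badness(b)
```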
Obviously I only meant to consider values of T and N that actually occur in the argument we were both talking about.
Well, I’m not sure what the point is, then, or what you’re trying to infer from it.
Yes, if this is the case (would be nice if Eliezer confirmed it) I can see where the logic halts from my perspective :)
Explanatory example, if anyone cares:
I disagree. From my moral standpoint AND from my utility function, wherein I am a bystander who perceives all humans as a cooperating system and wants to minimize the damage to it, I think that it is better for 10^30 people to put up with 1 second of intense pain than for a single person to have to survive a whole minute of it. It is much, much easier to recover from one second of pain than from being tortured for a minute.
And a dust speck is virtually harmless. The potential harm it may cause should at least POSSIBLY be outweighed by the benefits, e.g. someone not being run over by a car because he stopped to scratch his eye.
Okay, so let’s zoom in here. What is preferable?
Torturing 1 person for 60 seconds
Torturing 100 people for 59 seconds
Torturing 10,000 people for 58 seconds
etc.
Kind of a paradox of the heap. How many seconds of torture are still torture?
And 10^30 is really a lot of people. That’s what Eliezer meant with “scope insensitivity”. And all of them would be really grateful if you spared them their second of pain. Could be worth a minute of pain?
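The arithmetic behind this escalation is worth spelling out: at step k, 100^k people are each tortured for (60 − k) seconds, so the aggregate person-seconds explode even as the per-person duration shrinks by one second per step. A minimal sketch:

```python
# At step k of the escalation, 100**k people are tortured (60 - k) seconds.
def person_seconds(k):
    return 100**k * (60 - k)

assert person_seconds(0) == 60        # 1 person, 60 s
assert person_seconds(1) == 5_900     # 100 people, 59 s
assert person_seconds(2) == 580_000   # 10,000 people, 58 s
# Each step multiplies the aggregate by nearly 100:
assert person_seconds(2) / person_seconds(1) > 98
```

Whether aggregate person-seconds is the right quantity to compare is, of course, exactly what the two commenters disagree about.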
That’s fighting the hypothetical. Assume that the speck is such that the harm caused by the speck slightly outweighs the benefits.
Or the benefits could slightly outweigh the harm.
You have to treat this option as a net win of 0 then, because you have no more info to go on, so the probabilities are 50⁄50. Option A: Torture. Net win is negative. Option B: Dust specks. Net win is zero. Make your choice.
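In expected-value terms, the 50/50 reasoning looks like this (epsilon is an arbitrary stand-in for the tiny net effect of one speck):

```python
# 50/50 over a tiny harm vs. an equally tiny benefit: expected net is zero.
epsilon = 1e-9                    # arbitrary tiny net effect of one speck
speck_expected = 0.5 * (+epsilon) + 0.5 * (-epsilon)
torture_expected = -1.0           # any strictly negative net win

assert speck_expected == 0.0
assert torture_expected < speck_expected   # so specks win under this model
```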
In the Least Convenient Possible World of this hypothetical, every dust speck causes a constant small amount of harm with no knock-on effects (no avoiding buses, no crashing cars...).
I thought the original point was to focus just on the inconvenience of the dust, rather than simply positing that out of 3^^^3 people who were dustspecked, one person would’ve gotten something worse than 50 years of torture as a consequence of the dust speck. The latter is not even an ethical dilemma; it’s merely an (entirely baseless but somewhat plausible) assertion about the consequences of dust specks in the eyes.
exactly! No knock-on effects. Perhaps you meant to comment on the grandparent(great-grandparent? do I measure from this post or your post?) instead?
yeah, clicked wrong button.
If I told you that a dust speck was about to float into your left eye in the next second, would you (a) take it full in the eye, or (b) blink to keep it out? If you say you would blink, you are implicitly acknowledging that you prefer not getting specked to getting specked, and thereby conceding that getting specked is worse than not getting specked. If you would take it full in the eye, well… you’re weird.
Consider the flip side of the argument: would you rather get a dust speck in your eye or have a 1 in 3^^^3 chance of being tortured for 50 years?
We take much greater risks without a moment’s thought every time we cross the street. The chance that a car comes out of nowhere and hits you in just the right way to both paralyze you and cause incredible pain to you for the rest of your life may be very small; but it’s probably not smaller than 1 in 10^100, let alone than 1 in 3^^^3.
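The comparison can be sketched as an expected-harm calculation. 3^^^3 is far too large to represent directly, so 1e-100 below serves as a (wildly generous) upper bound on 1/3^^^3, and the harm magnitudes are arbitrary illustrative units:

```python
# Expected-harm comparison with made-up magnitudes. p_torture is a huge
# OVERestimate of 1/3^^^3, and the conclusion still holds.
speck_harm = 1.0                  # harm of one speck (arbitrary units)
torture_harm = 1e15               # harm of 50 years of torture (arbitrary)
p_torture = 1e-100                # vastly larger than 1/3^^^3

expected_torture_harm = p_torture * torture_harm
assert expected_torture_harm < speck_harm   # the gamble beats the speck
```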