That is counter-intuitive, but isn’t the anti-torture answer something analogous to sets? That is:
R(0) is the set of all real numbers. We know that it is an uncountable infinity, and therefore larger than any countable infinity. Set R(n) is R(0) with n elements removed. As I understand it, so long as n is a countable infinity or smaller, R(n) is equal in size to R(0). [EDITED TO REMOVE INCORRECT MATH.]
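The set-theoretic fact being appealed to here is standard and can be stated precisely; a sketch (my notation, not from the thread):

```latex
% Claim: for any countable $S \subseteq \mathbb{R}$,
% $|\mathbb{R} \setminus S| = |\mathbb{R}|$.
%
% Sketch: pick a countably infinite $T \subseteq \mathbb{R} \setminus S$.
% Then $S \cup T$ is countably infinite, so there is a bijection
% $f : S \cup T \to T$. Define $g : \mathbb{R} \to \mathbb{R} \setminus S$ by
% $g(x) = f(x)$ for $x \in S \cup T$ and $g(x) = x$ otherwise.
% Then $g$ is a bijection, so the two sets have the same cardinality.
```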
To cash out the analogy, it might be that certain torture scenarios are preferable to other torture scenarios, but all non-torture scenarios are less bad than all torture scenarios. As you incrementally reduce the amount of suffering in your example, you eventually remove so much that the scenario is no longer torture. In notation somewhat like yours, Y(50 yr) is the badness of imposing the pain you describe on one person for 50 years. We all seem to agree that Y(50 yr) is torture. I assert something like: Y(50 yr − A) is torture if Y(A) would not be torture.
I agree that you can’t say that suffering is non-linear (that is, think that dust-specks is preferable to torture) without believing something like what I laid out.
Logos, those “secondary” effects you point to are the properties that make Y(A) torture (or not).
This is consistent. But it induces further difficulties in the standard utilitarian decision process.
To express the idea that all non-torture scenarios are less bad than all torture scenarios with a utility function, there must be some (negative) boundary B between the two sets of scenarios, such that u(any torture scenario) < B and u(any non-torture scenario) > B. Now B is either finite or infinite; this matters once probabilities come into play.
First consider the case where B is finite. This is the logistic-curve approach: it means that any number of slightly super-boundary inconveniences happening to different people is preferable to a single case of slightly sub-boundary torture. I know of no natural physiological boundary of this sort; if the severity of pain can change continuously, which seems to be the case, sub-boundary and super-boundary experiences may be effectively indistinguishable. Are you willing to accept this?
Perhaps you are. Now this takes an interesting turn. Consider a pair of scenarios: X, which is slightly sub-boundary (thus “torture”), with utility u(X) = B − ε (ε positive), and Y, which is non-torture, with u(Y) = B + ε. Utilities may behave non-linearly with respect to the scenario-describing parameters, but expected utilities have to be linear with respect to probabilities; anything else means throwing utilitarianism out of the window. A utility maximiser should therefore be indifferent between scenarios X′ and Y′, where X′ = X with probability p (and nothing otherwise) and Y′ = Y with probability p(B − ε)/(B + ε).
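This indifference can be checked numerically. A minimal sketch, where B, ε, and p are arbitrary assumed values and the alternative outcome of each lottery is taken to have utility 0:

```python
B = -100.0   # hypothetical finite boundary utility (an assumption)
eps = 0.01   # distance of each scenario from the boundary
p = 0.5      # probability that X' yields X; otherwise nothing happens (utility 0)

q = p * (B - eps) / (B + eps)   # probability that Y' yields Y

EU_X = p * (B - eps)   # expected utility of lottery X'
EU_Y = q * (B + eps)   # expected utility of lottery Y'
print(abs(EU_X - EU_Y) < 1e-9, q > p)  # equal expected utilities, with q above p
```

Note that since B is negative, q comes out slightly larger than p: the milder scenario must be made slightly more probable to match the worse one.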
Let’s say one of the boundary cases is, for the sake of concreteness, giving a person a 7.5-second electric shock of a given strength. You may then prefer giving a billion people a 7.4999 s shock to one person getting a 7.5001 s shock, but at the same time you would prefer, say, a 99.98% chance of one person getting the 7.5001 s shock to a 99.99% chance of one person getting the 7.4999 s shock. Thus, although the torture/non-torture boundary seems strict, it is easily crossed once uncertainty is taken into account.
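Plugging in illustrative utilities (assumed placeholders, chosen only so that both shocks sit very close to a finite boundary B) reproduces the reversal:

```python
B = -100.0               # hypothetical finite boundary utility (an assumption)
eps = 0.001
u_torture = B - eps      # 7.5001 s shock: just past the torture boundary
u_non = B + eps          # 7.4999 s shock: just short of it

# One person, certain outcomes: the non-torture shock is (barely) preferable...
assert u_non > u_torture

# ...but under uncertainty the ordering of expected utilities flips:
EU_torture = 0.9998 * u_torture   # 99.98% chance of the 7.5001 s shock
EU_non = 0.9999 * u_non           # 99.99% chance of the 7.4999 s shock
print(EU_torture > EU_non)        # True: the boundary is crossed
```

The reversal appears whenever ε is small relative to |B|; with the numbers above, the extra 0.01% of probability outweighs the tiny utility gap at the boundary.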
(This problem can be alleviated by postulating a gap in utilities between the worst non-torture scenario and the best torture scenario.)
If this still doesn’t sound crazy enough, note that if there already are people experiencing an almost-boundary (but still non-torturous) scenario, decisions over completely unrelated options get distorted, since your utility cannot fall below B, near which it already sits. Assume one presently has utility near B (which must be achievable by adjusting the number of almost-tortured people and the severity of their inconvenience, which is nevertheless still not torture; nobody is tortured as far as you know; call this adjustment A). Now consider decisions about money. If W is one’s total wealth, then u(W, A) must be convex with respect to W when its value is not much greater than B, since no everywhere-concave function can be bounded from below. This may invert the usual risk aversion due to diminishing marginal utility! (Even assuming that you can do literally nothing to change A.)
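A toy bounded-below utility illustrates the inverted risk attitude near the bound. The functional form u(W) = B + e^{kW} and all the numbers are my assumptions, not from the comment:

```python
import math

B, k = -100.0, 0.1

def u(W):
    # Increasing in wealth W, convex, and bounded below: u(W) -> B as W -> -inf.
    return B + math.exp(k * W)

W0 = -50.0                                    # wealth level where u(W0) sits near B
sure = u(W0)                                  # utility of keeping W0 for certain
gamble = 0.5 * u(W0 - 10) + 0.5 * u(W0 + 10)  # fair coin flip of +/- 10 wealth
print(gamble > sure)  # True: Jensen's inequality for convex u implies risk seeking
```

This is the opposite of the usual result for a concave utility of wealth, where the sure thing beats any fair gamble.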
(This isn’t alleviated by a utility gap between torture and non-torture.)
Now consider the second case, B = −∞. This raises another problem: torture becomes the sole concern of one’s decisions. Even if p(torture) = 1/3^^^3, the expected utility is negative infinity, and all non-torturous concerns become strictly irrelevant. One can formulate this mathematically as a two-dimensional vector (u1, u2) representing utility: the first component u1 measures utility from torture, and u2 measures everything else. Since you have decided never to trade torture for non-torture, you should choose the option whose expected u1 is greater; only when u1(X) and u1(Y) are exactly equal does whether u2(X) > u2(Y) become relevant. You would therefore find yourself asking questions like “if I buy this banana, will it increase the chance of people getting tortured?” I don’t think you strive to apply this decision theory consistently.
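With B = −∞ the decision rule is effectively lexicographic. A minimal sketch (function and variable names are mine, for illustration only):

```python
def prefer(a, b):
    """Pick between options a and b, each an (expected u1, expected u2) pair:
    compare torture-utility u1 first; u2 breaks exact ties only."""
    if a[0] != b[0]:
        return a if a[0] > b[0] else b
    return a if a[1] >= b[1] else b

safe = (0.0, 0.0)        # no torture risk, no mundane gain
banana = (-1e-30, 1e9)   # vanishing extra torture risk, enormous mundane gain
print(prefer(safe, banana))  # (0.0, 0.0): no finite u2 can offset any u1 loss
```

No such lexicographic preference ordering can be represented by a single real-valued utility function, which is exactly why it sits badly with standard utilitarian machinery.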
(This is related to the distinction between sacred and unsacred values, which is a fairly standard source of inconsistencies in intuitive decisions.)
Your reference to sacred values reminded me of Spheres of Justice. In brief, Walzer argues that the best way of describing our morality is by noting which values may not be exchanged for which other values. For example, it is illicit to trade material wealth for political power over others (i.e., bribery is bad), or to trade lives for relief from suffering. But it is permissible to trade within a sphere (money for ice cream) or between some spheres (dowries might be a historical example, but I can’t think of a modern one just this moment).
It seems like your post is a mathematical demonstration that I cannot believe the Spheres of Justice argument and also be a utilitarian. Hadn’t thought about it that way before.
I hear your general point, and I don’t dispute it.
But I think your set theory analogy isn’t quite right. Consider the set R − [0,1], i.e., all real numbers less than 0 or greater than 1. This is still uncountably infinite and has the same cardinality as R, even though I removed the set [0,1], which is itself uncountably infinite.
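This cardinality claim is also standard; a sketch (my notation):

```latex
% $\mathbb{R} \setminus [0,1] = (-\infty, 0) \cup (1, \infty)$.
% Every nonempty open interval has cardinality $2^{\aleph_0}$: for instance,
% $x \mapsto \tan\!\left(\pi x - \tfrac{\pi}{2}\right)$ maps $(0,1)$
% bijectively onto $\mathbb{R}$. Hence
% $|\mathbb{R} \setminus [0,1]| = |\mathbb{R}|$,
% even though the removed set $[0,1]$ is itself uncountable.
```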
Edited to remove improper math. Thanks.