Bob: Sure, if you specify a disutility function that mandates lots-o’-specks to be worse than torture, decision theory will prefer torture. But that is literally begging the question, since you can write down a utility function to come to any conclusion you like. On what basis are you choosing that functional form? That’s where the actual moral reasoning goes. For instance, here’s a disutility function, without any of your dreaded asymptotes, that strictly prefers specks to torture:
U(T,S) = ST + T
Freaking out about asymptotes reflects a basic misunderstanding of decision theory, though. If you’ve got a rational preference relation, then you can always give a bounded utility function. (For example, the function I wrote above can be transformed to U(T,S) = (ST + T)/(ST + T + 1), which always gives you a function in [0,1), and gives rise to the same preference relation as the original, since x/(x+1) is strictly increasing.) If you insist on unbounded utilities, then you become subject to a Dutch book (see Vann McGee’s “An Airtight Dutch Book”). Attempts to salvage unbounded utility pretty much always end up accepting certain Dutch books as rational, which means you’ve rejected the whole decision-theoretic justification of Bayesian probability theory. Now, the existence of bounds means that if you have a monotone utility function, then its limits will be well-defined.
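To see why the squashing step is harmless, here is a minimal numerical sketch. The raw disutility below is a hypothetical stand-in (not any specific function from this thread): because x/(x+1) is strictly increasing on [0, ∞), ranking outcomes by the bounded version always reproduces the ranking given by the raw version.

```python
# The map g(x) = x / (x + 1) squashes [0, infinity) into [0, 1)
# without changing which of two outcomes has the larger disutility.

def bound(x):
    """Strictly increasing map from nonnegative disutility into [0, 1)."""
    return x / (x + 1.0)

def raw_disutility(t, s):
    # Hypothetical unbounded disutility, monotone in both arguments.
    return 3.0 * t + 0.5 * s

outcomes = [(0, 10), (1, 0), (2, 5), (0, 1000)]
raw_order = sorted(outcomes, key=lambda o: raw_disutility(*o))
bounded_order = sorted(outcomes, key=lambda o: bound(raw_disutility(*o)))

assert raw_order == bounded_order  # same preference relation
assert all(0 <= bound(raw_disutility(t, s)) < 1 for (t, s) in outcomes)
```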
So asymptotic reasoning about monotonically increasing harms is entirely legit, and you can’t rule it out of bounds without giving up on either Bayesianism or rational preferences.
g: that’s exactly what I’m saying. In fact, you can show something stronger than that.
Suppose that we have an agent with rational preferences, and who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes, and fewer people being tortured. This seems to be something everyone agrees on.
Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the number of people being tortured, and monotone in the number of people with dust specks in their eyes. The combination of a bound on the utility function, plus the monotonicity of their preferences, means that the utility function has a well-defined limit as the number of people with specks in their eyes goes to infinity. However, the existence of the limit doesn’t tell you what it is—it may be any value within the bounds.
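A numerical illustration of that last point, using two made-up disutility functions of the speck count alone: both are monotone and bounded by 1, so both have limits, but the bound doesn’t pin down which limit.

```python
# Two hypothetical disutility functions of the speck count S alone.
# Both are monotone increasing and bounded by 1, so each has a limit
# as S -> infinity, but the limits differ.

def u_high(s):
    return s / (s + 1.0)          # limit is 1

def u_low(s):
    return 0.5 * s / (s + 1.0)    # limit is 1/2

big = 10**9
assert u_high(big) > 0.999999          # approaching 1
assert abs(u_low(big) - 0.5) < 1e-6    # approaching 1/2
assert u_high(5) < u_high(6) and u_low(5) < u_low(6)  # monotone
```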
Concretely, we can supply disutility functions that justify either choice and are consistent with minimal ethics. (I’ll assume the bound is the [0,1] interval.) In particular, all disutility functions of the form:
U(T, S) = A(T/(T+1)) + B(S/(S+1))
satisfy minimal ethics, for all positive A and B with A + B < 1. Since A and B are free parameters, you can choose them to make either specks or torture preferred: comparing one instance of torture (disutility A/2) against arbitrarily many specks (disutility approaching B), torture wins for large enough S exactly when B > A/2.
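Treating the choice as between the world (T = 1, S = 0) and the world (T = 0, S = astronomically large), here is a quick check that both verdicts are reachable inside this family. The specific values of A and B below are arbitrary picks satisfying A + B < 1:

```python
def u(t, s, a, b):
    """Disutility A*(T/(T+1)) + B*(S/(S+1)), bounded above by A + B."""
    return a * t / (t + 1.0) + b * s / (s + 1.0)

BIG = 10**12  # stand-in for "an astronomically large number of specks"

# B < A/2: the specks world always has lower disutility, so pick specks.
assert u(0, BIG, 0.9, 0.05) < u(1, 0, 0.9, 0.05)

# B > A/2: for enough specks the torture world has lower disutility.
assert u(0, BIG, 0.2, 0.7) > u(1, 0, 0.2, 0.7)
```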
Likewise, Robin and Eliezer seem to have an implicit disutility function of the form
U_ER(T, S) = AT + BS
If you normalize to get [0,1] bounds, you can make something up like
U’(T, S) = (AT + BS)/(AT + BS + 1).
Now, note U’ also satisfies minimal ethics. And here U’(1, 0) = A/(A+1), while U’(0, S) goes to one as S goes to infinity, so for enough specks the specks world has the greater disutility no matter what A and B are. That’s why they tend to have the intuition that torture is the right answer. (Incidentally, this disproves my suggestion that bounded utility functions vitiate the force of E’s argument; but the bounds proved helpful in the end by letting us use limit analysis. So my focus on this point was accidentally correct!)
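A numerical check that under U’ the torture world eventually wins for any positive coefficients; the (A, B) pairs below are arbitrary picks:

```python
def u_prime(t, s, a, b):
    """Bounded version of the linear disutility A*T + B*S."""
    x = a * t + b * s
    return x / (x + 1.0)

BIG = 10**12  # stand-in for "an astronomically large number of specks"

for a, b in [(1.0, 1.0), (1000.0, 0.001), (0.01, 50.0)]:
    torture = u_prime(1, 0, a, b)    # equals a / (a + 1), strictly below 1
    specks = u_prime(0, BIG, a, b)   # tends to 1 as S grows
    assert specks > torture          # enough specks beats one torture
```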
Now, consider yet another disutility function,
U″(T,S) = (ST + T)/(ST + T + 1)
This is also minimally ethical (the preference for fewer specks holds only weakly when T = 0), and it doesn’t have any of the free parameters that Tom didn’t like. But this function always implies a preference for any number of dust specks to even a single instance of torture, since U″(0, S) = 0 < U″(1, 0) = 1/2 for every S.
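A quick check of the claimed preference. I write the disutility as (S*T + T)/(S*T + T + 1), i.e. T·(S+1) squashed into [0,1):

```python
def u_dd(t, s):
    """Torture-weighted disutility T*(S+1), squashed into [0, 1)."""
    x = s * t + t
    return x / (x + 1.0)

# No number of specks alone ever reaches the disutility of one torture.
for s in [0, 1, 10**6, 10**100]:
    assert u_dd(0, s) < u_dd(1, 0)   # u_dd(1, 0) == 0.5

# Still monotone: more specks or more torture never lowers disutility.
assert u_dd(1, 5) < u_dd(1, 6)
assert u_dd(1, 3) < u_dd(2, 3)
```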
Basically, if you think the answer is obvious, then you have to make some additional assumptions about the structure of the aggregate preference relation.