I perceive that I’ve neglected to convey the existence of a gigantic body of supporting evidence.
There is much counterevidence in the literature as well. More importantly, the literature does not clearly establish the extent to which people are scope-sensitive when they are (which is often), nor what normative sensitivity would look like given the complexities of the decision problems and of human preferences. It doesn’t tell us how scope-sensitive self-identified total-utilitarian-style altruists in particular are, nor what methods they use to assign WTP values. Whether their decisions are normative by their professed optimization criteria, and more importantly whether they are more or less normative than a naive “shut up and multiply the salient numbers” approach, is unknown.
A naive total utilitarian approach is clearly lacking. There are always hidden, unmentioned complexities like fixed ecological niche sizes: 50 saved birds will quickly breed back up to fill a niche, whereas 5,000 saved birds would already be at its limits. Saving 1,000 out of 50,000 human lives is substantially different from saving 1,000 out of 2,000: realistic attempts at the two will look very different. Logarithmic scaling is common and can arise naturally from (implicit) consideration of conjunctions, exaggerations, credibility calculations (e.g., whether it would be easy or hard to fake a positive result), baselines, opportunity costs, and so on; it is unclear what a normative evaluation of the disutility of wars of various casualty counts would look like, but a logarithmic one doesn’t seem obviously wrong. (The different framings in the original paper suggest different evaluation metrics; there’s no reason to expect consistent valuations across levels of organization. “Deaths per day” offers an uncomplicated metric; “magnitude of war” prompts highly complex evaluations in which log-normal distributions figure significantly.) The number of lives (allegedly) to be saved enters the utility calculation only linearly, and so matters less than the estimated probabilities of internal successes or failures. In brief, a substantial amount of information is not represented by the numbers, and so substantial deviations from naive additive WTP values should be expected.
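To make the contrast concrete, here is a minimal sketch (all dollar figures are made up for illustration, not drawn from the literature) of the two valuation shapes under discussion: a naive additive WTP that scales linearly with the number of lives at stake, versus a logarithmic valuation in which each tenfold increase in scope adds only a fixed increment.

```python
import math

def naive_wtp(lives, dollars_per_life=10.0):
    """Naive 'shut up and multiply': WTP is linear in lives at stake."""
    return dollars_per_life * lives

def log_wtp(lives, increment_per_tenfold=50.0):
    """Logarithmic valuation: each tenfold increase in scope adds a
    fixed dollar increment, compressing large scopes the way
    scope-insensitivity studies often report."""
    return increment_per_tenfold * math.log10(lives)

# Scopes from the classic saved-birds study: 2,000 / 20,000 / 200,000.
for n in (2_000, 20_000, 200_000):
    print(f"{n:>7} birds: linear ${naive_wtp(n):>10,.0f}  "
          f"log ${log_wtp(n):,.1f}")
```

The point is not that either curve is normative, only that near-flat stated WTP across hundredfold changes in scope is what the logarithmic shape predicts, so such responses need not indicate that the numbers were ignored entirely.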
Naive total utilitarianism is a fast and frugal algorithm which ignores many considerations and makes no attempt to reach normative decisions. Whether it’s more or less consistent with total utilitarians’ values than more intuitive approaches is unclear, and which to prefer in the absence of such information is likewise unclear. Finally, don’t forget that meta-level uncertainty about total utilitarianism should be taken into account.
ETA: I should highlight that there is much variance between subjects and between studies. I do not argue that some subjects in some studies don’t simply purchase moral satisfaction or the like (though the research indicates this is uncommon), but I do argue that some non-negligible number of subjects in some non-negligible number of studies might be more effective altruists than any explicitly algorithm/equation-centered approach would allow for.
ETA2: The above analysis assumes that people’s responses to surveys about why or how they made a decision, or what affected it, aren’t generally much correlated with their actual decision processes. This assumption is reasonable and isn’t strictly necessary, but it’s not overwhelmingly disjunctive.