In case you haven’t seen it, there’s an essay on the EA forum about a paper by Tyler Cowen which argues that there’s no way to “get off” the train to crazy town. I.e. it may be a fundamental limitation of utilitarianism plus scope sensitivity, that this moral framework necessarily collapses everything into a single value (utility) to optimize at the expense of everything else. Some excerpts:
So, the problem is this. Effective Altruism wants to be able to say that things other than utility matter—not just in the sense that they have some moral weight, but in the sense that they can actually be relevant to deciding what to do, not just swamped by utility calculations. Cowen makes the condition more precise, identifying it as the denial of the following claim: given two options, no matter how other morally-relevant factors are distributed between the options, you can always find a distribution of utility such that the option with the larger amount of utility is better. The hope that you can have ‘utilitarianism minus the controversial bits’ relies on denying precisely this claim. …
Now, at the same time, Effective Altruists also want to emphasise the relevance of scale to moral decision-making. The central insight of early Effective Altruists was to resist scope insensitivity and to begin systematically examining the numbers involved in various issues. ‘Longtermist’ Effective Altruists are deeply motivated by the idea that ‘the future is vast’: the huge numbers of future people that could potentially exist give us a lot of reason to try to make the future better. The fact that some interventions produce so much more utility—do so much more good—than others is one of the main grounds for prioritising them. So while it would technically be a solution to our problem to declare (e.g.) that considerations of utility become effectively irrelevant once the numbers get too big, that would be unacceptable to Effective Altruists. Scale matters in Effective Altruism (rightly so, I would say!), and it doesn’t just stop mattering after some point.
So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values. …
I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes. …
Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.
i agree that there doesn’t seem to be any sort of rigorous way to get off the crazy train in some principled manner, and that fundamentally it does come down to vibes. but that only makes it worse if people are uncritical/uncurious/uncaring/unrigorous about how said vibes are generated. like, i see angst about it in the ea sphere about the inconsistency/intransitivity, and various attempts to discuss or tackle it, and this seems useful to me even though it’s still mostly groping around in the dark. in academia there seems to be a missing mood.