A varying degree of belief in utilitarianism (ranging from a confused arithmetic altruism to hardcore Benthamism) often seems to be taken for granted, and is rarely challenged.
Altruism is a common consequence of utilitarian ideas, but it’s not altruism per se (which is discussed in the linked post and comments) that irks me; rather, it’s the idea that you can measure, add, subtract, and multiply desirable and undesirable events as if they were hard, fungible currency.
Just to pick the most recent post where this issue comes up: here is a thread that starts with a provocative scenario and challenges people to examine what exactly their ethical systems are founded on. Yet—with only a couple of exceptions, which include the OP—people automatically skip to wondering “how could I save the most people?” (decision theory talk), or “what counts as ‘people’, i.e., those units of which I should obviously try to save as many as possible?” There’s an implicit assumption that any sentient being whatsoever = 1 ‘moral weight unit’, and that it’s as simple as that. To me, that’s insane.
Edit: The next one I spotted was this one, which is unabashedly utilitarian in outlook, and strongly tied to the Repugnant Conclusion.
Fair enough; I guess komponisto’s comment in this thread primed me to misinterpret that part of your comment as primarily a complaint about utilitarian altruism.
This simply isn’t true. See, for example, the reception of this post.