Consequentialism is a really bad model for most people’s altruistic behavior, and especially their compromises between altruistic and selfish ends. To model someone as a thoroughgoing consequentialist, you have two bad options:
They care about themselves >10 million times as much as other people. [...]
They care about themselves <1% as much as everyone else in the whole world put together. [...]
It seems to me that “consequentialism” here refers to total utilitarianism rather than consequentialism in general.
I agree that this is more like the dilemma for modeling someone as a welfarist than a general consequentialist (if they were a total utilitarian then I think they’d already be committed to option 2). But I think you do have similar problems with any attempt to model them as consequentialists.
(if they were a total utilitarian then I think they’d already be committed to option 2)
I should have written “aggregative consequentialism” instead of “total utilitarianism”. (The problem being that a noble who is an aggregative consequentialist would care about themselves <1% as much as n peasants put together, for sufficiently large n.)
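The threshold implicit here is just arithmetic over welfare weights. A minimal sketch, using the hypothetical numbers from the exchange (the 10-million ratio and the 1% cutoff; the exact figures are illustrative, not from any worked-out model):

```python
# Suppose the noble weighs their own welfare R times that of any single
# peasant, and aggregates welfare linearly (aggregative consequentialism).
R = 10_000_000  # the ">10 million times" horn of the dilemma

# n peasants together outweigh the noble as soon as n * 1 > R:
n_outweigh = R + 1

# The noble's own weight falls below 1% of n peasants' combined weight
# (the "<1%" horn) as soon as R < 0.01 * n, i.e. n > 100 * R:
n_below_one_percent = 100 * R + 1

print(n_outweigh)           # 10000001
print(n_below_one_percent)  # 1000000001
```

So even a ratio as extreme as 10 million only delays the aggregation point; for any finite self-weight there is an n at which the noble's own welfare becomes a rounding error, which is the force of the "aggregative" correction above.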
But I think you do have similar problems with any attempt to model them as consequentialists.
This makes sense to me if we restrict the discussion to causal reasoning (otherwise, a noble who suspects that their decisions are correlated with those of many other nobles may donate money to some peasants, even if they care about themselves >10 million times as much as any single peasant).
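The correlated-decision point can be made concrete with a small sketch. All the numbers below are hypothetical (the 10-million self-weight comes from the exchange; the donation cost and benefit figures are invented for illustration). Under evidential reasoning, the noble's choice "comes with" the correlated nobles' choices: many peasants benefit, but the noble still only bears their own cost.

```python
# Hypothetical weights: the noble values their own welfare 10^7 times
# that of any single peasant.
SELF_OVER_PEASANT = 10_000_000
COST_TO_SELF = 1.0         # welfare the noble gives up by donating (self-units)
BENEFIT_PER_PEASANT = 50.0 # welfare one donation buys a peasant (peasant-units)

def donation_worth_it(correlated_nobles: int) -> bool:
    """Evidentially, choosing to donate implies the correlated nobles
    donate too, so `correlated_nobles` peasants each gain
    BENEFIT_PER_PEASANT; the noble pays only their own cost."""
    weighted_benefit = correlated_nobles * BENEFIT_PER_PEASANT
    weighted_cost = COST_TO_SELF * SELF_OVER_PEASANT  # cost in peasant-units
    return weighted_benefit > weighted_cost

print(donation_worth_it(1))        # acting alone: False
print(donation_worth_it(500_000))  # correlated with many nobles: True
```

Under purely causal reasoning only the `correlated_nobles=1` case applies, which is why restricting to causal reasoning rescues the ">10 million times" modeling option in the parent comment.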