With respect to why some viscerally reject the idea, I think many see charity as a sort of morally repugnant paternalism that demeans its supposed beneficiaries. (I can sympathize with this, although it seems like a rather less pressing consideration than famine and plague.)
You might actually be able to cut ideologies up—or at least the instinctive attitudes that tend to precede them—according to how comfortable they are with charity and what they see it as encompassing: liberals think charity is great; socialists find charity uncomfortable and think it would be best if the poor took rather than passively received; libertarians either also find charity uncomfortable but extend that feeling to any system that socialists might hope to establish, or think charity is great but that the social democratic stuff liberals like isn’t charity.
It might also be possible to view this unease as stemming from formally representing charity as purchasing status. I give you some money, I feel great, you feel crummy (but eat). It’s a bit like prostitution: one doesn’t have to deny that both parties are on net better off from any given transaction to hold that something exploitative is going on. For socialists and some libertarians, a world sustained by charity (whatever that is) is intolerable, and people should instead take what is theirs (whatever that is). Others think charity is great because—to put it, well, very uncharitably—it lets them be the johns. (One of Aristotle’s arguments against socialism is that if we owned all things in common, he wouldn’t be able to grow in generosity by lending slaves to his friends.)
I would guess that it is much easier for people to recategorize what falls into the “charity” bucket than to flip their valence on the bucket itself.
I think the problem with charity reflects an ethical question: what exactly does it mean that something is “good”, and if something is “good” what should be the consequences for our behavior?
The traditional answer is that it is proper to reward doing “good” things socially, but they should not be enforced legally. One will be celebrated as a hero for saving people from a burning house, but one will not be charged with murder for not saving people from a burning house.
On the other hand, doing “bad” things should be punished not only socially but also legally. Stealing things from others is punished not only by losing friends, but also by prison.
What is the source of this asymmetry? Why is “bad” not simply the opposite of “good”, with all the same consequences? This is especially important for utilitarians, because if we convert everything to utilons, in the end we have a choice between an action A which creates a worldstate with X utilons, and an action B which creates a worldstate with Y utilons. Knowing that X is greater than Y, should we treat choosing A as “good”, or choosing B as “bad”?
My guess is that we have some baseline that we consider the standard behavior. (Minding your own business, neither helping others nor harming them.) A “good” action is a change from this baseline towards more utilons; a “bad” action is a change from this baseline towards fewer utilons. Not lowering this baseline is considered more important than increasing it. It makes sense to have a long-term Schelling point.
The problem is that if you change this baseline, you have redefined the boundary between “good” and “bad”. And people disagree about where exactly this baseline should be. If two groups disagree about the baseline, they have moral disagreement even if they use the same utility function. They disagree about whether choosing worse B instead of better A should be punished.
For example, people are socially rewarded for giving money to charity, but they are not punished for not giving to charity, because the baseline is “not giving to charity”. On the other hand, people are punished for not paying taxes, because the baseline is “paying taxes”. Both concepts mean “giving up personal money to improve society”, but the reactions are different, because the baseline is different.
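To make the baseline idea concrete, here is a toy formalization (entirely my own sketch, with made-up numbers): the same act of giving up money for the common good gets classified differently depending on which baseline it is measured against.

```python
# A toy formalization of the baseline idea (illustrative only; the numbers are made up).
# The same act is classified differently depending on the baseline it is measured against.

def classify(act_utility, baseline_utility):
    """'Good' means above the baseline, 'bad' means below it, neutral at the baseline."""
    if act_utility > baseline_utility:
        return "good (socially rewarded)"
    if act_utility < baseline_utility:
        return "bad (punished)"
    return "neutral (just the standard behavior)"

not_giving = 0   # utility of keeping your money (relative scale, invented)
giving = 5       # utility of giving it up to improve society

# Charity: the baseline is "not giving", so giving is praised and not giving is merely neutral.
print(classify(giving, baseline_utility=not_giving))       # -> good
print(classify(not_giving, baseline_utility=not_giving))   # -> neutral

# Taxes: the baseline is "paying", so paying is neutral and not paying is punished.
print(classify(giving, baseline_utility=giving))            # -> neutral
print(classify(not_giving, baseline_utility=giving))        # -> bad
```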
Giving money to poor people creates some utility, and the question is: where is the baseline? For some people the baseline is “keeping what you have”, or “keeping most of what you have, but not all, especially if you have more than your neighbors”. For socialists the baseline is “doing the best thing possible”, because this makes sense for a utilitarian. I guess, for a socialist, voluntary charity is a textbook example of compartmentalization. (“If you think giving money to poor people is the right thing to do, because it creates utility, why not make it a law for everyone, and create a lot more utility? And why not give as much as possible, to create as much utility as possible?”) For a non-socialist, this kind of thinking seems like a huge conjunction fallacy; and besides, while we value the well-being of others, we usually value our own well-being more, so it makes sense to contribute only to the most urgent causes.
The traditional answer is that it is proper to reward doing “good” things socially, but they should not be enforced legally. One will be celebrated as a hero for saving people from a burning house, but one will not be charged with murder for not saving people from a burning house.
You’re conflating two different questions here:
1. What interval of quantified goodness (utility) should the Law actively promote, by distributing punishments or rewards to agents? What are the least good good deeds the Law should care about, and what are the most good good deeds?
2. Restricting our attention to deeds the Law actively promotes or discourages, how ungood does an act have to be before the Law should discourage it via positive punishment, as opposed to just discouraging it by withholding a reward or by rewarding a somewhat-less-bad alternative action?
You start off speaking as though you’re answering the first question (when should the state be indifferent to supererogation?), but then you only list punishment (and extremely harsh punishment, at that!) as the mechanism by which Laws can incentivize behavior. This is confusing. Whether the Law should encourage people (e.g., with economic incentives) to save their neighbors from burning houses is quite a different question from whether the Law should punish people who don’t save their neighbors, and that in turn is quite a different question from whether such a punishment should be as harsh as that for, say, manslaughter! A $100 fine is also a punishment. (And a $100 reward is also an incentive.)
If two groups disagree about the baseline, they have moral disagreement even if they use the same utility function. They disagree about whether choosing worse B instead of better A should be punished.
I don’t agree with this. If two rational and informed people disagree about whether enacting a certain punishment is a good idea, then they don’t have the same utility function—assuming they have utility functions at all.
I think the core problem is that you’re conceiving the Law as a utilometer. You input the goodness or badness of an act’s consequences. (Or its act-type’s foreseeable consequences.) The Law, programmed with a certain baseline, calculates how far those consequences fall below the baseline, and assigns a punishment proportional to the distance below. (If it is at or above the baseline, the punishment is 0.) The Law acts as a sort of karmic justice system, mirroring the world’s distribution of utility. (We could have a similar system that rewards things for going above the baseline, but never mind that.)
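To make that picture explicit, here is a toy version of the utilometer (my own sketch, not something you wrote): punishment is simply proportional to how far an act falls below the baseline, and zero otherwise.

```python
# A toy "utilometer": the Law as a karmic meter that punishes in proportion
# to how far an act's utility falls below a fixed baseline (illustrative only).

def utilometer_punishment(act_utility, baseline, severity=1.0):
    """Punishment proportional to the shortfall below the baseline; zero at or above it."""
    shortfall = baseline - act_utility
    return severity * shortfall if shortfall > 0 else 0.0

print(utilometer_punishment(act_utility=-10, baseline=0))  # 10.0: worse act, bigger punishment
print(utilometer_punishment(act_utility=-2, baseline=0))   # 2.0: milder act, smaller punishment
print(utilometer_punishment(act_utility=3, baseline=0))    # 0.0: above the baseline, no punishment
```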
In contrast, I think just about any consistent consequentialist will want to think of the Law as a tool rather than a map. The Law isn’t a way of measuring an act’s badness and outputting a proportional punishment; it’s a lever for getting people to behave better and thereby making the world a more fun place to live in. Questions 1 and 2 above are wrong questions, because the ideal set of Laws almost certainly won’t consistently respond to acts in proportion to the acts’ foreseeable harm. Rather, the ideal set of Laws will respond to acts in whichever way leads to the best outcome. If act A is worse than act B, but people end up overall much better off if we use a harsher punishment against B than against A, then we should use the harsher punishment against B. (Assuming we have to punish both acts at all.)
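To see the contrast with made-up numbers (again my own sketch; the deterrence figures are invented): on the lever view you put the harsher punishment wherever it buys the better overall outcome, even if that is not the worse act.

```python
# Toy "Law as a lever" calculation with invented numbers: choose the punishment
# schedule that minimizes total harm, not the one proportional to each act's harm.

# For each act: (harm per occurrence, occurrences under a mild punishment,
#                occurrences under a harsh punishment); all numbers are made up.
acts = {
    "A": (10, 100, 95),  # worse per occurrence, but barely deterrable
    "B": (4, 100, 40),   # milder per occurrence, but strongly deterrable
}

def total_harm(schedule):
    """Total harm under a schedule mapping each act to 'mild' or 'harsh'."""
    total = 0
    for name, (harm, freq_mild, freq_harsh) in acts.items():
        freq = freq_harsh if schedule[name] == "harsh" else freq_mild
        total += harm * freq
    return total

# Suppose only one act can get the harsh punishment.
print(total_harm({"A": "harsh", "B": "mild"}))   # 10*95 + 4*100 = 1350
print(total_harm({"A": "mild", "B": "harsh"}))   # 10*100 + 4*40 = 1160 -> better, despite A being worse
```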
So no Schelling point is needed. The facts of our psychology should determine how useful it is to rely on punishment vs. reward in different scenarios. It should also determine how useful it is to rely on material rewards vs. social or internal ones in different contexts. Laws are (ideally) a way of making the right thing happen more often, not a way of keeping tabs on exactly how right or wrong individual actions are.
This makes sense to me, but then wouldn’t Mills be arguing against the charity component instead of the career component?
Possibly. Or possibly he’s deciding to go after the weaker claim, or is personally too cowardly to accept the lifestyle consequences of full-on consequentialism, or you should accept at face value his arguments that even on consequentialist grounds high-paying finance jobs are likely to destroy as much as they create. I’m mostly speculating based on my experiences among the kommie krowd and what I like to imagine (though don’t we all) is a developed sympathetic understanding of other tribes as well. This shouldn’t be read as a strong claim, or even really a claim at all, about Mills specifically. (From your summary it sounds like you found yourself confused by Mills’ arguments, so either the argument is hopelessly confused, or you might benefit from giving it another go, or there’s simply too much inferential distance at the moment.)