There’s an argument I’ve seen a number of times on the internet about the failings of consequentialism as a moral system. … The argument goes roughly like this: consequentialism tells us that the right thing to do is the thing with the best results. But this is a ridiculously high standard that nobody can actually live up to. Thus, consequentialism tells us that everybody is bad, and we should all condemn everybody and all feel guilty.
But I find the “good person”/“not a good person” dichotomy helpful. I’m not claiming it objectively exists. I can’t prove anything about ethics objectively exists. And even if there were objective ethical truths about what was right or wrong, that wouldn’t imply that there was an objective ethical truth about how much of the right stuff you have to do before you can go around calling yourself “good”. In the axiology/morality/law trichotomy, I think of “how much do I have to do in order to be a good person” as within the domain of morality. That means it’s a social engineering question, not a philosophical one. The social engineering perspective assumes that “good person” status is an incentive that can be used to make people behave better, and asks how high vs. low the bar should be set to maximize its effectiveness.
Consider the way companies set targets for their employees. At good companies, goals are ambitious but achievable. If the CEO of a small vacuum company tells her top salesman to sell a billion vacuums a year, this doesn’t motivate the salesman to try extra hard. It’s just the equivalent of not setting a goal at all, since he’ll fail at the goal no matter what. If the CEO says “Sell the most vacuums you can, and however many you sell, I will yell at you for not selling more”, this also probably isn’t going to win any leadership awards. A good CEO might ask a salesman to sell 10% more vacuums than he did last year, and offer a big bonus if he can accomplish it. Or she might say that the top 20% of salesmen will get promotions, or that the bottom 20% of salesmen will be fired, or something like that. The point is that the goal should effectively carve out two categories, “good salesman” and “bad salesman”, such that it’s plausible for any given salesman to end up in either, then offer an incentive that makes him want to fall in the first rather than the second.
I think of society setting the targets for “good person” a lot like a CEO setting the targets for “good vacuum salesman”. If they’re attainable and linked to incentives – like praise, honor, and the right to feel proud of yourself – then they’ll make people put in extra effort so they can end up in the “good person” category. If they’re totally unattainable and nobody can ever be a good person no matter how hard they try, then nobody will bother trying. This doesn’t mean nobody will be good – some people are naturally good without hope for reward, just like some people will slave away for the vacuum company even when they’re underpaid and underappreciated. It just means you’ll lose the extra effort you would get from having a good incentive structure.
So what is the right level at which to set the bar for “good person”? An economist might think of this question as a price-setting problem: society is selling the product “moral respectability” and trying to decide how many units of effort to demand from potential buyers in order to maximize revenue. Set the price too low, and you lose out on effort that people would have been willing to pay. Set the price too high, and you won’t get any customers. Society has a monopoly on the good, and its marginal cost of production is zero, so the problem reduces to classic monopoly pricing: choose the bar that maximizes the total effort collected, and this is how you set the “good person” bar.
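The monopoly-pricing framing can be made concrete with a toy calculation (all numbers here are hypothetical, invented purely for illustration): give each person a maximum amount of effort they would willingly spend to qualify as a “good person”, let society set a single bar, and assume everyone whose willingness meets the bar exerts exactly that much effort. Total effort collected is then price times quantity:

```python
# Toy model of the "moral respectability" pricing problem described above.
# All numbers are hypothetical. Each person has a maximum effort they would
# willingly spend to count as a "good person"; society sets one bar, and
# everyone whose willingness meets the bar exerts exactly that much effort.
# Total effort collected is price * quantity, as in monopoly pricing with
# zero marginal cost.

willingness = list(range(1, 11))  # hypothetical max effort per person

def total_effort(bar, willingness):
    """Effort society collects when the bar is set at `bar`."""
    buyers = sum(1 for w in willingness if w >= bar)
    return bar * buyers

# Try each candidate bar and keep the one that maximizes collected effort.
best_bar = max(willingness, key=lambda b: total_effort(b, willingness))
```

With this particular willingness profile the extremes do badly: a bar of 1 collects 10 units of effort, a bar of 10 also collects only 10, while a bar of 5 collects 30, echoing the point that bars set too low or too high both leave effort on the table.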
The above is essentially Scott Alexander’s response to this kind of argument, from his Economic Perspective on Moral Standards, and I find it convincing.