# [Question] Why is this utilitarian calculus wrong? Or is it?

Suppose that I value a widget at \$30. Suppose that the widget costs the widget-manufacturer \$20 to produce, but, due to monopoly power on their part, they can charge \$100 per widget.

The economic calculus for this problem is as follows. \$30 (widget valuation) - \$100 (widget price) = -\$70 to me; \$100 (widget price) - \$20 (widget cost) = \$80 to widget producers. \$80 - \$70 = +\$10 total value. Ordinarily, this wouldn't imply that utilitarians are required to spend all their money on widgets, because for a function to convert dollars to utils u(\$), u'(\$)>0, u''(\$)<0, and widget-producers usually have higher \$ than widget consumers.

But suppose the widget monopolist is a poor worker commune. The profits go directly to the workers who, on average, have lower \$ than I do. It seems like buying widgets would be more moral than, say, donating \$80 to the same group of poor people (\$80 - \$80 = \$0), because the widget purchase slightly compensates me for the donation, in a way that exceeds the recipients' cost of producing the widget.

And yet, I feel even less of a moral pull to buy widgets than to donate \$80 to GiveDirectly. Is this just an arbitrary, unjustifiable, subconscious desire to shove economic transactions into a separate domain from charitable donations, or is there actually some mistake in the utilitarian logic here? If there isn't a mistake in the logic, is this something that the Open Philanthropy Project should be looking at?

[Question inspired by a similar question at the end of chapter 7 of Steven Landsburg's The Armchair Economist]
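The surplus arithmetic above can be checked mechanically. A minimal sketch under the linear-in-dollars simplification (the concave-u caveat from the next sentence is deliberately set aside; all figures are the ones from the example):

```python
# Surplus arithmetic for the widget example, treating dollars as utils.
valuation = 30   # what the widget is worth to me
price = 100      # the monopoly price
cost = 20        # the manufacturer's cost of production

consumer_surplus = valuation - price   # -70: I overpay
producer_surplus = price - cost        # +80: the monopolist's profit
total_surplus = consumer_surplus + producer_surplus  # +10 overall

print(consumer_surplus, producer_surplus, total_surplus)  # -70 80 10
```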

• On first-order effects, it seems that your preference rankings are as follows:

1) You have the widget, the commune has \$80; your total satisfaction is \$30+80x.

2a) You have nothing, the commune has \$100; your total satisfaction is \$100x.

2b) You have \$100, the commune has nothing; your total satisfaction is \$100.

3) You have the widget, and a monopoly you don't value has \$80; your total satisfaction is \$30+80y.

By changing x and y, we represent your altruism toward the other parties in the situation: if x is greater than 1, then you would rather give the commune money than have it yourself, but if x is above 1.5 then you'd rather just give the money to the commune than have a widget for yourself. For values of y below 7/8, you'd rather not buy the widget. (The x and y I inferred from the question are slightly above 1 and slightly above 0, which suggests the best option is indeed 1.)
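The thresholds in the list above can be verified numerically. A small sketch (the option labels and satisfaction formulas are taken directly from the list; this is just the arithmetic, not a deeper model):

```python
# Satisfaction for each option as a function of altruism weights x and y.
def option1(x):   # widget for you, commune has $80
    return 30 + 80 * x

def option2a(x):  # commune has the full $100
    return 100 * x

def option2b():   # you keep the $100
    return 100

def option3(y):   # widget bought from a monopoly you don't value
    return 30 + 80 * y

# Threshold checks:
# give-vs-keep is indifferent at x = 1       (100x = 100)
assert option2a(1.0) == option2b()
# donate-vs-buy is indifferent at x = 1.5    (100x = 30 + 80x)
assert option2a(1.5) == option1(1.5)
# buy-from-monopoly is indifferent at y = 7/8 (30 + 80y = 100)
assert option3(7 / 8) == option2b()
```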

---

Why do humans have moral intuitions at all? I claim a major role is to represent higher-order effects as shorthand. When you see a bike you don't own, you might run the first-order calculations and think it's worth more to you than it is to whoever owns it, and so global utility is maximized by you stealing the bike. But a world in which agents reflexively don't steal bikes has other benefits, such that the low-theft equilibrium might have higher global utility than the high-theft equilibrium. And you can't get from the high-theft equilibrium to the low-theft equilibrium by making small Pareto improvements.

And so if you notice you have moral intuitions that rise up whenever you run the numbers and decide you shouldn't be upset that someone stole your bike, try to figure out what effects those intuitions are trying to have.

---

Why put economic transactions in a separate domain from charitable donations? There are a few related things to disentangle.

First, for you personally, it really doesn't matter much. If you would rather pay your favorite charity \$100 for a t-shirt with their logo on it, even though you normally wouldn't pay \$100 for a t-shirt, and even though you could just give them the \$100, then do it.

Second, for society as a whole, prices are an information-transmission mechanism, conveying how much caring something requires to produce and how much people care about it being produced. Mucking with this mechanism to divert value flows generally destroys more than it creates, especially since prices can freely fluctuate in response to changing conditions, whereas policies are stickier.

• Wait, are you claiming that humans have moral intuitions because it maximizes global utility? Surely moral intuitions have been produced by evolution. Why would evolution select for agents with behaviour that maximizes global utility?

• > Wait, are you claiming that humans have moral intuitions because it maximizes global utility? Surely moral intuitions have been produced by evolution.

No, I'm claiming that moral intuitions reflect the precomputation of higher-order strategic considerations (of the sort "if I let this person get away with stealing a bike, then I will be globally worse off even though I seem locally better off").

I agree that you should expect evolution to create agents that maximize inclusive genetic fitness, which is quite different from global utility. But even if one adopts the frame that 'utilitarian calculus is the standard of correctness,' one can still use those moral intuitions as valuable cognitive guides, by directing attention towards considerations that might otherwise be missed.

• > By changing x and y, we represent your altruism to the other parties in the situation; if x is greater than 1, then you would rather give the commune money than have it yourself,

Small correction: you want to buy the widget as long as x > 7/8.

You should also almost never expect x>1, because that means you should immediately spend your money on that cause until x drops to 1 or you run out of credit. x=1 means that something is the best marginal way to allocate money that you know of right now.

• I think we're using margins differently. Yes, you shouldn't expect situations with x>1 to be durable, but you should expect x>1 before every charitable donation that you make. Otherwise you wouldn't make the donation! And so x=1 is the 'money in the bank' valuation, instead of the upper bound.

• Firstly, you are confusing dollars and utils.

If you buy this product for \$100, you gain the use of it, at value U[30] to yourself. The workers who made it gain \$80, at value U[80] to yourself, because of your utilitarian preferences. Total value: U[110].

If the alternative was a product of cost \$100, which you value the use of at U[105], but all the money goes to greedy rich people to be squandered, then you would choose the first.

If the alternative was spending \$100 to do something insanely morally important, U[3^^^3], you would do that.

If the alternative was a product of cost \$100 that was of value U[100] to yourself, and some of the money would go to people that weren't that rich, U[15], you would do that.

If you could give the money to people twice as desperate as the workers, at U[160], you would do that.

There are also good reasons why you might want to discourage monopolies. Any desire to do so is not included in the expected-value calculations. But the basic principle is that utilitarianism can never tell you if some action is a good use of a resource, unless you tell it what else that resource could have been used for.

• Suppose that the commune sells the widgets for \$29. You purchase one, gaining net \$1 of value; the commune gains net \$9 of value. Total net gain = \$10. (You seem to be assuming that utility ends up being linear in money, so let's stick with that assumption.)

This seems to be exactly as good as the scenario you describe. Do you agree? And yet my scenario does not require anyone to have any moral motivations, make any sacrifices, etc.; it only requires self-interest.

• > You seem to be assuming that utility ends up being linear in money, so let's stick with that assumption.

> For a function to convert dollars to utils u(\$), u'(\$)>0, u''(\$)<0

^^^non-linear function^^^

^^^U''(\$)!=0^^^

[/snark]

That is important, though. The whole point of the thought experiment was that the strictly selfish result (I buy \$100 of ice cream) is different from the Kaldor-Hicks/utility-efficient outcome (I overpay for a widget) in a situation where my (normally very utilitarian) moral intuition backs the selfish action. Your scenario is only equivalent if you take the U''(\$)=0 condition which I explicitly rejected.
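The non-equivalence is easy to exhibit with any concave u. A sketch, where the square-root utility function and the wealth levels are purely illustrative assumptions, not figures from the thread:

```python
import math

def u(dollars):
    # A stand-in concave utility function: u' > 0, u'' < 0.
    return math.sqrt(dollars)

buyer_wealth = 10_000   # assumed: buyer is richer than the commune
commune_wealth = 500    # assumed: poor worker commune

def total_utils(buyer_delta, commune_delta):
    return u(buyer_wealth + buyer_delta) + u(commune_wealth + commune_delta)

baseline = total_utils(0, 0)

# Scenario A (the post): pay $100 for a $30-valued widget; commune nets $80.
# The widget's $30 use-value is treated as a cash-equivalent gain.
scenario_a = total_utils(-100 + 30, +80)

# Scenario B (the $29 counter-example): pay $29; commune nets $9.
scenario_b = total_utils(-29 + 30, +9)

# Both scenarios create the same +$10 of linear surplus, but under a
# concave u the transfer-heavy scenario A scores higher in utils.
print(scenario_a - baseline, scenario_b - baseline)
```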

• I don't know whether it's equivalent, but the transaction seems equivalent to a fair deal at a price between \$20 and \$30, plus a gift of between \$70 and \$80. In the limit where a dollar is worth the same to everyone, a gift of \$X - \$X = \$0 produces no net change. The trade part comes from the valuation being higher than the cost, which would be true even if the beneficiary and the cost-bearer were the same party.

• Your question contains some contradictions, or maybe a confused use of "value". Value is individual and marginal. If you value your marginal ("next to spend") \$30 equal to a marginal ("next to acquire") widget, then you won't buy it at \$100. You'll keep your money and do without the widget.

If you buy the widget, that's pretty strong evidence that you actually valued it at equal to or more than \$100.

_until_ you start caring about who's selling the widget. If you choose to buy from the collective at \$100, but wouldn't buy from FacelessWidgetCorp at that price, then you're actually doing some mix of donation and purchase. The problem is that you don't specify what the mix is, and you have no idea if the donation portion is being distributed well. Generally, you'd be closer to your reflective preferences if you bought (or didn't buy) the widget as efficiently as possible, and made your donation as efficiently as possible.

• Some context. I do not, in fact, believe that Kaldor-Hicks-efficient actions are inherently moral. But I do think that Kaldor-Hicks efficiency is a pretty good first-pass heuristic. This thought experiment was meant to set up a dilemma between Kaldor-Hicks efficiency (which says to buy) and my moral intuitions (which say not to buy). The problem is that I can't figure out exactly what my intuition is trying to tell me about what seems to be a fairly straightforward utility-maximizing transfer. For the purposes of this contrived thought experiment, suppose that the only decision is whether or not to buy from the commune. There isn't an option to donate some or all of the money to GiveDirectly if I choose not to buy. Just buy a widget or buy \$100 worth of ice cream.

• Eh, I don't put much weight on moral intuitions in deeply bizarre choices. That's not what they're evolved/trained on, and such choices seem designed to elicit odd responses. Examining one's reaction can sometimes be interesting, but it isn't a good guide to moral truth.

Your ice cream scenario isn't about you spending \$100; it's about you choosing between ice cream and a commune-provided widget. I don't see much of interest in that choice.

• Past a certain point, this is certainly true. But you need a certain degree of reflection before you can tell whether further reflection is likely to produce valuable insights. Apparently you hit your limit, but I haven't yet. If you have some reason why you think this is a particularly unenlightening thing to think about, I'd love to hear it, but this seems like a matter of different tastes.

See Vaniver's comments below his answer for reasons I think this is worth thinking about. I basically agree with them.

• > It seems like buying widgets would be more moral than, say, donating \$80 to the same group of poor people (\$80 - \$80 = \$0) because the widget purchase slightly compensates me for the donation in a way that is greater than the cost to the recipient of producing the widget.

It's certainly not more moral (because the extra benefits flow to you, and that is generally not seen as a moral plus). But there are similar arguments for micro-loans rather than giving directly: the profit from the microloan means that you can offer more loans afterwards.

I believe the empirical evidence is that that argument is wrong, but it's certainly not wrong in theory. Similarly, if you could use the \$30 worth of your widget to free up >\$20 worth of value on your side, and donate that, then it would be more moral.

Basically, trade typically adds more total value than donation, but a) donations can be targeted in ways trade can't, and b) the total value added is not relevant to the receivers, just to yourself, unless you use that extra value to trade or donate more.

• > the extra benefits flow to you, and that is generally not seen as a moral plus

This is correct, but I'm not sure that it should be: there's no intrinsic reason why your well-being wouldn't be just as important morally as everyone else's. Empirically, the belief that one's own well-being doesn't matter and only other people's does seems to be a big factor in do-gooders burning out.

• Yes, that's true. But I've been kind of treating value-to-yourself as fungible. If it isn't, and if the marginal utility gain for you is tiny, then trade is less interesting.

• > It's certainly not more moral (because the extra benefits flow to you, and that is generally not seen as a moral plus).

Not in the calculus as I run it. I consider a util to me, a util to Warren Buffett, and a util to an impoverished African farmer to be equivalent (insofar as interpersonal utility comparisons are possible, etc.). The only reason I consider a dollar donated to GiveDirectly > a dollar spent on ice cream > a dollar donated to Warren Buffett's personal checking account is because "for a function to convert dollars to utils u(\$), u'(\$)>0, u''(\$)<0".

> I believe the empirical evidence is that that argument is wrong, but it's certainly not wrong in theory.

What empirical evidence? It's a contrived thought experiment, not something I'm actually debating.

> the total value added is not relevant to the receivers, just to yourself, unless you use that extra value to trade or donate more.

Again, this is just plain wrong. Utilitarianism != self-flagellation.

• Even without self-flagellation, if your marginal utility per \$ is much lower, and you don't use your own surplus in a fungible way to donate/buy more, donating can be much higher-impact than trade. First of all, you have more freedom to target donations than trade; and even if we ignore that, capturing all your money is better for the producer than just capturing the producer surplus (and the marginal utility of the consumer surplus to you is sufficiently low that adding it on doesn't bring the surpluses to a higher number).

• > capturing all your money is better for the producer than just capturing the producer surplus (and the marginal utility of the consumer surplus to you is sufficiently low that adding it on doesn't bring the surpluses to a higher number).

By assumption, the consumer surplus to me is \$30, which is high enough to bring the surpluses to a higher number. I'm not denying that there are slightly different constructions of the problem where donation is the trivially more moral action. That's not the point, though. The point is that, in this particular scenario (where, by preference fulfillment/K-H efficiency, buying > donating > keeping my money), my moral intuition says to donate or keep the money. You're making further assumptions (U(consumer surplus) is approximately 0) which make the problem easier, but less interesting.

• My intuition here is: giving someone money moves wealth around. Creating a widget (at \$20 cost, which at least one person values at \$30) produces wealth. So [the world where a widget gets created] has more total wealth than [the one where it doesn't], and so it's not surprising if your moral calculus values it more highly.

• I think the calculus is correct, in the "non-iterated" game. The conclusion is correct in the same sense that it would be even better to donate \$101 and come back the next day with a black sock on your head and steal a widget.

There's just something about the iterated exercise of mixing up donations and transactions that you don't like, and I share that intuition. I think in my case I feel that their strategy of selling at a high price discourages other potential transactions from people who value the widget between \$20 and \$100 and don't care about who's selling it. And I am particularly sensitive about taking advantage of the simple win-win opportunities first. Or maybe I just dislike the lack of transparency of dressing up the donation as a purchase.

In my experience, for many people it's the other way around. Many are much more willing to buy some useless product at a high price, a product that they value less than its production cost, instead of just donating. I assume it's because they feel it's better to pay someone who's "working" for their money than to incentivize begging.

• If the producer can charge \$100, then you don't have to buy a widget: as you said, they can charge \$100, so someone is evidently willing to buy the stuff at that price.

So the question should be reframed and the model expanded. There is a firm owned by workers (or something like that) and other standard firms. You value the widget at \$30. You can get it at \$20. Suppose the workers of that firm are the poorest people in the world. Then you can just buy the widget at \$20 on the market and give them however much you like (or buy it directly from them if the price they charge is not higher than \$20 plus what you would like to transfer). If they are not the poorest people in the world, then the harder question is whether you value their product extra for being from a worker-owned firm, etc.