Consequentialism Need Not Be Nearsighted

Summary: If you object to consequentialist ethical theories because you think they endorse horrible or catastrophic decisions, then you may instead be objecting to short-sighted utility functions or poor decision theories.

Recommended: Decision Theory Paradox: PD with Three Implies Chaos?

Related: The “Intuitions” Behind “Utilitarianism”

The simple idea that we ought to choose actions according to their probable consequences, ever since it was formulated, has garnered a rather shocking amount of dissent. Part of this may be due to causes other than philosophical objections, and some of the objections get into the metaphysics of metaethics. But there’s a fair amount of opposition on rather simple grounds: that consequentialist reasoning appears to endorse bad decisions, either in the long run or as an effect of collective action.

Every so often, you’ll hear someone offer a reductio ad absurdum of the following form: “Consider dilemma X. If we were consequentialists, then we would be forced to choose Y. But in the long run (or if widely adopted) the strategy of choosing Y leads to horrible consequence Z, and so consequentialism fails on its own terms.”

There’s something fishy about the argument when you lay it out like that: if it can be known that the strategy of choosing Y has horrible consequence Z, then why do we agree that consequentialists choose Y? In fact, there are two further unstated assumptions in every such argument I’ve heard, and it is those assumptions rather than consequentialism on which the absurdity really falls. But to discuss the assumptions, we need to delve into a bit of decision theory.

In my last post, I posed an apparent paradox: a case where it looked as if a simple rule could trump the most rational of decision theories in a fair fight. But there was a sleight of hand involved (which, to your credit, many of you spotted immediately). I judged Timeless Decision Theory on the basis of its long-term success, but each agent was stipulated to only care about its immediate children, not any further descendants! And indeed, the strategy of allowing free-riding defectors maximizes the number of an agent’s immediate children, albeit at the price of hampering future generations by cluttering the field with defectors.[1]

If instead we let the TDT agents care about their distant descendants, then they’ll crowd out the defectors by only cooperating when both other agents are TDT,[2] and profit with a higher sustained growth rate once they form a supermajority. Not only do the TDTs with properly long-term decision theories beat out what I called DefectBots, but they get at least a fair fight against the carefully chosen simple algorithm I called CliqueBots. The paradox vanishes once you allow the agents to care about the long-term consequences of their choice.
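To make the three strategies concrete, here is a minimal sketch (not the previous post’s actual simulation) of their decision rules in a one-shot three-player Prisoner’s Dilemma where each agent can inspect the other two players’ types; representing agents by type strings, and the function names, are my own simplifications for illustration:

```python
# Toy decision rules for the three strategies discussed above, assuming each
# agent is shown the types of the other two players before choosing.
# "C" = cooperate, "D" = defect.

def defectbot(others):
    # Always defects, hoping to free-ride on any cooperators present.
    return "D"

def cliquebot(others):
    # Cooperates only with exact copies of itself; every other decision
    # theory is treated as an enemy, which is what costs CliqueBots in a
    # more mixed population (see footnote 2).
    return "C" if all(o == "CliqueBot" for o in others) else "D"

def farsighted_tdt(others):
    # A TDT agent that values distant descendants: it cooperates only when
    # both other players are also TDT, so defectors never get to free-ride
    # on its cooperation and are eventually crowded out.
    return "C" if all(o == "TDT" for o in others) else "D"

# Example: a lone DefectBot seated with two far-sighted TDT agents finds no
# one to exploit, because each TDT sees a non-TDT player at the table.
print(farsighted_tdt(["TDT", "DefectBot"]))  # -> "D"
print(farsighted_tdt(["TDT", "TDT"]))        # -> "C"
```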

Similarly, the purported reductios of consequentialism rely on the following two tricks: they implicitly assume that consequentialists must care only about the immediate consequences of an action, or they implicitly assume that consequentialists must be causal decision theorists.[3]

Let’s consider one of the more famous examples, a dilemma posed by Judith Jarvis Thomson:

A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor.

First, we can presume that the doctor cares about the welfare, not just of the five patients and the traveler, but of people more generally. If we drop the last supposition for a moment, it’s clear that a consequentialist utilitarian doctor shouldn’t kill the traveler for his organs; if word gets out that doctors do that sort of thing, then people will stay away from hospitals unless they’re either exceptional altruists or at the edge of death, and this will result in people being less healthy overall.[4]

But what if the doctor is confident of keeping it a secret? Well, then causal decision theory would indeed tell her to harvest his organs, but TDT (and also UDT) would strongly advise her against it. Because if TDT endorsed the action, then other people would be able to deduce that TDT endorsed the action, and that (whether or not it had happened in any particular case) their lives would be in danger in any hospital run by a timeless decision theorist, and then we’d be in much the same boat. Therefore TDT calculates that the correct thing for TDT to output in order to maximize utility is “Don’t kill the traveler,”[5] and thus the doctor doesn’t kill the traveler.

The question that a good consequentialist ought to be asking themselves is not “What happens in situation Y if I do X?”, nor even “What happens in general if I do X whenever I’m in situation Y?”, but “What happens in general if everyone at least as smart as me deduces that I would do X whenever I’m in situation Y?” That, rather than the others, is the full exploration of the effects of choosing X in situation Y, and not coincidentally it’s a colloquial version of Timeless Decision Theory. And as with Hofstadter’s superrationality, TDT and UDT will avoid contributing to tragedies of the commons so long as enough people subscribe to them (or base their own decisions on the extrapolations of TDT and UDT).
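As a toy illustration of that question (with invented numbers and policy labels, not anything from Thomson or the decision-theory literature), one can score each candidate policy by the utility of a world in which everyone capable of modeling the doctor has deduced that she follows it, and then act on whichever policy scores best:

```python
# A toy sketch of the "colloquial TDT" criterion above: evaluate policies by
# the consequences of being *predictably* committed to them, rather than by
# the immediate causal consequences of a single act. The utilities below are
# invented purely for illustration.

WORLD_UTILITY_IF_POLICY_IS_DEDUCIBLE = {
    "harvest the traveler when it's secret": -1000,  # people deduce the policy and avoid hospitals
    "never harvest unwilling patients": -5,          # the five patients die, but trust in doctors holds
}

def choose_policy(utilities):
    """Pick the policy whose general predictability leads to the best world."""
    return max(utilities, key=utilities.get)

print(choose_policy(WORLD_UTILITY_IF_POLICY_IS_DEDUCIBLE))
# -> "never harvest unwilling patients"
```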

In general, I’d like to offer (without proof) the following rationalist ethical inequality:

Your true valuation of all consequences + a good decision theory ≥ any particular deontology.

Now, a deontological rule might be easier to calculate, and work practically as well in the vast majority of circumstances (like approximating real physics with Newtonian mechanics). But if you have to deal with an edge case or something unfamiliar, you can get in trouble by persisting with the approximation; if you’re programming a GPS, you need relativity. And as rule utilitarians can point out, you need to get your deontological rules from somewhere; if it’s not from a careful consequentialist reckoning, then it might not be as trustworthy as it feels.[6]

Or it could be that particular deontological rules are much more reliable for running on corrupted hardware, and that no amount of caution will prevent people from shooting themselves in the foot if they’re allowed to. That is a real concern, and it’s beyond the scope of this post. But what’s actually right probably doesn’t include a component of making oneself stupid with regard to the actual circumstances in order to prevent other parts of one’s mind from hijacking the decision. If we ever outgrow this hardware, we ought to leave the deontologies behind with it.

Footnotes:

1. Note that the evolutionary setup is necessary to the “paradox”: if Omega dished out utils instead of children, then the short-term strategy is optimal in the long run too.

2. This is only right in a heuristic sense. If the agents suspect Omega will be ending the game soon, or they have too high a temporal discount rate, this won’t work quite that way. Also, there’s an entire gamut of other decision theories that TDT could include in its circle of cooperators. That’s a good feature to have: the CliqueBots from the last post, by contrast, declare war on every other decision theory, and this costs them relative to TDT in a more mixed population (thanks to Jack for the example).

3. One more implicit assumption about consequentialism is the false dichotomy that consequentialists must choose either to be perfectly altruistic utilitarians or perfectly selfish hedonists, with no middle ground for caring about oneself and others to different positive degrees. Oddly enough, few people object to the deontological rules we’ve developed to avoid helping distant others without incurring guilt.

4. I’m assuming that in the world of the thought experiment, it’s good for your health to see a doctor for check-ups and when you’re ill. It’s a different question whether that hypothetical holds in the real world. Also, while my reply is vulnerable to a least convenient possible world objection, I honestly have no idea how my moral intuitions should translate to a world where (say) people genuinely didn’t mind knowing that doctors might do this as long as it maximized the lives saved.

5. The sort of epistemic advantage that would be necessary for TDT to conclude otherwise is implausible for a human being, and even in that case, there are decision theories like UDT that would refuse nonetheless (for the sake of other worlds where people suspected doctors of having such an epistemic advantage).

6. The reason that morality feels like deontology to us is an evolutionary one: if you haven’t yet built an excellent consequentialist with a proper decision theory, then hard-coded rules are much more reliable than explicit reasoning.