[link] Choose your (preference) utilitarianism carefully – part 1

Summary: Utilitarianism is often ill-defined by supporters and critics alike, preference utilitarianism even more so. I briefly examine some of the axes of utilitarianism common to all popular forms, then look at some axes unique but essential to preference utilitarianism, which seem to have received little to no discussion – at least not this side of a paywall. This way I hope to clarify future discussions between hedonistic and preference utilitarians, and perhaps to clarify things for their critics too, though I’m aiming the discussion primarily at utilitarians and utilitarian-sympathisers.

http://valence-utilitarianism.com/?p=8

I like this essay particularly for the way it breaks down different forms of utilitarianism into various axes, which have rarely been discussed on LW.

For utilitarianism in general:

Many of these axes are well discussed, pertinent to almost any form of utilitarianism, and at least reasonably well understood, and I don’t propose to discuss them here beyond highlighting their salience. These include, but probably aren’t restricted to, the following:

  • What is utility? (for the sake of easy reference, I’ll give each axis a simple title – for this, the utility axis); eg happiness, fulfilled preferences, beauty, information (PDF)

  • How drastically are we trying to adjust it?, aka what, if any, is the criterion for ‘right’ness? (sufficiency axis); eg satisficing, maximising[2], scalar

  • How do we balance tradeoffs between positive and negative utility? (weighting axis); eg negative, negative-leaning, positive (as in fully discounting negative utility – I don’t think anyone actually holds this), ‘middling’ ie ‘normal’ (often called positive, but it would benefit from a distinct adjective)

  • What’s our primary mentality toward it? (mentality axis); eg act, rule, two-level, global

  • How do we deal with changing populations? (population axis); eg average, total

  • To what extent do we discount future utility? (discounting axis); eg zero discount, >0 discount

  • How do we pinpoint the net zero utility point? (balancing axis); eg Tännsjö’s test, experience tradeoffs

  • What is a utilon? (utilon axis) [3] – I don’t know of any examples of serious discussion on this (other than generic dismissals of the question), but it’s ultimately a question utilitarians will need to answer if they wish to formalise their system.

For preference utilitarianism in particular:

Here, then, are the six most salient dependent axes of preference utilitarianism, ie those that describe what could count as utility for PUs. I’ll refer to the poles on each axis as (axis)0 and (axis)1, where any intermediate view will be (axis)X. We can then formally refer to subtypes, and also exclude them, eg ~(F0)R1PU, or ~(F0 v R1)PU etc, or represent a range, eg C0..XPU.

How do we process misinformed preferences? (information axis F)

(F0 no adjustment / F1 adjust to what it would have been had the person been fully informed / FX somewhere in between)

How do we process irrational preferences? (rationality axis R)

(R0 no adjustment / R1 adjust to what it would have been had the person been fully rational / RX somewhere in between)

How do we process malformed preferences? (malformation axes M)

(M0 Ignore them / MF1 adjust to fully informed / MFR1 adjust to fully informed and rational (shorthand for MF1R1) / MFxRx adjust to somewhere in between)

How long is a preference relevant? (duration axis D)

(D0 During its expression only / DF1 During and future / DPF1 During, future and past (shorthand for DP1F1) / DPxFx Somewhere in between)

What constitutes a preference? (constitution axis C)

(C0 Phenomenal experience only / C1 Behaviour only / CX A combination of the two)

What resolves a preference? (resolution axis S)

(S0 Phenomenal experience only / S1 External circumstances only / SX A combination of the two)

What distinguishes these categorisations is that each category, as far as I can perceive, has no analogous axis within hedonistic utilitarianism. In other words, to a hedonistic utilitarian such axes would either be meaningless or have only one logical answer. But any well-defined and consistent form of preference utilitarianism must sit at some point on every one of these axes.
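The subtype notation can be made concrete. Here is a minimal, purely illustrative sketch (the function and variable names are my own, not the essay’s) that represents a PU variant as a position in [0, 1] on each of the six axes, with a helper for the essay’s ~(…) exclusion operator:

```python
# Illustrative sketch: each of the six dependent axes (F, R, M, D, C, S)
# runs from pole (axis)0 to pole (axis)1, with intermediate views (axis)X.
# A PU subtype is then a point on all six axes at once.
# All names below are hypothetical conveniences, not taken from the essay.

def at(variant, **positions):
    """True if `variant` sits at the given axis positions.

    e.g. at(v, F=0.0, R=1.0) tests whether v is an (F0)(R1)PU subtype.
    """
    return all(variant[axis] == pos for axis, pos in positions.items())


def excluded(variant, **positions):
    """The essay's ~(...) operator: `variant` does NOT sit at these positions."""
    return not at(variant, **positions)


# Example: a variant that leaves misinformed preferences unadjusted (F0)
# but fully adjusts irrational ones (R1); other axes chosen arbitrarily,
# with D=0.5 standing in for an intermediate DX view.
v = {"F": 0.0, "R": 1.0, "M": 0.0, "D": 0.5, "C": 1.0, "S": 0.0}

print(at(v, F=0.0, R=1.0))  # True  – v is an (F0)(R1)PU subtype
print(excluded(v, F=0.0))   # False – v is ruled out by ~(F0)PU
```

The point of the sketch is just that the notation is well-defined: any consistent PU variant is a complete assignment across all six axes, and subtype expressions like ~(F0)R1PU are predicates over those assignments.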

See the article for more detailed discussion about each of the axes of preference utilitarianism, and more.