Simple and composite partial preferences

Partial preferences are preferences that exist within a human's internal model, e.g. "this painting is better than that one", "I don't like getting punched", "that was embarrassing", "I'd rather fund this charity than that one", and so on.

In order to elicit these partial preferences, we can use one-step hypotheticals: brief descriptions of a hypothetical situation, designed to cause the human to model that situation and reach a preference.

But sometimes our reaction to a hypothetical is more complex. Consider:

During the Notre-Dame Cathedral fire, a relic said to be part of the True Cross was threatened by the fire. Ten firefighters could go in and save it; however, there is a 10% chance that all of them might die. Is it better that they go in or not?

When confronted with something like that, I might reason thusly:

Well, it's certainly not part of the True Cross, even if that existed. However, it is an important medieval artefact with probably a lot of history (I know that because it was in the most important cathedral in France). I value things like that quite highly, and don't want them to be destroyed*. On the other hand, beware status quo bias**: I don't particularly miss historical items that were destroyed in the past*; but this artefact would be appreciated by probably millions in the future, and that matters to me*. Lots of religious people, French people, and people with a respect for history would not want it destroyed, and I value their satisfaction to some extent*. A 10% chance of ten deaths should be considered similar, in my estimation, to 1 death with certainty** (however, without the connotations of "this person specifically must die"**). Firefighters are professionals, who specifically agree to take risks in these kinds of situations, so they're not like "innocent victims" whom I would give extra weight to*. They are likely pretty young, and I care about the amount of life they could lose*.

So, would I pay one life for a very-but-not-fantastically valuable artefact that would be valued and enjoyed by millions? But those people would enjoy the cathedral and its art even without this particular relic. Estimating how much value such an artefact could create, compared with the value remaining in a human life (this is a relevant comparison for me**), I'd guess that it's not worth saving in this circumstance* (but I would change my mind if the risk of death were lower or the artefact more valuable/appreciated).
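The risk comparison marked ** in that reasoning is a straightforward expected-value calculation:

$$0.1 \times 10 \text{ deaths} = 1 \text{ expected death}$$

which is why a 10% chance of ten deaths is treated as roughly interchangeable with one certain death, minus the connotation that a specific person must die.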

Now, this is very far from a single model with a single instinctive answer! Instead it's the combination of many different simple partial preferences; I've tried to indicate base-level partial preferences in that reasoning with a *, and partial meta-preferences with a **.

So I don't think we should call this a partial preference. Instead, we should call it a collection of partial preferences, tied together by a reasoning process which itself is a partial meta-preference (in what it considers and what it ignores).

Thus that hypothetical can elicit a lot of different partial preferences in its answer; but the answer itself should not be considered a partial preference, as it doesn't exist as a preference in a single model.
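To make the decomposition concrete, here is a minimal sketch of how such a composite judgement might be represented. The class names, the weights, and the combination rule are all illustrative assumptions, not anything specified in this post; the point is only that the base-level preferences (*) and the meta-preferences (**) are separate objects, glued together by a combination rule that is itself a meta-preference.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PartialPreference:
    """A single comparison made inside one internal model (the * items above)."""
    description: str
    weight: float  # signed: > 0 favours sending the firefighters in, < 0 opposes

@dataclass
class MetaPreference:
    """A preference about other preferences (the ** items above)."""
    description: str
    rescale: float         # multiplier applied to the base weights it governs
    applies_to: List[str]  # descriptions of the base preferences it adjusts

def composite_judgement(prefs: List[PartialPreference],
                        metas: List[MetaPreference]) -> float:
    """Toy combination rule: rescale base weights by any applicable
    meta-preferences, then sum. The rule itself stands in for the
    reasoning process, which is also a (meta-)preference."""
    total = 0.0
    for p in prefs:
        w = p.weight
        for m in metas:
            if p.description in m.applies_to:
                w *= m.rescale
        total += w
    return total

# Entirely hypothetical weights for the Notre-Dame example:
prefs = [
    PartialPreference("artefact would be appreciated by millions", +0.6),
    PartialPreference("religious/French/historical people value it", +0.3),
    PartialPreference("firefighters accept professional risk", +0.1),
    PartialPreference("expected loss of about one young life", -1.2),
]
metas = [
    MetaPreference("beware status quo bias", rescale=0.8,
                   applies_to=["artefact would be appreciated by millions"]),
]

print(composite_judgement(prefs, metas))  # -0.32: don't send them in
```

Under these made-up numbers the total comes out negative, matching the conclusion reached above; lower the risk or raise the artefact's value and the sign can flip, just as the reasoning says.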

Simple partial preferences

In contrast, simple partial preferences would have taken the form of:

It's never worth sacrificing a human life to save a mere object with only sentimental value.

Or:

Millions have worshipped this relic; one life or ten lives are a cheap price to pay for its survival.

In these cases, a single model is created in the brain, and a single comparison is made.
