Anthropic probabilities and cost functions

I’ve claimed that anthropic probabilities like SIA and SSA don’t actually exist, or, more properly, that you need to include some details of preferences in order to get any anthropic probabilities, and thus that anthropic issues should be approached from the perspective of decision theory.

What do I mean by this? Well, informally, what are probabilities? If I said that $X$ (a very visible event) would happen with a probability $0.1$, then I would expect to see events like $X$ happen about a tenth of the time.

This makes a lot of sense. Why can’t it be transposed into anthropic situations? Well, the big problem is the “I” in “I would expect”. Who is this “I”: me, my copies, some weighted average of us all?

In non-anthropic situations, we can formalise “I would expect to see” with a cost function. Let me choose a number $q$ to be whatever I want; then, if $X$ doesn’t happen I pay a cost of $q^2$, while if it does happen, I pay a cost of $(1-q)^2$ (this is exactly equal to $(q - \mathbb{I}_X)^2$, for $\mathbb{I}_X$ the indicator function of $X$).

Then, for this cost function, I minimize my losses by setting “$q$” to be equal to my subjective opinion of the probability of $X$ (note there are many eliciting cost functions we could have used, not just the quadratic loss, but the results are the same for all of them).
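To see why, here is the standard one-line check for the quadratic loss, writing $p$ for my subjective probability of $X$ (the symbol $p$ is introduced here just for this check):

$$\mathbb{E}\big[(q-\mathbb{I}_X)^2\big] \;=\; p\,(q-1)^2 + (1-p)\,q^2 \;=\; (q-p)^2 + p(1-p),$$

and the right-hand side is smallest exactly when $q = p$.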

In the informal setting, we didn’t know how to deal with “I” when expecting future outcomes. In the formal setting, we don’t know how to aggregate the cost when multiple copies could all have to pay it.

There are two natural methods of aggregation: the first is to keep $(q - \mathbb{I}_X)^2$, as above, as the cost for every copy. Thus each copy bears the average cost of all the copies (this also allows us to generalise to situations where different copies would see different things). In this case, the probability that develops from this is SSA.

Alternatively, we could add up all the costs, giving a total cost of $N(q - \mathbb{I}_X)^2$ if there were $N$ copies (this also generalises to situations where different copies see different things). In this case, the probability that develops from this is SIA.
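As a sketch of how the two aggregation rules come apart, here is a toy scenario of my own (not from the text above): a fair coin creates $N = 9$ copies of the predictor on heads and a single copy on tails, and every copy announces the same $q$ for “heads”. Minimizing the expected average cost recovers the SSA answer of $1/2$, while minimizing the expected total cost recovers the SIA answer of $N/(N+1)$:

```python
# Toy scenario (my own illustration): a fair coin creates N copies of the
# predictor on heads and a single copy on tails.  Every copy announces the
# same number q as its "probability of heads" and pays the quadratic loss
# (q - 1)^2 if the coin was heads, q^2 if it was tails.

N = 9          # number of copies created on heads (assumed for illustration)
P_HEADS = 0.5  # objective chance of heads

def expected_average_cost(q):
    # SSA-style aggregation: each copy bears the average cost of all copies,
    # so the per-world cost is a single quadratic loss regardless of N.
    return P_HEADS * (q - 1) ** 2 + (1 - P_HEADS) * q ** 2

def expected_total_cost(q):
    # SIA-style aggregation: the copies' costs are added up, so the
    # heads-world loss is weighted by the number of copies N.
    return P_HEADS * N * (q - 1) ** 2 + (1 - P_HEADS) * 1 * q ** 2

# Grid search over candidate values of q.
grid = [i / 10000 for i in range(10001)]
q_ssa = min(grid, key=expected_average_cost)
q_sia = min(grid, key=expected_total_cost)

print(f"average-cost (SSA) optimum: q = {q_ssa}")  # 0.5
print(f"total-cost   (SIA) optimum: q = {q_sia}")  # N / (N + 1) = 0.9
```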

So probability might be an estimate of what I expect to see, or a cost-minimiser for errors of prediction, but anthropic probabilities differ depending on how one extends “I” and “cost” to situations of multiple copies.