Probability is Real, and Value is Complex

(This post idea is due entirely to Scott Garrabrant, but it has been several years and he hasn’t written it up.)

In 2009, Vladimir Nesov observed that probability can be mixed up with utility in different ways while still expressing the same preferences. The observation was conceptually similar to one made by Jeffrey and Bolker in the book The Logic of Decision, so I give them intellectual priority, and refer to the result as “Jeffrey-Bolker rotation”.

Based on Nesov’s post, Scott came up with a way to represent preferences as vector-valued measures, which makes the result geometrically clear and mathematically elegant.

Vector-Valued Preferences

As usual, we think of a space of events which form a sigma algebra. Each event A has a probability P(A) and an expected utility V(A) associated with it. However, rather than dealing with V directly, we define Q(A) = P(A)·V(A). Vladimir Nesov called Q “shouldness”, but that’s fairly meaningless. Since it is graphed on the y-axis, represents utility times probability, and is otherwise fairly meaningless, a good name for it is “up”. Here is a graph of probability and upness for some events, represented as vectors:

(The post title is a pun on the fact that this looks like the complex plane: events are complex numbers with real component P and imaginary component Q. However, it is better to think of this as a generic 2D vector space rather than the complex plane specifically.)

If we assume A and B are mutually exclusive events (that is, A ∩ B = ∅), then calculating the P and Q of their union is simple. The probability of the union of two mutually exclusive events is just the sum:

P(A ∪ B) = P(A) + P(B)

The expected utility is the weighted sum of the component parts, normalized by the sum of the probabilities:

V(A ∪ B) = (V(A)·P(A) + V(B)·P(B)) / (P(A) + P(B))

The numerator is just the sum of the shouldnesses, and the denominator is just the probability of the union:

V(A ∪ B) = (Q(A) + Q(B)) / P(A ∪ B)

But, we can multiply both sides by the denominator to get a relationship on shouldness alone:

Q(A ∪ B) = Q(A) + Q(B)

Thus, we know that both coordinates of A ∪ B are simply the sums of the component parts. This means the union of disjoint events is vector addition in our vector space, as illustrated in my diagram earlier.
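As a sanity check, this is easy to sketch in a few lines of Python (the Event class and names here are mine, purely illustrative):

```python
# Events as (P, Q) vectors, with Q = P * V ("upness").
class Event:
    def __init__(self, p, v):
        self.p = p       # probability
        self.q = p * v   # probability times utility

    @property
    def v(self):
        # Expected utility recovered as the slope Q/P.
        return self.q / self.p

def union(a, b):
    """Union of two disjoint events: plain vector addition of (P, Q)."""
    out = Event(0, 0)
    out.p = a.p + b.p
    out.q = a.q + b.q
    return out

a = Event(0.25, 1.0)    # P=.25, V=+1   -> Q=.25
b = Event(0.25, -0.5)   # P=.25, V=-.5  -> Q=-.125
c = union(a, b)
print(c.p, c.q, c.v)    # 0.5 0.125 0.25
```

The recovered V = .25 agrees with the weighted-average formula: (1·.25 + (−.5)·.25) / .5 = .25.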

Linear Transformations

When we represent preferences in a vector space, it is natural to think of them as basis-independent: the way we drew the axes was arbitrary; all that matters is the system of preferences being represented. What this ends up meaning is that we don’t care about linear transformations of the space, so long as the preferences don’t get reflected (which reverses the preference represented). This is a generalization of the usual “utility is unique up to affine transformations with positive coefficient”: utility is no longer unique in that way, but the combination of probability and utility is unique up to non-reflecting linear transformations.

Let’s look at that visually. Multiplying all the expected utilities by a positive constant doesn’t change anything:

Adding a constant to expected utility doesn’t change anything:

Slightly weird, but not too weird… multiplying all the probabilities by a positive constant (and likewise Q, since Q is V·P) doesn’t change anything (meaning we don’t care whether probabilities are normalized):

Here’s the really new transformation, which combines with the others to create all the valid transformations. The Jeffrey-Bolker rotation changes which parts of our preferences are represented in probabilities vs. which in utilities:

Let’s pause for a bit on this one, since it is really the whole point of the setup. What does it mean to rotate our vector-valued measure?
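Before working an example, it may help to see that all four kinds of transformation are just 2×2 matrices acting on (P, Q) vectors. A minimal sketch, with function names of my own invention:

```python
import math

def apply(m, v):
    """Apply a 2x2 matrix (rows (a, b), (c, d)) to a (P, Q) vector."""
    (a, b), (c, d) = m
    p, q = v
    return (a * p + b * q, c * p + d * q)

def scale_utility(k):      # multiply every utility by k > 0
    return ((1, 0), (0, k))

def shift_utility(c):      # add c to every utility: Q = V*P becomes Q + c*P
    return ((1, 0), (c, 1))

def scale_probability(k):  # multiply every probability by k > 0 (Q scales too)
    return ((k, 0), (0, k))

def jeffrey_bolker(theta): # the rotation, angle in radians
    return ((math.cos(theta), -math.sin(theta)),
            (math.sin(theta),  math.cos(theta)))

# Example: shifting utility by +1 turns (P=.5, Q=.125), i.e. V=.25,
# into (P=.5, Q=.625), i.e. V=1.25.
print(apply(shift_utility(1.0), (0.5, 0.125)))
```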

A simple example: suppose that we can take a left path or a right path. There are two possible worlds, which are equally probable. In Left World, the left path leads to a golden city overflowing with wealth and charity, which we would like to go to: V=+1. The right path leads to dangerous badlands full of bandits, which we would like to avoid: V=-1. On the other hand, Right World (so named because we would prefer to go right in this world) has a somewhat nice village on the right path, V=+.5, and a somewhat nasty swamp on the left, V=-.5. Supposing that we are (strangely enough) uncertain about which path we take, we calculate the events as follows:

  • Go left in left-world:

    • P=.25

    • V=1

    • Q=.25

  • Go left in right-world:

    • P=.25

    • V=-.5

    • Q=-.125

  • Go right in left-world:

    • P=.25

    • V=-1

    • Q=-.25

  • Go right in right-world:

    • P=.25

    • V=.5

    • Q=.125

  • Go left (union of the two left-going cases):

    • P=.5

    • Q=.125

    • V=Q/P=.25

  • Go right:

    • P=.5

    • Q=-.125

    • V=Q/P=-.25

We can calculate the V of each action and take the best. So, in this case, we sensibly decide to go left, since the Left World is more impactful to us and both are equally probable.

Now, let’s rotate 30°. (Hopefully I get the math right here.)

  • Left in L-world:

    • P=.09

    • Q=.34

    • V=3.7

  • Left in R-world:

    • P=.28

    • Q=.02

    • V=.06

  • Right in L-world:

    • P=.34

    • Q=-.09

    • V=-.26

  • Right in R-world:

    • P=.15

    • Q=.23

    • V=1.5

  • Left overall:

    • P=.37

    • Q=.36

    • V=.97

  • Right overall:

    • P=.49

    • Q=.14

    • V=.29

Now, it looks like going left is evidence for being in R-world, and going right is evidence for being in L-world! The disparity between the worlds has also gotten larger; L-world now has a difference of almost 4 utility between the two paths, rather than 2. R-world now evaluates both paths as positive, with a difference between the two of only about 1.4. Also note that our probabilities have stopped summing to one (but as mentioned already, this doesn’t matter much; we could normalize the probabilities if we want).
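The rotated numbers can be reproduced by applying the ordinary rotation matrix to each (P, Q) vector; a quick check in Python (rounding to two decimal places):

```python
import math

def rotate(p, q, degrees):
    """Jeffrey-Bolker rotation of a (P, Q) vector by the given angle."""
    t = math.radians(degrees)
    return (p * math.cos(t) - q * math.sin(t),
            p * math.sin(t) + q * math.cos(t))

events = [
    ("left in L-world",  0.25,  0.25),
    ("left in R-world",  0.25, -0.125),
    ("right in L-world", 0.25, -0.25),
    ("right in R-world", 0.25,  0.125),
    ("left overall",     0.5,   0.125),
    ("right overall",    0.5,  -0.125),
]
for name, p, q in events:
    p2, q2 = rotate(p, q, 30)
    print(f"{name}: P={p2:.2f}, Q={q2:.2f}, V={q2 / p2:.2f}")
```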

In any case, the final decision is exactly the same, as we expect. I don’t have a good intuitive explanation of what the agent is thinking, but roughly, the decreased control the agent has over the situation, due to the correlation between its actions and which world it is in, seems to be compensated for by the more extreme payoff differences in L-world.

Rational Preferences

Alright, so preferences can be represented as vector-valued measures in two dimensions. Does that mean arbitrary vector-valued measures in two dimensions can be interpreted as preferences?


The restriction that probabilities be non-negative means that events can only appear in quadrants I and IV of the graph. We want to state this in a basis-independent way, though, since it is unnatural to have a preferred basis in a vector space. One way to state the requirement is that there must be a line passing through the point (0,0), such that all of the events are strictly to one side of the line, except perhaps events at the point (0,0) itself:

As illustrated, there may be a single such line, or there may be multiple, depending on how closely the preferences hug the (0,0) point. The normal vector of this line (drawn in red) can be interpreted as the probability dimension, if you want to pull out probabilities in a way which guarantees that they are non-negative. There may be a unique direction corresponding to probability, and there may not. Since V = Q/P, the slope of an event’s vector is its utility; so we get a unique probability direction if and only if we have events with both arbitrarily high utilities and arbitrarily low ones. So, Jeffrey-Bolker rotation is intrinsically tied up in the question of whether utilities are bounded.

Actually, Scott prefers a different condition on vector-valued measures: that they have a unique (0,0) event. This allows for either infinite positive utilities (not merely unbounded, but infinite), or infinite negative utilities, but not both. I find this less natural. (Note that we have to have an empty event in our sigma algebra, and it has to get value (0,0) as a basic fact of vector-valued measures. Whether any other event is allowed to have that value is another question.)

How do we use vector-valued preferences to optimize? The expected value of a vector is its slope, V = Q/P. This runs into trouble for probability-zero events, though, which we may create as we rotate. Instead, we can prefer events which are less clockwise:

(Note, however, that the preference of a (0,0) event is undefined.)

This gives the same answers for positive-x-value events, but keeps making sense as we rotate into other quadrants. “More clockwise” and “less clockwise” always make sense as notions, since we assumed that the vectors all stay to one side of some line; we can’t spin around in a full circle looking for the best option, because we will hit the separating line. This allows us to define a preference relation based on the angle of A’s vector being within 180° of B’s.
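The less-clockwise comparison can be computed without ever dividing by P, using the 2D cross product; a sketch, under my own convention that prefer(a, b) holds when a’s angle is counterclockwise of b’s within the 180° window the separating line guarantees:

```python
def prefer(a, b):
    """True if event a (a (P, Q) pair) is preferred to b, i.e. less clockwise.

    The cross product pb*qa - pa*qb is positive exactly when a is
    counterclockwise of b by less than 180 degrees.
    """
    pa, qa = a
    pb, qb = b
    return pb * qa - pa * qb > 0

# Agrees with comparing slopes V = Q/P when both probabilities are positive:
print(prefer((0.5, 0.125), (0.5, -0.125)))   # going left beats going right
# Still well-defined when a probability is zero:
print(prefer((0.0, 0.125), (0.5, 0.125)))
```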


This is a fun picture of how probabilities and utilities relate to each other. It suggests that the two are inextricably intertwined, and meaningless in isolation. Viewing them in this way makes it somewhat more natural to think that probabilities are more like a “caring measure” expressing how much the agent cares about how things go in particular worlds, rather than subjective approximations of an objective “magical reality fluid” which determines what worlds are experienced. (See here for an example of this debate.) More practically, it gives a nice tool for visualizing the Jeffrey-Bolker rotation, which helps us think about preference relations which are representable via multiple different belief distributions.

A downside of this framework is that it requires agents to be able to express a preference between any two events, which might be a little absurd. Let me know if you figure out how to connect this to complete-class style foundations, which only require agents to have preferences over things they can control.