We go to the trouble of defining sa-measures because it's possible to add an sa-measure to an a-measure and get another a-measure where the expectation values of all the functions go up, even though the a-measure you land at would be impossible to reach by adding an a-measure to an a-measure.
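To make that concrete, here's a minimal sketch on a one-point space (so a measure is just a single number), writing a-measures as pairs (m, b) with expectation m(f) + b, and taking the defining property of an sa-measure to be that the measure component may go negative so long as m(f) + b ≥ 0 for every f in [0,1]:

```python
# Toy sketch on a one-point space: a (signed) measure is just a number,
# and the expectation of f in [0,1] under a pair (m, b) is m*f + b.

def expectation(m, b, f):
    return m * f + b

M = (1.0, 0.0)   # a-measure: unit mass, b = 0
s = (-0.5, 0.5)  # sa-measure: -0.5*f + 0.5 >= 0 for every f in [0, 1]

new = (M[0] + s[0], M[1] + s[1])  # (0.5, 0.5): the measure part is still >= 0,
                                  # so this is again an a-measure

# Every expectation (weakly) went up: 0.5*f + 0.5 >= f whenever f <= 1.
for f in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert expectation(*new, f) >= expectation(*M, f)

# But new - M = (-0.5, 0.5) has a negative measure component, so `new` is
# unreachable from M by adding an a-measure (those only add mass, never remove it).
```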
Basically, we’ve gotta use sa-measures for a clean formulation of “we added all the points we possibly could to this set”, getting the canonical set in your equivalence class.
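In the same toy setting, "adding all the points we possibly could" is just the Minkowski sum with the cone of sa-measures, and swapping in the cone of a-measures would miss points like the one constructed above. A sketch, same toy conventions as before:

```python
# Same one-point toy setting: a pair (m, b), expectation of f is m*f + b.

def is_sa_measure(m, b):
    # m may be negative, but every expectation of f in [0,1] must stay >= 0;
    # the worst cases are f = 0 (needs b >= 0) and f = 1 (needs m + b >= 0).
    return b >= 0 and m + b >= 0

def is_a_measure(m, b):
    return m >= 0 and b >= 0

def in_completion(point, base_set, cone):
    # Upper completion = Minkowski sum of base_set with the chosen cone:
    # point is in it iff point = base + c for some base in base_set, c in cone.
    return any(cone(point[0] - m, point[1] - b) for (m, b) in base_set)

base = [(1.0, 0.0)]
print(in_completion((0.5, 0.5), base, is_sa_measure))  # True
print(in_completion((0.5, 0.5), base, is_a_measure))   # False: needs negative mass
```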
Admittedly, you could intersect with the cone of a-measures again at the end (as we do in the next post), but then you wouldn't get the nice LF-duality (Legendre-Fenchel duality) tie-in.
Adding the cone of a-measures instead would correspond to being able to take expectation values of continuous functions in [0,∞) instead of in [0,1], so I guess you could reformulate things this way, but IIRC the 0-1 normalization doesn't work as well (i.e., there's no motive for picking 1 as the thing you renormalize to 1, instead of, say, renormalizing 10 to 10). We've got a candidate alternative normalization for that case, but I remember being convinced that it doesn't work for belief functions, and for the Nirvana-as-1-reward-forever case I remember getting really confused about the relative advantages of the two normalizations. And apparently, when working on the internal logic of infradistributions, this version of things works better.
So, basically, if you drop sa-measures from consideration, you don't get the nice LF-duality tie-in and you don't have a nice way to express how upper completion works. Maybe you could work with a-measures, take the upper completion w.r.t. a different cone, and get a slightly different LF-duality, but then normalization would have to work differently, and we haven't really cleaned up the picture of how normalization works in that case or how it interacts with everything else. I remember Vanessa and I switched our opinions like 3 times regarding which upper completion to use, as we kept running across stuff and going "wait, no, I take back my old opinion, the other one works better with this".