If we wanted to be super proper, then preferences should have as objects maximally specific ways the world could be, including the whole history and future of the universe, down to the last detail. Decision theory involving anything more coarse-grained than that is just a useful approximation.
If X is better than Y, that alone gives no guidance on whether a 40% chance of X is better or worse than a 60% chance of Y. Preference over probability distributions holds strictly more data than preference over pure outcomes. Almost anything (such as actions-in-context) can in principle be given the semantics of a probability distribution, or of an event in some space of maximally specific or partial outcomes, so preference over probability distributions more plausibly holds sufficient data.
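To make the first point concrete, here is a minimal sketch (the utility numbers and outcome labels are illustrative assumptions, not anything from the discussion): two utility functions that agree on the ordinal ranking of pure outcomes, X over Y, yet disagree about the lottery comparison, so outcome-level preference alone underdetermines choice between distributions.

```python
# Two utility functions with the same ordinal ranking over pure outcomes
# (X > Y > nothing) that nonetheless disagree on a lottery comparison.
# Utility numbers are illustrative assumptions.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

u1 = {"X": 1.0, "Y": 0.5, "nothing": 0.0}
u2 = {"X": 1.0, "Y": 0.9, "nothing": 0.0}

lottery_40_X = {"X": 0.4, "nothing": 0.6}  # 40% chance of X, else nothing
lottery_60_Y = {"Y": 0.6, "nothing": 0.4}  # 60% chance of Y, else nothing

for name, u in [("u1", u1), ("u2", u2)]:
    eu_x = expected_utility(lottery_40_X, u)
    eu_y = expected_utility(lottery_60_Y, u)
    print(f"{name}: EU(40% X) = {eu_x:.2f}, EU(60% Y) = {eu_y:.2f}")

# u1: EU(40% X) = 0.40, EU(60% Y) = 0.30  -> prefers the 40% X lottery
# u2: EU(40% X) = 0.40, EU(60% Y) = 0.54  -> prefers the 60% Y lottery
```

Both functions say X is better than Y, but the lottery verdict flips depending on the cardinal values, which is exactly the extra data that preference over distributions carries and preference over pure outcomes does not.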
Yeah, you’re correct—I shouldn’t have conflated “outcomes” (things utilities are non-derivatively assigned to) with “objects of preference.” Thanks for this.