Except you get this result by making up probabilities rather than arriving at them through any rational process. This has been discussed here many times before, including in the Sequences and very recently. Downvoted.
I disagree that the above is not a new contribution to thought on this. The issue at stake has to do with restricting the set of permissible utility functions. If we have a probability measure induced by our empirical observations, then it does no good, from a rationalist standpoint, to allow utility functions that are non-summable or non-integrable with respect to that probability measure.
This example shows one such case. Suppose Nature hands me a probability distribution over a sequence of events, P(X_n) = 2^{-n}. Then there is a meta-probability assignment over the space of utility functions I could assign to the events X_n, and it involves the resulting expectations; you can think of it as something like a Dirichlet distribution.
It makes no sense to speak of utility functions that aren't in L^1(problem domain) (respectively, \ell^1(problem domain)) under the probability measure you believe describes the situation.
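To make the summability point concrete, here is a minimal sketch (the helper and the sample utilities are mine, purely for illustration): under P(X_n) = 2^{-n}, a utility like u(n) = n has a finite expectation, while u(n) = 2^n is the St. Petersburg pattern, unbounded utility growing exactly fast enough to cancel the probabilities.

```python
# Partial sums of E[u] = sum_n u(n) * 2**-n under P(X_n) = 2**-n.
# Illustrative only; the utility choices below are hypothetical.

def partial_expectation(u, terms=60):
    """Sum u(n) * 2**-n over the first `terms` events."""
    return sum(u(n) * 2.0 ** -n for n in range(1, terms + 1))

print(partial_expectation(lambda n: n))         # converges: sum n/2^n = 2
print(partial_expectation(lambda n: 2.0 ** n))  # diverges: every term is 1, so the sum grows with `terms`
```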
I think Pascal’s mugging suffers from this issue. For any valid probability distribution over the number of lives at stake, I can construct utility functions for valuing lives that yield arbitrarily different decisions. In reality, though, you can’t decouple the choice of a “permissible” utility function from the very processes that yield your knowledge or model of the probability distribution over lives threatened.
I could go get some evidence about probability of lives threatened, then internally reflect on how I should choose to assign value to lives, then compute joint probability distributions over both the threatened lives and all my different options for utility functions on the space of threatened lives, then internally reflect on how to value joint configurations of (threatened lives, utility functions over spaces of threatened lives), then compute joint probabilities over the 2-tuple consisting of ((threatened lives, utility functions over threatened lives), utility functions over 2-tuples of (threatened lives, utility functions over threatened lives)), and so on ad infinitum.
At some point, because brains have finite computing resources and (at least human brains) have a machine epsilon, I just have to stop this recursive computation, draw a line in the sand, accept some conditional probabilities at some deep ply of the computation, and then integrate my way back down to the decision of choosing a utility function. A toy version of this truncation is sketched below.
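Here is that toy version, under assumptions I am adding purely for illustration: the candidate utility functions, the meta-rule for re-weighting them, and MAX_PLY are all made up. The point is only the shape of the computation: reflect ply by ply, stop at a fixed depth, and integrate back down.

```python
# Toy truncated regress: weights over candidate utility functions at ply k
# come from reflection at ply k+1; at MAX_PLY we draw the line in the sand,
# accept uniform weights, and integrate back down. Everything here is
# hypothetical -- a shape, not a theory.

MAX_PLY = 4

outcome_probs = {"few": 0.999, "many": 0.001}   # evidence about lives threatened
lives = {"few": 2, "many": 10 ** 6}
candidates = {
    "linear": lambda n: n,
    "capped": lambda n: min(n, 100),
}

def expected_utility(u):
    return sum(p * u(lives[o]) for o, p in outcome_probs.items())

def weights(ply):
    if ply >= MAX_PLY:
        # finite resources: stop reflecting, accept flat weights
        return {name: 1 / len(candidates) for name in candidates}
    deeper = weights(ply + 1)   # reflect one level further first
    # made-up meta-rule: damp candidates with extreme deeper-weighted scores
    raw = {name: 1.0 / (1.0 + deeper[name] * abs(expected_utility(u)))
           for name, u in candidates.items()}
    total = sum(raw.values())
    return {name: r / total for name, r in raw.items()}

print(weights(0))   # the weights actually used to pick a utility function
```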
Nothing stops me from choosing a utility function that, when coupled with the probabilities Nature gives me, causes my expectation to fail to be summable (integrable). I could, after all, act like The Ultimate Pessimist and assign a utility of -\infty to every outcome. More realistically, I could choose a utility function shaped like a Cauchy distribution. But in the landscape of meta-goals, or even just of correspondence between utility functions and reality, this would be bad for me. How can I decide which bets to accept if Nature hands me an improper uniform prior over a set of outcomes, and I choose a Cauchy-shaped distribution of personal utility over that set? The expectation fails to even exist in that scenario, for the same reason a Cauchy distribution has no mean: the defining integral is not absolutely convergent. Hence, scalar multiples of Cauchy distributions don’t make much sense viewed as potential utility functions.
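The standard fact behind this is worth seeing numerically: because a Cauchy distribution has no mean, running averages of Cauchy samples never settle down. A quick illustration (mine, not from the thread):

```python
import math
import random

random.seed(0)

def cauchy_draw():
    """Standard Cauchy sample via the inverse-CDF method."""
    return math.tan(math.pi * (random.random() - 0.5))

total = 0.0
for i in range(1, 10 ** 6 + 1):
    total += cauchy_draw()
    if i in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
        print(i, total / i)   # the running mean wanders; no law of large numbers here
```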
The example here of conditional convergence is a very elementary one. More complicated issues of this kind arise when you think in terms of probability theory and functional analysis on the space of utility functions. But it’s a salient example nonetheless. If we choose utility functions such that the resulting expectation involves a conditionally convergent, or, worse, non-summable, series, then we can’t accept or reject bets in a way that meaningfully corresponds to our perceived actual utility. Hence, implicitly, rationalists must adopt some time-saving admissibility criteria for what sorts of functions are even allowed to be utility functions.
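For concreteness, here is the conditional-convergence failure in miniature (my own illustration): the alternating series sum_n (-1)^{n+1}/n converges to ln 2, but rearranging the very same terms changes the answer, so an "expected utility" built from such a series depends on the order of bookkeeping rather than on the bet itself.

```python
# Riemann rearrangement in miniature: same terms, different order, different sum.
terms = [(-1) ** (n + 1) / n for n in range(1, 100001)]

print(sum(terms))   # ~ ln 2 ~= 0.6931 in the usual order

# rearrange: take two positive terms for every negative one
pos = iter(t for t in terms if t > 0)
neg = iter(t for t in terms if t < 0)
total = 0.0
try:
    while True:
        total += next(pos) + next(pos) + next(neg)
except StopIteration:
    pass
print(total)        # ~ (3/2) * ln 2 ~= 1.0397, from the very same terms
```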
Getting rid of conditional convergence, and of issues of non-measurability and non-integrability, would seem like intuitively plausible first steps in forming utility functions. Similar to the way Jaynes showed that consistent formulations of belief in terms of wagers are isomorphic to probability theory, we have similar constraints on consistent use of utility functions. But as the Cauchy distribution example above shows, for utility functions the restrictions must actually be quite a bit more severe than mere summability.
The fact that this is a problem does not make anything in the post novel. In the grandparent, I linked to discussions of this problem that touched on everything that you discussed here.
I could go get some evidence about probability of lives threatened, then internally reflect on how I should choose to assign value to lives, then compute joint probability distributions over both the threatened lives and all my different options for utility functions on the space of threatened lives
Since utility functions are only unique modulo affine transforms, you can’t combine them using naive expected utility. The correct method to do so is unknown.
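A toy illustration of that point (all numbers and names here are made up): naively averaging two candidate utility functions is not invariant under rescaling one of them, even though rescaling leaves that candidate's preferences unchanged.

```python
outcomes = ["refuse_mugger", "pay_mugger"]

u1 = {"refuse_mugger": 1.0, "pay_mugger": 0.0}   # candidate utility function 1
u2 = {"refuse_mugger": 0.0, "pay_mugger": 1.0}   # candidate utility function 2
meta = {"u1": 0.6, "u2": 0.4}                    # credence over the candidates

def naive_mix(a, b):
    """Naive 'expected utility function': credence-weighted average."""
    return {o: meta["u1"] * a[o] + meta["u2"] * b[o] for o in outcomes}

print(max(outcomes, key=naive_mix(u1, u2).get))         # -> refuse_mugger

# scale u2 by 100: identical preferences, yet the recommendation flips
u2_scaled = {o: 100 * v for o, v in u2.items()}
print(max(outcomes, key=naive_mix(u1, u2_scaled).get))  # -> pay_mugger
```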
Since utility functions are only unique modulo affine transforms, you can’t combine them using naive expected utility. The correct method to do so is unknown.
I’m aware of this, but fail to see how it would change the ability to put probability distributions over the space of utility functions and then take expectations there. Sure, you’d be doing it over equivalence classes of functions, but that’s hardly any difficulty. What I am saying is that you can assign utility to choices of utility functions: utility functions must inherently be recursive in practice. And so their non-summability (or other technical difficulties) causes immediate problems.
Utility functions are not primitive. They are constructed, via the algorithm specified by vN&M (or Savage, or Anscombe & Aumann), from preferences over lotteries over outcomes. Preferences are primitive. Priors over states of nature are primitive. Utility functions are constructs. They are not arbitrary.
As has been mentioned, if you constrain preferences using one of the standard vN&M axioms, and if you assume that you can construct a lottery leading to any outcome, then you can prove that outcome utilities are bounded.
I think that the OP needs to be seen as a proposal for constraining the freedom to construct arbitrary lottery-probes. And if the constraint is properly defined, we can have an algorithm that generates unbounded utilities, but not poorly behaved ones: every expectation constructed from them is unconditionally convergent. A sketch of one such constraint follows.
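One way such a constraint might cash out (my assumption; not necessarily the OP's actual proposal): admit only utility/lottery pairs whose expectation series is absolutely convergent. That still permits unbounded utilities such as u(n) = n under P(X_n) = 2^{-n}.

```python
# Crude numeric admissibility check (hypothetical helper, illustration only):
# does sum |u(n)| * p(n) look absolutely convergent?

def looks_absolutely_convergent(u, p, terms=200, tol=1e-12):
    """Heuristic: is the tail of sum |u(n)| * p(n) already negligible?"""
    tail = sum(abs(u(n)) * p(n) for n in range(terms, 2 * terms))
    return tail < tol

p = lambda n: 2.0 ** -n
print(looks_absolutely_convergent(lambda n: n, p))         # True: unbounded u, still admissible
print(looks_absolutely_convergent(lambda n: 3.0 ** n, p))  # False: utility outruns the probabilities
```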
You had one link for changing the expected utility just to make Pascal’s mugging go away, and another that seems to be based on the same idea, but has flawed reasoning and a different conclusion.
The first link was to the comment, not the post; I disagree with the post. The proposal in the second link was qualitatively similar to yours and it failed for the same reason.