Are you sure you get to choose a specific normalized probability function? Because then you can just number your possible outcomes by integers n and set p(n) to 1/U(n) * 1/2^n, which seems too easy to have been missed.
It might be slightly more complicated in practice, since S doesn’t have to be countable if the uncountable parts can be integrated over. This doesn’t violate the linked theorem, because each term in the “sum” is not a positive real; it’s infinitesimal. But you could do pretty much the same thing to the probability densities inside any integrals.
So I suspect that the result is only proved for the case where you don’t get to choose your own probability function, and that this comes up in the paper. But I agree that certain effects can naturally give you probability and utility functions that are biased against high-utility situations, and this would be a counterexample to the result’s applicability in those special cases.
The probability function I chose meets the requirements in the paper; therefore it is a case that the theorem should apply to.
Trying to set p(n) to 1/U(n) * 1/2^n is clever, but it doesn’t work: that probability distribution is not known to sum to 1. (It would sum to 1 if U(n) = 1 for all n.)
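To make the normalization worry concrete, here is a quick sketch. The U below is a hypothetical stand-in (U(n) = n + 1); any computable U that exceeds 1 somewhere behaves the same way:

```python
# Sketch: the proposed p(n) = (1/U(n)) * 2^-n need not sum to 1.
# U here is a hypothetical stand-in, not the paper's utility function.
def U(n):
    return n + 1

def p(n):
    return (1 / U(n)) * 2 ** -n

# Partial sum over n = 1..59; the remaining tail is below 2^-59.
total = sum(p(n) for n in range(1, 60))
print(total)  # roughly 0.386 -- well short of 1
```

So without dividing through by the (U-dependent) total mass, the construction doesn’t yield a normalized distribution.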
Because then you can just number your possible outcomes by integers n and set p(n) to 1/U(n) * 1/2^n, which seems too easy to have been missed.
The reason this wouldn’t work is that what you’re calling “U(n)” can sometimes fail to be well defined (because some computation doesn’t halt), whereas p(n) must always return something.
No; the utility function is stipulated to be computable.
What Manfred is calling U(n) here corresponds to what the paper would call U(phi_n(k)).
The utility function is defined as being computable over all possible input.
phi_n(k) may not halt.
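The distinction being drawn here can be sketched as follows (the encodings are toy stand-ins, not the paper’s): U itself is total and computable, but U(phi_n(k)) inherits partiality from phi_n.

```python
# Toy sketch of the distinction (hypothetical encodings, not the paper's).
def U(outcome):
    # Total: computable on every actual outcome it is handed.
    return outcome + 1

def phi(n, k):
    """Toy 'program n run on input k': program 0 never halts."""
    if n == 0:
        while True:  # diverges -- phi(0, k) is undefined
            pass
    return n + k

# U is fine on its own, and fine composed with a halting program...
print(U(3))          # 4
print(U(phi(1, 2)))  # 4 -- phi(1, 2) halts, so the composite is defined
# ...but U(phi(0, k)) would never return, so a p(n) built from U(phi_n(k))
# is not a total function of n, even though U itself is total.
```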