And I’ve added in the other specific decisions needed to achieve this effect.
That you claim achieve that effect. But as I said, unless the choices you can make that would protect you from light injury involve less inconvenience per % reduction in risk than the choices you can make that would protect you from death, it doesn’t work.
However, I did think of something which seems to sort of achieve what you want: if you have high uncertainty about what the value of your utility function will be, then adding something to it with some probability will have a significant effect on the median value, even if the probability is significantly less than 50%. For instance, a 49% chance of death is very bad because if there’s a 49% chance you die, then the median outcome is one in which you’re alive but in a worse situation than all but 1/51 of the scenarios in which you survive. It may be that this is what you had in mind, and adding future decisions that involve uncertainty was merely a mechanism by which large uncertainty about the outcome was introduced, in which case future-you actually getting to make any choices about them was a red herring. I still don’t find this argument convincing either, though, both because it still undervalues protection against risks of losses that are large relative to the rest of your uncertainty about the value of the outcome (for instance, note that when valuing reductions in the risk of death, there is still a weird discontinuity around 50%), and because it assumes that you can’t make decisions that selectively have significant consequences only in very good or very bad outcomes (this is what I was getting at with the house insurance example).
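The 1/51 figure can be checked numerically. A quick sketch with made-up utilities (death assigned a large negative value, survival utilities drawn uniformly from [0, 1] — both assumptions purely for illustration): the overall median should land near the (0.50 − 0.49)/0.51 = 1/51 quantile of the survival outcomes.

```python
import random
import statistics

random.seed(0)

# Hypothetical utilities: death is worse than every survival outcome.
DEATH_UTILITY = -100.0
N = 1_000_000

# 49% chance of death; otherwise a survival utility drawn uniformly from [0, 1].
outcomes = [DEATH_UTILITY if random.random() < 0.49 else random.random()
            for _ in range(N)]

overall_median = statistics.median(outcomes)

# The 50th percentile of all outcomes falls at the
# (0.50 - 0.49) / 0.51 = 1/51 quantile of the survival outcomes.
print(overall_median)  # close to 1/51 ≈ 0.0196
```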
My take on this is that counterfactual decisions count as well. … And if imagination and past experiences are valid for the purpose of constructing your utility function, they should be valid for the purpose of median-maximalisation.
I don’t understand what you’re saying here. Is it that you can maximize the median value of the mean of the values of your utility function in a bunch of hypothetical scenarios? If so, that sounds kind of like Houshalter’s median of means proposal, which approaches mean maximization as the number of samples considered approaches infinity.
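For reference, the generic median-of-means estimator (a sketch of the general technique, not necessarily Houshalter’s exact proposal) partitions the samples into groups, averages each group, and takes the median of those averages; as the sample count grows the group means concentrate around the true mean, so the median of them does too. A sketch using an assumed skewed (exponential) distribution, where the plain median and the mean differ substantially:

```python
import random
import statistics

random.seed(1)

def median_of_means(samples, n_groups):
    """Split samples into n_groups equal blocks, average each, take the median."""
    k = len(samples) // n_groups
    means = [statistics.fmean(samples[i * k:(i + 1) * k]) for i in range(n_groups)]
    return statistics.median(means)

# A skewed distribution: for Exponential(1), the median (ln 2 ≈ 0.69)
# badly underestimates the mean (1).
samples = [random.expovariate(1.0) for _ in range(100_000)]

print(statistics.median(samples))      # ≈ 0.69, far from the mean
print(median_of_means(samples, 1000))  # ≈ 1, close to the true mean
```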
The observation I have is that when facing many decisions, median maximalisation tends to move close to mean maximalisation (since the central limit theorem gives convergence in distribution, the median will converge to the mean in the case of averaging repeated independent processes; but there are many other examples of this). Therefore I’m considering what happens if you add “all the decisions you can imagine making” to the set of actual decisions you expect to make. This feels like it should move the two even closer together.
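This convergence can be illustrated numerically with an assumed gamble (numbers made up for illustration): for a single skewed gamble the median and mean payoff differ a lot, but the total of many independent copies is approximately normal, hence symmetric, so its median approaches its mean.

```python
import random
import statistics

random.seed(2)

# One gamble: 90% chance of -1, 10% chance of +20 (mean +1.1, median -1).
def gamble():
    return 20.0 if random.random() < 0.1 else -1.0

def total_payoff(n_decisions):
    return sum(gamble() for _ in range(n_decisions))

for n in (1, 10, 1000):
    totals = [total_payoff(n) for _ in range(20_000)]
    # Per-decision median vs per-decision mean of the total payoff.
    print(n, statistics.median(totals) / n, statistics.fmean(totals) / n)
```

At n = 1 the median (−1) and mean (≈ 1.1) disagree sharply; by n = 1000 the per-decision median has converged to roughly the mean of 1.1.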
Ah, are you saying you should use your prior to choose a policy that maximizes your median utility, and then implement that policy, rather than updating your prior with your observations and then choosing a policy that maximizes the median? So like UDT but with medians?
It seems difficult to analyze how such a rule would actually behave, but it would likely act much more similarly to mean utility maximization than it would if you updated before choosing the policy. Both of these properties (the difficulty of analysis and the similarity to mean maximization) make it difficult to identify problems that it would perform poorly on. But this also makes it difficult to defend its alleged advantages (for instance, if it ends up being too similar to mean maximization, and if you use an unbounded utility function as you seem to insist, perhaps it pays Pascal’s mugger).
Ah, are you saying you should use your prior to choose a policy that maximizes your median utility, and then implement that policy, rather than updating your prior with your observations and then choosing a policy that maximizes the median? So like UDT but with medians?
Ouch! Sorry for not being clear. If you missed that, then you can’t have understood much of what I was saying!