I agree with your claim that vNM is in some ways too lax.
vNM is … too restrictive … [because] vNM requires you to be risk-neutral. Risk aversion violates preferences being linear in probability … Many people desperately want risk aversion, but that’s not the vNM way.
Do many people desperately want to be risk averse about the probability a given outcome will be achieved? I agree many people want to be loss averse about e.g. how many dollars they will have. Scott Garrabrant provides an example in which a couple wishes to be fair to its members by compensating for other scenarios in which things would’ve been done the husband’s way (even though those scenarios did not occur).

Scott’s example is … sort of an example of risk aversion about probabilities? I’d be interested in other examples if you have them.
I’m… pretty sure that something like the certainty effect is really important to people, and I’d count that as a type of risk aversion. Often that takes the form of violating continuity and lexically preferring options with certainty over lotteries with non-{0, 1} probabilities.
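The linearity point can be made concrete with a small sketch (the function names, utilities, and the certainty "bonus" below are my own illustration, not anything from the discussion): a vNM agent values a lottery linearly in the probability of the good outcome, so the step from 0.99 to 1.00 is worth exactly as much as the step from 0.98 to 0.99, whereas a certainty-effect agent treats that last step as special.

```python
# Illustrative sketch (hypothetical numbers): vNM expected utility is
# linear in probability, so there is no room for a "certainty premium".
def vnm_value(p, u_win=1.0, u_lose=0.0):
    """Expected utility of winning with probability p -- linear in p."""
    return p * u_win + (1 - p) * u_lose

# An agent with a certainty effect (in the spirit of the Allais paradox)
# values sure things discontinuously more -- the bonus here is made up.
def certainty_effect_value(p, u_win=1.0, u_lose=0.0, bonus=0.2):
    base = p * u_win + (1 - p) * u_lose
    return base + bonus if p == 1.0 else base

# Under vNM, 0.98 -> 0.99 and 0.99 -> 1.00 are equally valuable steps.
step_a = vnm_value(0.99) - vnm_value(0.98)
step_b = vnm_value(1.00) - vnm_value(0.99)
assert abs(step_a - step_b) < 1e-9

# The certainty-effect agent values the final step far more -- exactly
# the jump that violates continuity/linearity in probability.
last_step = certainty_effect_value(1.00) - certainty_effect_value(0.99)
assert last_step > step_b
```

Lexically preferring certainty, as described above, is the limiting case where that bonus dominates any finite difference in expected utility.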
The issue may also partially lie with Bayesianism, where you can never update to (or away from) certainty that you actually have got The Good Thing, Here (or avoided That Bad Thing since it’s definitely Not Here).
And that can also connect to some of the lack of green in optimizers, because they can never be sure that they have actually got The Good Thing (being certain that at least one paperclip is right here, for real, at least for now). Instead they strive to update ever closer to that certainty, since under vNM each further increment of probability still buys more expected utility.
Humans and animals, on the other hand, have a mode where they sometimes either round the probability up to 1 (or down to 0) or act as if there is no extra marginal utility from further increasing the probability of the Good Thing. So (I think) they perform mild optimization by default.
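One way to picture that rounding mode (again a purely hypothetical sketch; the thresholds and names are mine): an agent that treats sufficiently high probabilities as 1 and sufficiently low ones as 0 gains nothing from pushing 0.95 toward 0.99999, and so only optimizes mildly.

```python
# Hypothetical "mild optimizer": round high probabilities up to 1 and
# low ones down to 0, so marginal certainty past a threshold is worthless
# -- unlike a vNM maximizer, for whom it always buys more utility.
def mild_value(p, u_win=1.0, hi=0.95, lo=0.05):
    """Value of a lottery under probability rounding (thresholds made up)."""
    if p >= hi:
        return u_win       # rounded up: "I definitely have the Good Thing"
    if p <= lo:
        return 0.0         # rounded down: "it's definitely Not Here"
    return p * u_win       # in between, behave like expected utility

# Past the threshold, extra probability buys nothing, so the agent has
# no incentive to keep squeezing 0.95 toward certainty.
assert mild_value(0.96) == mild_value(0.99999) == 1.0
assert mild_value(0.50) == 0.5
assert mild_value(0.01) == 0.0
```

The "no marginally increasing utility" variant would instead cap the value function above the threshold rather than round the probability, but it produces the same behavioral signature: optimization pressure that fades out instead of growing without bound.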