> I think the Allais argument against Independence doesn’t really work. [...] But $0-with-extra-disappointment is a different outcome to $0, so those preferences don’t violate Independence!
I strongly agree. It’s worth emphasizing that people optimize (partially) for their own emotions, and choices that seem irrational when this consideration is neglected can be rational once it is taken into account.
That said, an Allais-like argument might still work.
Let’s imagine a different hypothetical choice:
In situation one, you choose between:
Gamble A, which is a certainty that a charity you value will be given one million euros.
Gamble B, which is an 89% chance of one million, a 10% chance of five million, and a 1% chance of nothing going to the same charity. In any case, it is certain that you will never find out which of these has occurred.
In situation two, you choose between:
Gamble C, which is an 11% chance of one million and an 89% chance of nothing going to the same charity; again, in any case, you will never find out which of these has occurred.
Gamble D, which is a 10% chance of five million and a 90% chance of nothing going to the same charity; again, in any case, you will never find out which of these has occurred.
This has a similar structure to the original Allais choice, but the 1% risk of feeling disappointment from choosing option B is gone, because you will never find out.
If people still choose A over B and D over C here, then I think we could conclude that people violate Independence. This is an empirical question; has a study like this ever been done?
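To spell out why (this is just the standard Allais decomposition, restated for these four gambles): writing $[x]$ for the lottery that gives the charity $x$ for certain, and $Z$ for the sub-lottery that gives five million with probability $10/11$ and nothing with probability $1/11$, we have

$$
\begin{aligned}
A &= 0.89\,[1\text{M}] + 0.11\,[1\text{M}], &\qquad B &= 0.89\,[1\text{M}] + 0.11\,Z,\\
C &= 0.89\,[0] + 0.11\,[1\text{M}], &\qquad D &= 0.89\,[0] + 0.11\,Z.
\end{aligned}
$$

Independence says that mixing the same component into both options cannot flip a preference, so $A \succ B \iff [1\text{M}] \succ Z \iff C \succ D$. Choosing A in situation one and D in situation two therefore genuinely contradicts Independence, with no disappointment loophole left.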
(I leave open the question of whether this would be a mark against Independence or a mark against people’s instinctive decision-making.)
I can see a problem with the way I phrased the question here. I wanted an example of something that a person would value and want to make happen, but which they might plausibly never find out about. I wasn’t imagining a specific charity; I was thinking of something roughly linear in good done per euro donated, which would mean something large that is adequately funded but far from saturated, so that each marginal euro does about the same amount of good. Yet a respondent could imagine a specific charity when answering the question, and the circumstances of that charity would shape their utility-vs-money curve. The question could then end up measuring a feature of someone’s contextual utility-vs-money curve instead of their reaction to risk.
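To make that worry precise, here is a sketch under the assumption that the respondent maximizes the expected value of some utility function $u$ over money received by the charity, normalized so that $u(0) = 0$:

$$
A \succ B \;\iff\; u(1\text{M}) > 0.89\,u(1\text{M}) + 0.10\,u(5\text{M}) \;\iff\; 0.11\,u(1\text{M}) > 0.10\,u(5\text{M}) \;\iff\; C \succ D.
$$

Both choices turn on the same inequality, so under expected utility the shape of the curve alone cannot produce the A-and-D pattern. But the curve does determine how close the inequality is to equality: a respondent imagining a charity for which $0.10\,u(5\text{M}) \approx 0.11\,u(1\text{M})$ would be near indifference on both questions, and noisy answers near indifference could masquerade as a violation.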
I just need an example of something that’s really good, and something else that’s five times as good, both of which a person might never find out about. Maybe we could use lives saved (strangers’ lives, and you’ll never find out who), but people could have weird moral intuitions about saving lives that would distort the results. (There’s a famous framing effect, the ‘Asian disease’ problem, based on exactly this.)
We could stick with the charity example and specify linearity in utility-vs-money, but that wouldn’t be a concise question, and it could be misunderstood.
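For what it’s worth, if the linearity stipulation were accepted, say $u(m) = m$, the expected values would be

$$
\mathrm{EV}(A) = 1\text{M}, \quad \mathrm{EV}(B) = 0.89 \times 1\text{M} + 0.10 \times 5\text{M} = 1.39\text{M}, \quad \mathrm{EV}(C) = 0.11\text{M}, \quad \mathrm{EV}(D) = 0.50\text{M},
$$

so a respondent who both accepted the stipulation and maximized expected value would pick B and D. Any choice of A or C would then have to reflect their reaction to risk itself rather than the shape of their money curve, which is exactly what we want to isolate.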
Does anyone have any better ideas?