Thanks for this; I've always thought this is a quite fundamental and important issue. I hope Scott Garrabrant chimes in.
Maximizing the geometric expectation makes a lot of sense when we interpret “utility” as measuring wealth or money. Losing all your wealth is obviously much worse than doubling your wealth is good. The geometric expectation accounts for this by making doubling and halving your wealth (at equal odds) cancel out in expectation.
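To make the cancellation concrete, here is a minimal sketch using the probability-weighted geometric mean as the definition of geometric expectation (my notation, not necessarily the post's): with current wealth $w$ and a fair coin paying $2w$ or $w/2$,

$$E_G[W] = \prod_i w_i^{p_i} = (2w)^{1/2} \cdot \left(\tfrac{w}{2}\right)^{1/2} = \sqrt{w^2} = w,$$

so the gamble is exactly neutral under the geometric expectation, whereas the arithmetic expectation $\tfrac{1}{2}(2w) + \tfrac{1}{2}\cdot\tfrac{w}{2} = 1.25\,w$ makes it look favorable.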
But more often, by “utility” we mean the degree of goodness or badness of an outcome (“welfare”), or how strongly we want it to be true or false (“degree of desire”). These values can arguably be both positive and negative: there seems to be no a priori lower bound on how bad an outcome can be (or how strongly we disvalue it), just as there is no upper bound on how good it can be (or how strongly we value it being true).
But the geometric expectation requires that utility be non-negative, and perhaps even strictly positive, as the problems with zero utility show; the geometric mean is usually only defined for positive numbers. (Eric Neyman also made this point a while ago.)
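To spell out why zero (and negative) utilities break this, under the same definition as above:

$$E_G[U] = \prod_i u_i^{p_i} = 0 \quad \text{whenever } u_i = 0 \text{ for some outcome with } p_i > 0,$$

no matter how good the other outcomes are; and $u_i^{p_i}$ is not even well-defined as a real number when $u_i < 0$ and $p_i$ is, say, $1/2$.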
So my current take: geometric expectation seems correct for utility as wealth; arithmetic expectation seems correct for utility as welfare, or for utilities as degrees of desire (“values”). Though I haven't yet checked how many of the issues you mention are solved by this distinction.
By the way, you should crosspost this to the EA Forum, given its obvious application to ethics. There should be a setting for that in the post options, though perhaps you need to link your EA Forum account first.