Without wishing to be facetious: how much (if any) of the post did you read? If you disagree with me, that’s fine, but I feel like I’m answering questions which I already addressed in the post!
Are you arguing that we ought to (1) assign some “goodness” values to outcomes, and then (2) maximize the geometric expectation of “goodness” resulting from our actions?
I’m not arguing that we ought to maximize the geometric expectation of “goodness” resulting from our actions. I’m exploring what it might look like if we did. In the conclusion (and indeed many other parts of the post), I’m pretty ambivalent.
But then wouldn’t any argument for (2) depend on the details of how (1) is done? For example, if “goodnesses” were logarithmic in the first place, then wouldn’t you want to use arithmetic averaging?
I don’t think so. I think you could have a preference ordering over ‘certain’ world states and then you are still left with choosing a method for deciding between lotteries where the outcome is uncertain. I describe this position in the section titled ‘Geometric Expectation Logarithmic Utility’.
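To make the distinction concrete, here is a toy sketch (the lottery and its “goodness” values are hypothetical, purely for illustration): the geometric expectation of goodness equals the exponentiated arithmetic expectation of log-goodness, which is why the question above arises, but the choice of averaging method still changes how lotteries are ranked even with a fixed assignment of goodness to certain world states.

```python
import math

# Hypothetical lottery: two equally likely world states, with
# illustrative "goodness" values assigned to each (not from the post).
goodness = [1.0, 100.0]
probs = [0.5, 0.5]

# Arithmetic expectation: sum of p_i * g_i.
arith = sum(p * g for p, g in zip(probs, goodness))

# Geometric expectation: product of g_i ** p_i, which is the same as
# exponentiating the arithmetic expectation of log-goodness.
geom = math.prod(g ** p for p, g in zip(probs, goodness))
geom_via_logs = math.exp(sum(p * math.log(g) for p, g in zip(probs, goodness)))

print(arith)          # 50.5
print(geom)           # 10.0 (up to floating point)
```

The two averaging methods can therefore rank the same lotteries differently (here 50.5 vs 10.0) even though both agree on the ordering of the certain outcomes themselves.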
Is there some description of how we should assign goodnesses in (1) without a kind of firm ground that VNM gives?
This is what philosophers of normative ethics do! People disagree on exactly how to do it, but that doesn’t stop them from trying! My post tries to be agnostic as to what exactly it is we care about and how we assign utility to different world states, since I’m focusing on the difference between averaging methods.
Thanks for pointing this out, I missed a word. I have added it now.