It isn’t clear to me that the only reason maximizing E(X) and maximizing E(log(X)) differ is that “zero is special”, even when we consider what happens in the long run. Specifically, suppose your individual bets have some nasty distribution whose tails are too fat for the variance to exist; then it needn’t be true that your performance almost always looks like its expectation.
In particular, it’s possible for log(X) to have a well-defined variance when X does not, and for E(log(X)) to be defined when E(X) is not.
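A concrete instance (my own illustrative example, not from the discussion above): take X Pareto-distributed with tail index 1, i.e. X = e^E with E exponential of rate 1. Then E(X) = ∫₁^∞ x · x⁻² dx diverges, yet log(X) is just an Exp(1) variable with mean 1 and variance 1. A quick simulation sketch:

```python
import math
import random

def pareto_moments(n=200_000, seed=0):
    """Sample X = exp(E) with E ~ Exp(1), so X is Pareto with tail index 1.
    E[X] is infinite, but log(X) ~ Exp(1) has mean 1 and variance 1."""
    rng = random.Random(seed)
    xs = [math.exp(rng.expovariate(1.0)) for _ in range(n)]
    logs = [math.log(x) for x in xs]
    mean_x = sum(xs) / n        # does not settle down: grows roughly like log(n)
    mean_log = sum(logs) / n    # converges to E[log X] = 1
    var_log = sum((lg - mean_log) ** 2 for lg in logs) / n  # converges to Var(log X) = 1
    return mean_x, mean_log, var_log
```

Running this, the sample mean and variance of log(X) stabilize near 1, while the sample mean of X keeps drifting upward as n grows, which is what an infinite expectation looks like in practice.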