We’re talking about outcomes, not mechanisms. Of course you have to design a mechanism that actually achieves the Pareto-optimal / total-utility-maximizing outcome; nobody argues that “just ask people to report their utilities” is that mechanism. This holds equally for total utilitarianism, geometric rationality, or anything else.
E.g. markets (under the assumptions of perfect competition, no transaction costs, and no information asymmetry) maximize a linearly weighted sum of utilities.
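A minimal numeric sketch of that claim, under toy assumptions I’m adding here (a two-agent, two-good Cobb–Douglas exchange economy; this is the Negishi-weights version of the first welfare theorem, where each agent’s weight is their equilibrium income, i.e. the inverse of their marginal utility of income). The parameter values are arbitrary illustrations, not anything from the thread:

```python
# Sketch: in a 2-agent, 2-good Cobb-Douglas exchange economy, the competitive
# equilibrium allocation coincides with the allocation that maximizes a
# *linear weighted* sum of utilities (Negishi weights = equilibrium incomes).
import numpy as np
from scipy.optimize import minimize

# Preferences: u_i(x, y) = a_i*ln(x) + (1 - a_i)*ln(y)  (illustrative values)
a = np.array([0.6, 0.3])           # Cobb-Douglas shares for agents 1, 2
endow = np.array([[2.0, 1.0],      # agent 1's endowment of goods (x, y)
                  [1.0, 3.0]])     # agent 2's endowment
total = endow.sum(axis=0)          # total resources per good

# --- Competitive equilibrium (closed form; price of good y normalized to 1) ---
# Cobb-Douglas demand: x_i = a_i*m_i/p_x, y_i = (1 - a_i)*m_i,
# with income m_i = p_x*endow_i[x] + endow_i[y].
# Market clearing in good x gives a linear equation for p_x.
px = (a @ endow[:, 1]) / (total[0] - a @ endow[:, 0])
m = px * endow[:, 0] + endow[:, 1]           # equilibrium incomes
eq_alloc = np.column_stack([a * m / px, (1 - a) * m])

# --- Planner's problem: maximize sum_i w_i*u_i subject to feasibility ---
# With log utility, marginal utility of income is 1/m_i, so weights w_i = m_i.
def neg_welfare(x1):               # x1 = agent 1's bundle; agent 2 gets the rest
    x = np.array([x1, total - x1])
    u = a * np.log(x[:, 0]) + (1 - a) * np.log(x[:, 1])
    return -(m @ u)

res = minimize(neg_welfare, x0=total / 2,
               bounds=[(1e-6, t - 1e-6) for t in total])
planner_alloc = np.array([res.x, total - res.x])

print("competitive equilibrium:\n", eq_alloc)
print("weighted-welfare optimum:\n", planner_alloc)
```

With these numbers both allocations come out the same (agent 1 gets (1.8, 1.2), agent 2 gets (1.2, 2.8), up to solver tolerance), which is exactly the “markets maximize a linear weighted utility” point: the market picks out *some* set of weights, determined by endowments, not any particular ethically motivated one.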
This doesn’t solve the problem of the incentive to lie about (or strategically change) one’s utility function in order to move the group equilibrium, does it?