# AK

• I’m not sure about the first case:

> if you don’t have a VNM utility function, you risk being mugged by wandering Bayesians

I don’t see why this is true. While “VNM utility function ⇒ safe from wandering Bayesians” holds, it’s not clear to me that “no VNM utility function ⇒ vulnerable to wandering Bayesians” does. I think the vulnerability to wandering Bayesians comes from failing to satisfy Transitivity rather than from failing to satisfy Completeness, though I have not done the math on that.
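For concreteness, here’s a toy sketch of the Transitivity failure I have in mind (the outcomes A/B/C and the one-cent fee are invented for illustration): an agent with cyclic strict preferences will pay to trade around the cycle, ending up where it started but poorer.

```python
# Toy money pump against an agent with cyclic (intransitive) strict
# preferences: A > B, B > C, C > A.  (preferred, dispreferred) pairs:
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def will_trade(current, offered):
    """The agent pays 1 cent to swap `current` for `offered` if it
    strictly prefers `offered`."""
    return (offered, current) in prefers

holding, paid = "A", 0
for offered in ["C", "B", "A"] * 3:  # the wandering Bayesian cycles offers
    if will_trade(holding, offered):
        holding, paid = offered, paid + 1  # 1 cent per trade

print(holding, paid)  # back at "A", 9 cents poorer
```

A complete-but-intransitive agent walks the full cycle every time, which is why I suspect Transitivity, not Completeness, is the load-bearing axiom here.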

But the general point, about approximation, I like. Utility functions in game theory (decision theory?) problems normally range over only a small space of outcomes. I think completeness is an entirely safe assumption when talking about humans deciding which route to take to their destination, or what bets to make in a specified game. My question comes from the use of VNM utility in AI papers like this one: http://intelligence.org/files/FormalizingConvergentGoals.pdf, where agents have a utility function over possible states of the universe (with the restriction that the space is finite).

Is the assumption that an AGI reasoning about universe-states has a utility function an example of reasonable use, for you?

• 13 May 2018 18:47 UTC
6 points

Thanks for this response. On notation: I want world-states, A, to be specific outcomes rather than random variables. As such, u(A) is a real number, and the expectation of a real number could only be defined as itself: E[u(A)] = u(A) in all cases. I left aside all the discussion of ‘lotteries’ in the VNM Wikipedia article, though maybe I ought not to have done so.
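To spell out why expectation adds nothing here, a minimal sketch (the utility values are made up): a specific outcome is a degenerate lottery with probability 1, so its expected utility is just its utility.

```python
# Hypothetical utility assignment over two world-states.
u = {"A": 2.5, "B": 1.0}

# A specific outcome A is the degenerate lottery P(A) = 1.
lottery = {"A": 1.0}
expected = sum(p * u[x] for x, p in lottery.items())

print(expected)  # E[u(A)] = u(A) = 2.5
```

Expectation only does real work once non-degenerate lotteries over world-states enter the picture.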

I think your first two bullet points are wrong. We can’t reasonably interpret ~ as ‘the agent’s thinking doesn’t terminate’. ~ refers to indifference between two options, so if A ~ B and B ~ C, then A ~ C. Equating ‘unable to decide between two options’ with ‘two options are equally preferable’ will lead to a contradiction or a trivial case when combined with transitivity. I can cook up something more explicit if you’d like?

There’s a similar problem with ~ meaning ‘the agent chooses randomly’, provided the random choice isn’t prompted by equality of preferences.
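A toy version of the contradiction I have in mind (outcomes A/B/C are invented; this is a sketch, not the promised proof): suppose the agent strictly prefers A to B, but cannot compare C with either. Reading ‘cannot decide’ as ~ and closing under transitivity forces A ~ B, clashing with A ≻ B.

```python
# Hypothetical preferences: A strictly preferred to B; C incomparable
# with both A and B.
strict = {("A", "B")}                                   # A ≻ B
indiff = {frozenset({"A", "C"}), frozenset({"B", "C"})}  # 'undecided' read as ~

# Transitivity of ~: A ~ C and C ~ B imply A ~ B.
if frozenset({"A", "C"}) in indiff and frozenset({"B", "C"}) in indiff:
    indiff.add(frozenset({"A", "B"}))

# Contradiction: A ~ B and A ≻ B cannot both hold.
contradiction = frozenset({"A", "B"}) in indiff and ("A", "B") in strict
print(contradiction)  # True
```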

This comment has sharpened my thinking, and it would be good for me to directly prove my claims above—will edit if I get there.

# Why Universal Comparability of Utility?

13 May 2018 0:10 UTC
27 points