Very interesting! I actually started having similar thoughts about money pumps and utility functions after learning Haskell. Specifically, that you can avoid the intransitivity → money-pumpable implication if you just assume (quite reasonably) that humans’ utility functions are lazily evaluated and have side effects (i.e. are impure functions).
In other words, humans don’t instantly know the implications of their utility function for every possible decision (which would imply logical omniscience), but rather evaluate it only as the need arises; and the very act of evaluating it for a given input can change the function, so that it has a different I/O mapping on future evaluations (the impure part).
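To make the idea concrete, here is a minimal sketch (the class names, the preference table, and the "notice the cycle, then flip one preference" rule are all my own illustrative assumptions, not anything from the comment above). A pure agent with an intransitive preference table can be money-pumped indefinitely; an agent whose preferences are evaluated lazily and impurely gets pumped only until the act of evaluation has revealed the cycle, after which its I/O mapping has changed and it refuses the next trade:

```python
class PureAgent:
    """Strict and pure: an intransitive table A > B, B > C, C > A that
    evaluation never changes, so a trader can pump it indefinitely."""
    _table = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

    def prefers(self, x, y):
        return self._table[(x, y)]


class LazyImpureAgent:
    """Same latent table, but each comparison is evaluated only when posed
    (lazy) and evaluation has a side effect (impure): once the whole cycle
    has actually been evaluated, the agent revises C > A to C < A."""

    def __init__(self):
        self._latent = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
        self._evaluated = {}  # lazily filled I/O cache

    def prefers(self, x, y):
        if (x, y) not in self._evaluated:
            self._evaluated[(x, y)] = self._latent[(x, y)]
            # Side effect of evaluation: having now evaluated every pair,
            # the agent notices the intransitivity and flips one preference.
            if len(self._evaluated) == len(self._latent):
                self._evaluated[("C", "A")] = False
        return self._evaluated[(x, y)]


def run_money_pump(agent, start="B", fee=1, max_rounds=10):
    """Offer the standard cycle of trades (B->A, A->C, C->B), charging a
    fee each time the agent accepts; return total fees extracted."""
    offers = {"B": "A", "A": "C", "C": "B"}
    holding, spent = start, 0
    for _ in range(max_rounds):
        offer = offers[holding]
        if agent.prefers(offer, holding):
            holding, spent = offer, spent + fee  # agent pays to "trade up"
        else:
            break  # agent refuses; the pump stops
    return spent


# Worked example (mentally traced):
# run_money_pump(PureAgent())       -> 10  (pumped until max_rounds)
# run_money_pump(LazyImpureAgent()) -> 4   (one extra trade, then refusal)
```

This is only a toy: the specific revision rule is arbitrary, and the point is just that when evaluation itself updates the mapping, "intransitive preferences" no longer guarantees an unbounded pump.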
EY has actually said as much about morality and human values, but used the term abstract idealized dynamic.
Anyone know how badly (or if at all) the standard implications of the VNM utility axioms break down if you take away the requirement that the utility function must be strictly evaluated and pure?
Edit: Do you have a cite for that quote? I googled it and only got your post.
Economists have pointed out that technical functions (i.e. the functions which yield the “outputs” for any given resource inputs and production techniques) are also explored lazily, as it were. It’s quite likely that the existing literature on machine learning and search theory has extensively considered the implications of such exploration on the resulting behavior.
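As a sketch of what "explored lazily" might mean here (the class, the Cobb-Douglas stand-in, and the numbers are all my own illustrative assumptions): the producer doesn't know the whole input-output mapping up front, but only the outputs of the input bundles actually tried, which it remembers:

```python
class TechnicalFunction:
    """A production mapping explored lazily: the output of an input
    bundle becomes known only when the producer actually tries it."""

    def __init__(self, f):
        self._f = f          # the true mapping, opaque to the producer
        self.explored = {}   # input bundle -> observed output

    def try_inputs(self, labor, capital):
        key = (labor, capital)
        if key not in self.explored:
            # First (and only) real evaluation of this bundle.
            self.explored[key] = self._f(labor, capital)
        return self.explored[key]


# Cobb-Douglas-style stand-in, purely for illustration.
tf = TechnicalFunction(lambda labor, capital: (labor * capital) ** 0.5)
tf.try_inputs(4, 9)
# tf.explored now contains only the single bundle actually tried.
```

The explored map grows one evaluation at a time, which is the sense in which search over such a function resembles lazy evaluation of an unknown mapping.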
Beckstead’s dissertation isn’t online yet, and he asked me not to upload it.
Thanks for sharing the connections between human utility functions and programming functions.
Other works on that subject are Muehlhauser (2012) and Nielsen & Jensen (2004), both of which I cited in IEME, and also Srivastava & Schrater (2012), which was recently brought to my attention by Jacob Steinhardt.