The point of utility theory is to predict the actions of complex agents by dividing them into two layers:

1. A simple list of values
2. Complex machinery for attaining those values
The idea is that even if you can't know the details of the machinery, successful prediction may still be possible by plugging the values into your own equivalent machinery.
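That two-layer decomposition can be sketched in a few lines: the agent is summarized by a simple utility function (the values), and the predictor reuses generic optimization machinery of its own, here just a brute-force argmax. All names and the toy payoff table below are hypothetical illustrations, not anything from the original discussion.

```python
def predict_action(utility, available_actions):
    """Predict the agent's choice: the action maximizing its assumed
    utility. The 'machinery' layer is nothing but this argmax."""
    return max(available_actions, key=utility)

# Toy example: a chess-engine-like agent assumed to value material gain.
material_gain = {"trade queens": 0, "win pawn": 1, "blunder rook": -5}
predicted = predict_action(lambda a: material_gain[a], material_gain)
# predicted == "win pawn"
```

The point of the sketch is that prediction only needs the values; the optimization step is shared between predictor and agent. The failure mode described below is exactly when no short `utility` function fits the agent's behavior.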
Does this work in real life? In practice it works well for simple agents, or for complex agents in simple, narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't work for Kasparov in life. If you try to predict Kasparov's actions away from the chessboard using utility theory, the model degenerates into epicycles: every time you see him take a new action, you can write a corresponding clause into your model of his utility function, but the model has no particular predictive power.
In hindsight we shouldn’t really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.
Counter-example 1: gene-frequency maximization in biology, a tremendously simple principle with enormous explanatory power.
Counter-example 2: entropy maximization in physics, another tremendously simple principle with enormous explanatory power.
Note that both are maximization principles, exactly the kind of principle whose limitations you are arguing for.