I don’t like Dewey’s portfolio approach to utility functions. It goes something like this:
Dewey proposes a utility-function maximizer that considers all possible utility functions weighted by their probabilities: “[W]e propose uncertainty over utility functions. Instead of providing an agent one utility function up front, we provide an agent with a pool of possible utility functions and a probability distribution P such that each utility function can be assigned probability P(Uj | yx≤m) given a particular interaction history yx≤m. An agent can then calculate an expected value over possible utility functions given a particular interaction history.” He concludes that although this method solves many of the problems he raises, it still leaves many open questions; however, it should provide a direction for future work.
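As a rough sketch of what that expected-value calculation looks like (my own toy rendering, not Dewey’s formalism), the agent holds a pool of candidate utility functions, a posterior over them given the interaction history, and picks whichever action maximizes expected utility across the pool. The utility functions, actions, and probabilities below are all invented for illustration:

```python
# Toy sketch of value learning under uncertainty over utility functions.
# Not Dewey's actual formalism; the pool, posterior, and actions are made up.

def expected_utility(action, utility_pool, posterior):
    """Expected value of `action` under uncertainty over utility functions.

    utility_pool: list of callables, each mapping an action to a utility.
    posterior:    list of probabilities P(Uj | interaction history), same order.
    """
    return sum(p * u(action) for p, u in zip(posterior, utility_pool))

def choose_action(actions, utility_pool, posterior):
    # Pick the action with the highest expected utility over the pool.
    return max(actions, key=lambda a: expected_utility(a, utility_pool, posterior))

# Hypothetical example: two candidate value systems the agent is unsure between.
utility_pool = [
    lambda a: {"help": 1.0, "rest": 0.2}[a],   # U_1: values helping
    lambda a: {"help": 0.3, "rest": 0.9}[a],   # U_2: values resting
]
posterior = [0.6, 0.4]  # P(U_1 | history), P(U_2 | history)

print(choose_action(["help", "rest"], utility_pool, posterior))  # -> "help"
```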
That’s roughly how I think about my goals, and it’s definitely not very good for sustaining long-term positive relationships with people. My outwardly professed goals can change in what looks like a shift of temperament. When the probability that I can secure one goal slightly surpasses that of another, equally valuable goal, the limits of my attention kick in and I overinvest in the new goal relative to the effort it ought to be accorded in line with its utility. So a more intelligent system should either be able to profess its hierarchy of preferences (values) with greater sophistication than I can, or have split attention, unlike me.
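To make the attention point concrete, here is a toy contrast (my framing, nothing from Dewey) between winner-take-all attention, which dumps everything on whichever goal is marginally ahead, and attention allocated in proportion to each goal’s expected value; the numbers are invented:

```python
# Two near-equal goals with hypothetical expected values.
goals = {"goal_A": 0.51, "goal_B": 0.49}

# Winner-take-all: all attention goes to the marginally better goal.
winner = max(goals, key=goals.get)
winner_take_all = {g: (1.0 if g == winner else 0.0) for g in goals}

# Proportional: attention in line with each goal's share of expected value.
total = sum(goals.values())
proportional = {g: v / total for g, v in goals.items()}

print(winner_take_all)   # {'goal_A': 1.0, 'goal_B': 0.0}
print(proportional)      # {'goal_A': 0.51, 'goal_B': 0.49}
```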