What would I call an AI that optimizes strictly for my goals...A Will-AI?
One could say “Friendly towards Will.”
But the problem of nailing down your own goals seems to me much harder than the problem of negotiating goals between different people. So I don't see a problem with being vague about the target of Friendliness.
Agreed. And asking what the preference of a specific person is, represented in some formal language, seems to be a natural simplification of the problem statement — something that needs to be understood before the problem of preference aggregation can be approached.