I agree with you that, for a normal human, spending too much time and energy on overly precise calculations of outcomes, or on trying to figure out your own utility function, is not worth it. Using faster, less precise, but cheaper algorithms is often more efficient. My hunter-gatherer would indeed be wiser to find ways to be useful to the tribe (or, in a darker version of him, to manipulate it) than to ponder decision theory.
But that is a separate issue from whether a universal utility function exists or not. There can be a universal utility function for a specific human (a mathematical tool to score possible outcomes and select the most desirable ones) even while that human doesn't know it and doesn't actually use it.
But if you want to build an FAI, it then becomes important to figure out the human utility function.
And if you build an AI (even a limited one, like an AI in a game), it may be interesting to know some decision theory: whether the AI should have a utility function, whether it should be risk-averse, and so on.
Say you tell me the rules of chess and ask me to write a chess engine for a computer chess tournament: software that plays chess, with everyone running on the same hardware. Chess really has only three utility values: win > draw > loss.
What will I do?
I will start writing functions to evaluate the board position (the simplest might just sum the values of the pieces). These work a lot like utility functions, but I am going to deviate from maximizing this "utility" whenever I see fit, because this utility doesn't actually matter. I will be inventing easy-to-compute utility functions whose only purpose is to get my agent to victory. I'd do the same for myself in order to play the game: maximizing fake utilities, and violating their maximization from time to time.
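The "sum the values of the pieces" evaluation can be sketched in a few lines. The board representation and the classic 1/3/3/5/9 piece values below are my own illustrative assumptions, not anything a real engine is committed to:

```python
# A minimal sketch of a material-count evaluation. The board is a flat
# list of piece letters: uppercase for my pieces, lowercase for the
# opponent's. Piece values are the traditional 1/3/3/5/9 (king excluded).
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def evaluate(board):
    """Material balance: positive means I am ahead.

    This is exactly the kind of fake utility discussed above: it says
    nothing about checkmate, which is the only utility that matters.
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score
```

An engine maximizes this score move to move, yet any decent player happily sacrifices material, violating this "utility", whenever doing so leads to mate.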
If I am very advanced and I build an AI that is merely told the rules of chess and then plays (without having been programmed to play chess), that AI will have to invent such substitute functions itself, because it cannot evaluate the true utility of any move that is far from a win/draw/loss, and it would lose almost all of its pieces before its choices were ever driven by foresight of its own demise. This holds even if the AI runs on strongly superhuman hardware doing 10^30 FLOPS (think Dyson spheres). It will still get its merry ass handed to it even by Deep Blue (or Kasparov) if it doesn't meta-strategize and invent utility functions that lead to victory.
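To illustrate why the substitute utility carries all the weight far from terminal positions, here is a toy depth-limited minimax. The game (take 1 or 2 stones; whoever takes the last stone wins) and the deliberately clueless heuristic are stand-ins of my own, not chess:

```python
# Minimax only knows the TRUE utility (win or loss) at terminal positions;
# everywhere else it must lean on an invented substitute utility.

def true_utility(maximizing):
    # No stones left: the player to move could not take the last stone,
    # so the PREVIOUS player won. Scored from the maximizer's viewpoint.
    return -1 if maximizing else 1

def heuristic(n, maximizing):
    # A vacuous substitute utility: every non-terminal position looks
    # the same, so a depth-limited agent cannot prefer any move.
    return 0.0

def minimax(n, depth, maximizing):
    if n == 0:
        return true_utility(maximizing)
    if depth == 0:
        return heuristic(n, maximizing)
    values = [minimax(n - take, depth - 1, not maximizing)
              for take in (1, 2) if take <= n]
    return max(values) if maximizing else min(values)
```

With enough depth to reach the end of the game, `minimax(4, 10, True)` returns 1 (a forced win); cut the depth to 1 and every move scores 0.0. No amount of raw speed helps until a better substitute utility is invented.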