Say you tell me the rules of chess and ask me to write a chess engine for a computer chess tournament: software that plays chess, with all entries running on the same hardware.
Chess really has only three utility values: win > tie > loss.
What will I do?
I will start by writing functions to evaluate board positions. The simplest might just sum the values of the pieces. These work a lot like utility functions, but I am going to deviate from maximizing this "utility" whenever I see fit, because this utility doesn't actually matter. I will be inventing easy-to-compute utility functions to steer my agent toward victory. I'd do the same for myself in order to play the game: I'd have to maximize fake utilities, and violate their maximization from time to time.
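A minimal sketch of what such a substitute function might look like, in Python. The board representation here (a flat collection of piece letters, uppercase for White, lowercase for Black) is a made-up simplification, not how a real engine would store positions:

```python
# Invented, easy-to-compute proxy "utility": material balance.
# Conventional piece values; the king gets 0 since losing it ends the game
# and is handled by the real win/loss utility, not this proxy.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """Material balance from White's perspective: positive = White ahead."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# e.g. White has a queen and a pawn against Black's rook:
print(evaluate(["Q", "P", "r"]))  # 9 + 1 - 5 = 5
```

The point is exactly that this number is fake: nothing in the rules of chess says a queen is "worth" nine pawns, but the fiction is cheap to compute and correlates well enough with winning to be worth maximizing most of the time.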
If I am very advanced and I build an AI that is told the rules of chess and then plays it (without having been programmed to play chess), that AI will have to invent such substitute functions itself, for it cannot evaluate the true utility of any move that is far from a loss/win/tie, and it will lose almost all of its pieces before its choices begin to be driven by its foresight of its own demise. This will be the case even if the AI runs on strongly superhuman hardware doing 10^30 FLOPS (think Dyson spheres). It will still get its ass handed to it even by Deep Blue (or Kasparov) if it doesn't meta-strategize and invent utility functions that lead to victory.
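The structural point can be sketched as a depth-limited minimax: the real utility (win/tie/loss) is only ever consulted at terminal positions, and everywhere the search horizon falls short of a terminal position, the invented proxy has to stand in. The callback signatures (`moves`, `apply`, `is_terminal`, etc.) are hypothetical placeholders for whatever game interface you have:

```python
def minimax(state, depth, maximizing, moves, apply, is_terminal,
            true_utility, evaluate):
    """Depth-limited minimax over an abstract game.

    true_utility -- the real payoff (win/tie/loss), defined only at terminals
    evaluate     -- the invented proxy utility, used when the horizon runs out
    """
    if is_terminal(state):
        return true_utility(state)   # the only values that actually matter
    if depth == 0:
        return evaluate(state)       # fake utility fills in beyond the horizon
    results = [minimax(apply(state, m), depth - 1, not maximizing,
                       moves, apply, is_terminal, true_utility, evaluate)
               for m in moves(state)]
    return max(results) if maximizing else min(results)
```

With any realistic depth, almost every leaf of the chess game tree is non-terminal, so in practice the agent is maximizing the proxy nearly all the time; no amount of raw FLOPS changes that unless the search can reach terminal positions, which for chess it cannot.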