If you are playing a game according to certain rules and set the playing-machine to play for victory, you will get victory if you get anything at all, and the machine will not pay the slightest attention to any consideration except victory according to the rules. If you are playing a war game with a certain conventional interpretation of victory, victory will be the goal at any cost, even that of extermination of your own side, unless this condition of survival is explicitly contained in the definition of victory according to which you program the machine.
While it is always possible to ask for something other than what we really want, this possibility is most serious when the process by which we are to obtain our wish is indirect, and the degree to which we have obtained our wish is not clear until the very end. Usually we realize our wishes, insofar as we do actually realize them, by a feedback process, in which we compare the degree of attainment of intermediate goals with our anticipation of them. In this process, the feedback goes through us, and we can turn back before it is too late. If the feedback is built into a machine that cannot be inspected until the final goal is attained, the possibilities for catastrophe are greatly increased.
A goal-seeking mechanism will not necessarily seek our goals unless we design it for that purpose, and in that designing we must foresee all steps of the process for which it is designed, instead of exercising a tentative foresight which goes up to a certain point, and can be continued from that point on as new difficulties arise. The penalties for errors of foresight, great as they are now, will be enormously increased as automation comes into its full use.
-- Norbert Wiener, God and Golem, Inc., 1964
Nice, succinct statement of the Unfriendly AGI argument, and written 53 years ago!
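Wiener's war-game example maps directly onto the modern objective-misspecification worry. Here is a minimal toy sketch (not from Wiener; the plan names and numbers are made up for illustration) of a machine that simply picks whichever plan maximizes its "victory" score, with and without the survival condition written into the objective:

```python
# Toy illustration of objective misspecification. All plans and numbers
# below are hypothetical; the point is only how the scoring rule matters.

# Each candidate plan's outcome: enemy units destroyed and own survivors.
plans = {
    "cautious_advance":    {"enemy_destroyed": 40,  "own_survivors": 95},
    "all_out_assault":     {"enemy_destroyed": 90,  "own_survivors": 60},
    "mutual_annihilation": {"enemy_destroyed": 100, "own_survivors": 0},
}

def victory_score(outcome):
    """Victory 'according to the rules': only enemy losses count."""
    return outcome["enemy_destroyed"]

def victory_score_with_survival(outcome):
    """Victory with the survival of one's own side made explicit."""
    return outcome["enemy_destroyed"] + outcome["own_survivors"]

best_naive = max(plans, key=lambda p: victory_score(plans[p]))
best_safe = max(plans, key=lambda p: victory_score_with_survival(plans[p]))

print(best_naive)  # mutual_annihilation: "victory" at any cost
print(best_safe)   # all_out_assault: survival now weighs in the choice
```

With the naive score the optimizer cheerfully selects mutual annihilation; only when survival is an explicit term in the objective does losing one's own side count against a plan, which is exactly Wiener's point about what must be "contained in the definition of victory."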