[Link] Using Stories to Teach Human Values to Artificial Agents

Abstract:

Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice because human values are difficult for human programmers to exhaustively enumerate. To achieve successful value alignment, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior.

-- Using Stories to Teach Human Values to Artificial Agents

Comment by lead researcher Riedl (as quoted on Slashdot):

“The AI … runs many thousands of virtual simulations in which it tries out different things and gets rewarded every time it does an action similar to something in the story,” said Riedl, associate professor and director of the Entertainment Intelligence Lab. “Over time, the AI learns to prefer doing certain things and avoiding doing certain other things. We find that Quixote can learn how to perform a task the same way humans tend to do it. This is significant because if an AI were given the goal of simply returning home with a drug, it might steal the drug because that takes the fewest actions and uses the fewest resources. The point being that the standard metrics for success (e.g., efficiency) are not socially best.”

Quixote has not learned the lesson of “do not steal,” Riedl says, but “simply prefers to not steal after reading and emulating the stories it was provided.”
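Riedl's description amounts to reinforcement learning with a story-derived shaping reward: the agent is simulated many times, and actions that match events from example stories receive a small bonus, so behavior that imitates the stories ends up with higher learned value than shortcuts like stealing. Below is a minimal sketch of that idea in Python; the toy "pharmacy" environment, the story events, and all hyperparameters are illustrative assumptions, not the Quixote system itself.

```python
# A minimal sketch (not the authors' implementation) of the approach Riedl describes:
# tabular Q-learning in which the reward is augmented with a bonus whenever the
# agent's action matches an event seen in example stories. The toy "pharmacy"
# environment, the story events, and the hyperparameters are illustrative assumptions.

import random
from collections import defaultdict

# (state, action) pairs extracted from example stories (hypothetical).
STORY_EVENTS = {
    ("at_pharmacy", "wait_in_line"),
    ("at_counter", "pay_for_drug"),
    ("has_drug", "go_home"),
}

STORY_BONUS = 0.5            # shaping reward for imitating the stories
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1


class PharmacyEnv:
    """Toy task: return home with a drug. Stealing is the shorter path,
    but it never appears in the stories, so it earns no shaping bonus."""

    TRANSITIONS = {
        # (state, action): (next_state, environment_reward, done)
        ("at_pharmacy", "wait_in_line"): ("at_counter", 0.0, False),
        ("at_pharmacy", "steal_drug"):   ("has_drug",   0.0, False),
        ("at_counter",  "pay_for_drug"): ("has_drug",   0.0, False),
        ("has_drug",    "go_home"):      ("home",       1.0, True),
    }

    def reset(self):
        return "at_pharmacy"

    def actions(self, state):
        acts = [a for (s, a) in self.TRANSITIONS if s == state]
        return acts or ["noop"]          # terminal states get a dummy action

    def step(self, state, action):
        return self.TRANSITIONS[(state, action)]


def shaped_reward(state, action, env_reward):
    """Environment reward plus a bonus for actions similar to story events."""
    bonus = STORY_BONUS if (state, action) in STORY_EVENTS else 0.0
    return env_reward + bonus


def q_learning(env, episodes=10_000):
    """Run many simulated episodes; over time the agent prefers story-consistent actions."""
    q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            acts = env.actions(state)
            if random.random() < EPSILON:
                action = random.choice(acts)
            else:
                action = max(acts, key=lambda a: q[(state, a)])
            next_state, env_reward, done = env.step(state, action)
            target = shaped_reward(state, action, env_reward)
            if not done:
                target += GAMMA * max(q[(next_state, a)] for a in env.actions(next_state))
            q[(state, action)] += ALPHA * (target - q[(state, action)])
            state = next_state
    return q


if __name__ == "__main__":
    q = q_learning(PharmacyEnv())
    # With the story bonus, the greedy choice at the pharmacy is to wait in line
    # and pay rather than steal, even though stealing takes fewer steps.
    print(max(["wait_in_line", "steal_drug"], key=lambda a: q[("at_pharmacy", a)]))
```

In this sketch the bonus plays the role of the story-derived, value-aligned reward signal mentioned in the abstract: with the task reward alone, stealing would be the optimal shortcut, while the shaping term tilts the learned policy toward the behavior the stories exhibit, echoing Riedl's point that the agent is not taught "do not steal" but simply comes to prefer not stealing.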