A brief tutorial on preferences in AI

Preferences are important both for rationality and for Friendly AI, so preferences are a major topic of discussion on Less Wrong. We’ve discussed preferences in the context of economics and decision theory, but I think AI has a more robust set of tools for working with preferences than either economics or decision theory has, so I’d like to introduce Less Wrong to some of these tools. In particular, I think AI’s toolset for working with preferences may help us think more clearly about CEV.

In AI, we can think of working with preferences in four steps:

  1. Preference acquisition: In this step, we aim to extract preferences from a user, either by preference learning or by preference elicitation. In preference learning, preferences are inferred from data about the user's past behavior or past preferences. In preference elicitation, preferences are extracted through an interactive process with the user, e.g. a question-answer process.

  2. Preference modeling: Our next step is to express the acquired preferences mathematically, as a preference relation over pairs of choices. The properties of a preference model are important. For example, is the relation transitive? (If the model tells us that choice c1 is preferred to c2, and c2 is preferred to c3, can we conclude that c1 is preferred to c3?) And is the relation complete? (Is every choice comparable to every other choice, or are some pairs incomparable?)

  3. Preference representation: Assuming we want to capture and manipulate the user's preferences robustly, we'll next want to represent the preference model in a preference representation language.

  4. Preference reasoning: Once a user's preferences are represented in a preference representation language, we can do cool things like preference aggregation (combining the preferences of multiple agents) and preference revision (merging a user's new preferences with her old preferences). We can also perform the usual computations of decision theory, game theory, and more.

Preference Acquisition

Preference learning is typically an application of supervised machine learning (classification). Train the algorithm on a database of a user's known preferences, and it will learn a model of those preferences and predict preferences not listed in the database, including preferences over pairwise choices the user has never faced before.
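To make this concrete, here is a minimal sketch of pairwise preference learning in Python. It uses the standard reduction of pairwise preferences to binary classification over feature differences (here via scikit-learn's logistic regression); the choices, their features, and the recorded preferences are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: each choice is described by (price, quality, delivery_days).
choices = {
    "a": np.array([10.0, 3.0, 2.0]),
    "b": np.array([25.0, 5.0, 1.0]),
    "c": np.array([15.0, 4.0, 7.0]),
    "d": np.array([30.0, 2.0, 3.0]),
}
# Each recorded preference (winner, loser) means the user chose winner.
observed = [("b", "a"), ("c", "a"), ("b", "d"), ("c", "d"), ("b", "c")]

# Reduce pairwise preferences to binary classification over feature
# differences: (x - y) is labeled 1 when x is preferred, (y - x) labeled 0.
X, y = [], []
for winner, loser in observed:
    X.append(choices[winner] - choices[loser]); y.append(1)
    X.append(choices[loser] - choices[winner]); y.append(0)

model = LogisticRegression().fit(np.array(X), np.array(y))

# Predict a preference the database never recorded: a vs. d.
p = model.predict_proba([choices["a"] - choices["d"]])[0, 1]
print(f"P(user prefers a to d) = {p:.2f}")
```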

Preference elicitation involves asking a user a series of questions, and extracting their preferences from the answers they give. Chen & Pu (2004) survey some of the methods used for this.
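Here is a minimal sketch of what such a question-answer loop might look like; the queried pairs and the canned "user" are invented for illustration, and real elicitation systems like those Chen & Pu survey choose their queries far more cleverly.

```python
def elicit_preferences(pairs, ask):
    """Query the user on each pair of choices and record the answers.

    `ask` is any function mapping a pair (a, b) to "a", "b", or "=";
    here it stands in for an interactive question-answer process.
    """
    strict, indifferent = [], []
    for a, b in pairs:
        answer = ask((a, b))
        if answer == "=":
            indifferent.append((a, b))
        elif answer == "a":
            strict.append((a, b))   # a preferred to b
        else:
            strict.append((b, a))   # b preferred to a
    return strict, indifferent

# Example: a canned "user" who always prefers the cheaper option.
prices = {"bus": 3, "train": 12, "taxi": 30}
answers = elicit_preferences(
    [("bus", "train"), ("train", "taxi")],
    ask=lambda p: "a" if prices[p[0]] < prices[p[1]] else "b",
)
print(answers)  # ([('bus', 'train'), ('train', 'taxi')], [])
```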

In studying CEV, I am interested in methods built for learning a user’s utility function from inconsistent behavior (because humans make inconsistent choices). Nielsen & Jensen (2004) provided two computationally tractable algorithms which handle the problem by interpreting inconsistent behavior as random deviations from an underlying “true” utility function. As far as I know, however, nobody in AI has tried to solve the problem with an algorithm informed by the latest data from neuroeconomics on how human choice is the product of at least three valuation systems, only one of which looks anything like an “underlying true utility function.”
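Nielsen & Jensen's actual algorithms operate on influence diagrams and are too involved to reproduce here, but the core move, treating observed choices as noisy deviations from an underlying "true" utility function, can be sketched with a simple logistic (Bradley-Terry-style) choice model. Everything below, data included, is illustrative rather than their method:

```python
import numpy as np

# Observed (possibly inconsistent) pairwise choices among items 0..2.
# Note the cycle 0>1, 1>2, 2>0: no transitive ordering fits perfectly.
wins = [(0, 1), (0, 1), (1, 2), (1, 2), (2, 0)]
n_items = 3

# Noise model: P(i chosen over j) = sigmoid(u[i] - u[j]). Maximizing the
# likelihood of the observed choices treats inconsistency as random
# deviation from an underlying "true" utility function u.
u = np.zeros(n_items)
for _ in range(2000):
    grad = np.zeros(n_items)
    for i, j in wins:
        p = 1.0 / (1.0 + np.exp(u[j] - u[i]))  # P(i beats j) under model
        grad[i] += 1.0 - p
        grad[j] -= 1.0 - p
    u += 0.1 * grad  # gradient ascent on the log-likelihood

# Utilities are identifiable only up to an additive constant.
print(np.round(u - u.mean(), 2))
```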

Preference Modeling

A model of a user's preferences relates any two choices ("objects") by one of three relations: a strict preference relation, which says that one choice is preferred to the other; an indifference relation; and an incomparability relation. Chapter 2 of Kaci (2011) gives a brief account of preference modeling.
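A minimal sketch, assuming we represent the relations as explicit sets of ordered pairs, of what such a model and the transitivity and completeness checks from step 2 might look like in Python (the choices are invented):

```python
from itertools import combinations

choices = {"apple", "banana", "cherry"}
strict = {("apple", "banana"), ("banana", "cherry")}  # left is preferred
indifferent = set()  # symmetric pairs the user values equally
# Any pair appearing in neither relation counts as incomparable.

def is_transitive(strict):
    return all((a, c) in strict
               for a, b1 in strict for b2, c in strict if b1 == b2)

def is_complete(choices, strict, indifferent):
    sym = indifferent | {(b, a) for a, b in indifferent}
    return all((a, b) in strict or (b, a) in strict or (a, b) in sym
               for a, b in combinations(choices, 2))

print(is_transitive(strict))                      # False: (apple, cherry) missing
print(is_complete(choices, strict, indifferent))  # False: that pair is incomparable
```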

Preference Representation

In decision theory, a preference relation is represented by a numerical function that associates a utility value with each choice. But this may not be the best representation. The number of possible choices grows exponentially with the number of attributes that describe them (with just 20 binary attributes there are already 2^20, more than a million, possible choices), so explicitly enumerating and evaluating every choice is prohibitively time-consuming. Moreover, users can't compare all pairwise choices and evaluate how satisfactory each choice is.

Luckily, choices are often made on the basis of a set of attributes, e.g. price, color, size. A preference representation language represents partial descriptions of preferences over such attributes and uses them to rank-order the possible choices. A good preference representation language should (1) handle partial descriptions of a user's preferences, (2) represent those preferences faithfully, rank-ordering choices much as the user would if she could evaluate every pairwise comparison herself, (3) cope with possibly inconsistent preferences, and (4) offer attractive complexity properties, i.e. a low spatial cost of representing partial descriptions of preferences and a low time cost of comparing pairwise choices or computing the best choices.

One popular method of preference representation is the graphical representation language of conditional preference networks, or "CP-nets." A CP-net is a directed graph over the attributes of a choice, in which each node is annotated with a table of preferences over that attribute's values, conditional on the values of the node's parents.
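As an illustration, here is the classic "dinner" CP-net as a toy Python data structure, together with the standard forward sweep that finds an acyclic CP-net's optimal outcome by giving each variable its most preferred value given its parents. The variables and tables are invented for illustration:

```python
# A CP-net: each variable has parent variables and a conditional
# preference table (CPT) listing its values from best to worst,
# conditioned on the parents' values.
cp_net = {
    "main": {"parents": (),        "cpt": {(): ["fish", "meat"]}},
    "wine": {"parents": ("main",), "cpt": {("fish",): ["white", "red"],
                                           ("meat",): ["red", "white"]}},
}

def optimal_outcome(cp_net, order):
    """Forward sweep: in an acyclic CP-net, assigning each variable its
    most preferred value given its parents yields the best outcome."""
    outcome = {}
    for var in order:  # `order` must be a topological order of the net
        spec = cp_net[var]
        parent_vals = tuple(outcome[p] for p in spec["parents"])
        outcome[var] = spec["cpt"][parent_vals][0]  # best value
    return outcome

print(optimal_outcome(cp_net, ["main", "wine"]))
# {'main': 'fish', 'wine': 'white'}
```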

Preference Reasoning

There are many ways one might want to reason algorithmically about preferences. I point the reader to Part II of Kaci (2011) for a very incomplete overview.
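As one small example, here is a sketch of preference aggregation by pairwise majority vote over several agents' rankings; the rankings are invented, and they exhibit Condorcet's well-known paradox: the aggregate relation can be cyclic even though every individual ranking is transitive.

```python
from itertools import combinations

# Three agents' rankings, best first, chosen to produce a Condorcet cycle.
rankings = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]

def majority_aggregate(rankings):
    """Return the strict pairwise-majority preference relation."""
    items = rankings[0]  # all rankings order the same items
    aggregate = set()
    for x, y in combinations(items, 2):
        x_wins = sum(r.index(x) < r.index(y) for r in rankings)
        if x_wins * 2 > len(rankings):
            aggregate.add((x, y))
        elif x_wins * 2 < len(rankings):
            aggregate.add((y, x))
    return aggregate

print(sorted(majority_aggregate(rankings)))
# [('a', 'b'), ('b', 'c'), ('c', 'a')] -- a cycle: the majority
# preference is intransitive even though each individual ranking
# is transitive.
```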

General Sources:

Domshlak et al. (2011). Preferences in AI: An Overview. Artificial Intelligence 175: 1037-1052.

Fürnkranz & Hüllermeier (2010). Preference Learning. Springer.

Kaci (2011). Working with Preferences: Less is More. Springer.