Reading through some AI literature, I stumbled upon a nicely concise statement of the core of decision theory, from Lindley (1985):
...there is essentially only one way to reach a decision sensibly. First, the uncertainties present in the situation must be quantified in terms of values called probabilities. Second, the various consequences of the courses of action must be similarly described in terms of utilities. Third, that decision must be taken which is expected — on the basis of the calculated probabilities — to give the greatest utility. The force of ‘must’, used in three places there, is simply that any deviation from the precepts is liable to lead the decision maker into procedures which are demonstrably absurd.
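To make Lindley's three steps concrete, here is a minimal sketch in Python. All of the states, actions, probabilities, and utilities are invented for illustration; the only point is the shape of the procedure: quantify uncertainty, quantify consequences, maximize expected utility.

```python
# Step 1: quantify uncertainty as probabilities over states of the world.
probabilities = {"rain": 0.3, "no_rain": 0.7}

# Step 2: quantify the consequences of each action in each state as utilities.
utilities = {
    "take_umbrella":  {"rain": 5,   "no_rain": 3},
    "leave_umbrella": {"rain": -10, "no_rain": 6},
}

# Step 3: take the action expected to give the greatest utility.
def expected_utility(action):
    return sum(probabilities[s] * utilities[action][s] for s in probabilities)

best = max(utilities, key=expected_utility)
print(best, expected_utility(best))
# take_umbrella: 0.3*5 + 0.7*3 = 3.6, versus leave_umbrella: 0.3*-10 + 0.7*6 = 1.2
```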
Of course, maximizing expected utility has its own absurd consequences (e.g. Pascal’s Mugging), so decision theory is not yet “finished.”
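A toy version of the problem, again with made-up numbers: a mugger promises an astronomically large payoff if you hand over $5, and you assign the claim only a minuscule probability of being true. Naive expected-utility maximization still says to pay, because the huge utility swamps the tiny probability.

```python
# Pascal's Mugging sketch; every number here is invented for illustration.
p_mugger_honest = 1e-30       # tiny credence that the mugger can deliver
utility_if_honest = 1e40      # astronomically large promised payoff
cost_of_paying = 5            # utility lost by handing over the money

eu_pay = p_mugger_honest * utility_if_honest - cost_of_paying   # ~1e10
eu_refuse = 0.0

print("pay" if eu_pay > eu_refuse else "refuse")  # prints "pay"
```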