Dutch Books and Decision Theory: An Introduction to a Long Conversation

For a community that endorses Bayesian epistemology we have had surprisingly few discussions about the most famous Bayesian contribution to epistemology: the Dutch Book arguments. In this post I present the arguments, though it is far from clear what the right way to interpret them is, or even whether they prove what they set out to prove. The Dutch Book arguments attempt to justify the Bayesian approach to science and belief; I will also suggest that any successful Dutch Book defense of Bayesianism cannot be disentangled from decision theory. But mostly this post is meant to introduce people to the argument and to get people thinking about a solution. The literature is scant enough that it is plausible people here could actually make genuine progress, especially since the problem is related to decision theory.1

Bayesianism fits together. Like a well-tailored jacket it feels comfortable and looks good. It’s an appealing, functional aesthetic for those with cultivated epistemic taste. But sleekness is not a rigorous justification, and so we should ask: why must the rational agent adopt the axioms of probability as conditions on her degrees of belief? Further, why should agents accept the principle of conditionalization as a rule of inference? These are the questions the Dutch Book arguments try to answer.

The arguments begin with an assumption about the connection between degrees of belief and willingness to wager. An agent with degree of belief b in hypothesis h is assumed to be willing to pay up to and including $b for a unit wager on h, and to sell a unit wager on h for any price down to and including $b. For example, if my degree of belief that I can drink ten eggnogs without passing out is 0.3, I am willing to bet $0.30 on the proposition that I can drink the nog without passing out when the stakes of the bet are $1. Call this the Will-to-wager Assumption. As we will see, it is problematic.
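
As a concrete rendering of the assumption, here is a minimal sketch (the function and its name are mine, not part of any standard formulation) of the prices a Will-to-wager agent accepts:

```python
def wager_prices(belief: float, stakes: float = 1.00) -> tuple[float, float]:
    """(maximum buy price, minimum sell price) for a wager with the given stakes.

    Under the Will-to-wager assumption both prices equal belief * stakes: the
    agent will pay up to $b to buy a $1-stakes wager on h, and will sell the
    same wager for any price down to $b.
    """
    fair_price = belief * stakes
    return fair_price, fair_price

# Degree of belief 0.3 that I can finish ten eggnogs, at $1.00 stakes:
print(wager_prices(0.3))  # (0.3, 0.3): buy for up to $0.30, sell for $0.30 or more
```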

The Synchronic Dutch Book Argument

Now consider what happens if my degree of belief that I can finish the eggnog is 0.3 and my degree of belief that I will pass out before I finish is 0.75. Given the Will-to-wager assumption my friend can construct a series of wagers that guarantee I will lose money. He could offer me a wager where I pay $0.30 against $1.00 stakes, paying out if I finish the eggnog, and simultaneously offer me a wager where I pay $0.75 against $1.00 stakes, paying out if I pass out. Now if I down the eggnog I win $0.70 net from the first bet but lose my $0.75 stake on the second, netting me -$0.05. If I pass out I lose my $0.30 stake on the first bet but win $0.25 net from the second, again netting me -$0.05. In gambling terminology this set of lose-lose bets is called a Dutch book.

What’s cool about this is that violating the axioms of probability is a necessary and sufficient condition for degrees of belief to be susceptible to a Dutch book, as in the above example. The key fact is quite easy to see, though the reader is welcome to pursue the formal proofs: keeping degrees of belief non-negative, setting the degree of belief in the whole outcome space to 1, and making b additive over mutually exclusive hypotheses makes it impossible to construct a Dutch book, while violating any of the axioms allows the degrees of belief over an exhaustive set of mutually exclusive outcomes to sum to more or less than 1, which is exactly what a Dutch book exploits.
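
As a sanity check on that arithmetic, here is a minimal sketch (my own illustration; the labels and function are made up) that prices each $1-stakes wager at the corresponding degree of belief and tallies my net result in each possible outcome:

```python
# Incoherent degrees of belief over two exhaustive, mutually exclusive outcomes.
beliefs = {"finish the eggnog": 0.30, "pass out first": 0.75}  # sums to 1.05
stakes = 1.00

def net_payoff(true_outcome: str) -> float:
    """My total gain or loss if `true_outcome` is what actually happens."""
    total = 0.0
    for outcome, b in beliefs.items():
        price = b * stakes                              # Will-to-wager buy price
        payout = stakes if outcome == true_outcome else 0.0
        total += payout - price
    return total

for outcome in beliefs:
    print(f"{outcome}: net ${net_payoff(outcome):+.2f}")
# finish the eggnog: net $-0.05
# pass out first: net $-0.05
```

Because my two degrees of belief sum to 1.05 rather than 1, I pay $1.05 in total for a pair of wagers that returns exactly $1.00 no matter which outcome obtains.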

The Diachronic Dutch Book Argument

What about conditionalization? Why must a rational agent believe h1 at b(h1|h2) once she learns h2? For this we extend the Will-to-wager assumption so that it also governs degrees of belief in hypotheses conditional on other hypotheses. An agent with degree of belief b in hypothesis h1|h2 is assumed to be willing to wager up to and including $b on a unit wager on h1 conditional on h2: a wager that is called off (and the price refunded) if h2 turns out false, and that pays out on h1 if h2 turns out true. Say I believe with b=0.9 that I will finish ten drinks if we decide to drink cider instead of eggnog. Say I also believe with b=0.5 that we will drink cider and with b=0.5 that we will drink eggnog. But say I *don’t* update my beliefs according to the principle of conditionalization: once I learn that we will drink cider, my belief that I will finish the drinks is only b=0.7. Given the Will-to-wager assumption I accept the following wagers.

(1) An unconditional wager on h2 (that we drink cider, not eggnog) that pays $0.20 if h2 is true, priced at b(h2) × $0.20 = 0.5 × $0.20 = $0.10.

(2) A unit wager on h1 (finishing ten drinks) conditional on h2 that pays $1.00 if h1 is true, priced at b(h1|h2) × $1.00 = 0.9 × $1.00 = $0.90.

If h2 is false I lose my $0.10 on wager (1); if h2 is true I net $0.10. But now I’m looking at all that cider and not feeling so good. I decide that my degree of belief that I will finish those ten ciders is only b=0.7. So my friend buys from me an unconditional wager (3) on h1 that pays $1.00, priced at b(h1) × $1.00 = 0.7 × $1.00 = $0.70.

Then we start our drinking. If I finish the cider I net $0.10 from wager (2), which together with wager (1) puts me up $0.20, but then I lose $0.30 on wager (3) and end the day down $0.10. If I don’t finish the cider I net $0.70 from wager (3), which puts me up $0.80, but then I lose the $0.90 I paid for wager (2) and end the day down $0.10.
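
To double-check the bookkeeping, here is a small sketch (my own illustration; the function name is made up) that enumerates the ways the day can go and tallies the three wagers. The guaranteed $0.10 loss shows up in every branch:

```python
def day_result(drink_is_cider: bool, finish_drinks: bool) -> float:
    """Net dollars for the agent, given how the day turns out."""
    total = 0.0
    # Wager (1): pay $0.10 up front, receive $0.20 if we drink cider.
    total -= 0.10
    if drink_is_cider:
        total += 0.20
        # Wager (2): I paid $0.90 for $1.00 if I finish; the wager is called off
        # (price refunded) if we don't drink cider, so it only matters here.
        total -= 0.90
        if finish_drinks:
            total += 1.00
        # Wager (3): offered only after I learn it's cider and my belief drops
        # to 0.7; I sell a $1.00 wager on finishing for $0.70.
        total += 0.70
        if finish_drinks:
            total -= 1.00
    return total

for cider, finish in [(True, True), (True, False), (False, False)]:
    print(f"cider={cider}, finish={finish}: net ${day_result(cider, finish):+.2f}")
# cider=True, finish=True: net $-0.10
# cider=True, finish=False: net $-0.10
# cider=False, finish=False: net $-0.10  (only wager (1) is live)
```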

Note again that any update of the degree of belief in a hypothesis h upon learning evidence e that differs from b(h|e) leaves the agent vulnerable to a diachronic Dutch book.

The Will-to-wager Assumption or Just What Does This Prove, Anyway?

We might want to take the above arguments literally and say they show that not treating your degrees of belief like probabilities is liable to lead you into lose-lose wagers. But this would be a very dumb argument. First, there is no reason for anyone to actually make wagers in this manner: they have zero expected gain and presumably involve transaction costs, so no rational person would accept them just because the Will-to-wager assumption says she should. Second, the argument presented above uses money, and as we are all familiar, money has diminishing marginal utility; you probably shouldn’t bet $100 for a one-in-a-million shot at $100,000,000, because a hundred million dollars is probably not a million times more useful than a hundred dollars. Third, the argument assumes a rational person must want to win bets, but a person might enjoy the wager even when the odds aren’t good, or might prefer life without the money.

Nonetheless, the Will-to-wager Assumption doesn’t feel arbitrary; it just isn’t clear what it implies. There are a couple of different strategies we might pursue to improve this argument. First, we can improve the Will-to-wager assumption and the corresponding Dutch book theorems by making them about utility instead of money.

We start by defining a utility function, u: X → R, where X is the set of outcomes and R is the set of real numbers. A rational agent is one who acts to maximize expected utility according to her utility function. An agent with degree of belief b in hypothesis h is assumed to be willing to wager up to and including b utils on a one-util wager on h. As a literal ascription of willingness to wager this interpretation still doesn’t make sense, but we can think of the wagers here as general stand-ins for decisions made under uncertainty. The Will-to-wager assumption fails when taken literally because in real life we can always decline wagers. But we can treat every decision we make as a forced selection of a set of wagers from an imaginary bookie who doesn’t charge a vig and pays out in utility, whether you live or die. The bookie sometimes offers a large, perhaps infinite, selection of sets of wagers to pick from and sometimes offers only a handful. The agent can choose one and only one set at a time. Agents have little control over which wagers get offered to them, but in many cases one set will clearly be better than the others. The more an agent’s treatment of her beliefs diverges from the laws of probability, though, the more often she is going to get bilked by the imaginary bookie. In other words, the key might be to transform the Dutch Book arguments into decision theory problems. These problems would hopefully demonstrate that non-Bayesian reasoning creates a class of decision problems which the agent always answers sub-optimally or inconsistently.2
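
As one hedged illustration of that decision-theoretic recasting (the menus, labels, and numbers below are my own construction, not anything from the literature), suppose the imaginary bookie forces a choice between declining everything (menu A, which pays 0 in every world) and buying both wagers from the eggnog example (menu B). Judged one wager at a time against the agent’s own incoherent beliefs, everything in menu B looks exactly fair, yet menu B pays less than menu A in every possible world:

```python
WORLDS = ("finish", "pass_out")
beliefs = {"finish": 0.30, "pass_out": 0.75}   # incoherent: sums to 1.05
stakes = 1.00

def per_wager_value(prop: str) -> float:
    """Subjective EV of a single $1 wager on `prop`, priced at the agent's belief."""
    price = beliefs[prop] * stakes
    return beliefs[prop] * (stakes - price) + (1 - beliefs[prop]) * (-price)  # always 0

def menu_b_payoff(world: str) -> float:
    """Actual payoff of buying both wagers (menu B) in the given world."""
    return sum((stakes if prop == world else 0.0) - beliefs[prop] * stakes
               for prop in beliefs)

print([round(per_wager_value(p), 4) for p in beliefs])
# [0.0, 0.0] -> each wager looks exactly fair to the agent
print({w: round(menu_b_payoff(w), 2) for w in WORLDS})
# {'finish': -0.05, 'pass_out': -0.05} -> strictly worse than menu A's 0 in every world
```

This is just the synchronic Dutch book theorem in decision-theoretic dress: if the beliefs were coherent, a bundle of individually fair wagers would have expected value zero under those same beliefs, and so could not come out behind in every world.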

A possible downside to the above strategy is that it leaves rationality entangled with utility. There have been some attempts to rewrite the Dutch Book arguments to remove the aspects of utility and preference embedded in them. The main problem with these strategies is that they tend either to fail to remove all notions of preference3 or to introduce some kind of apparatus that already resembles probability for no particular reason.4,5 Our conception of utility is in a Goldilocks spot: it has exactly what we need to make sense of probability while also being something we’re already familiar with; we don’t have to invent it from whole cloth. We might also ask a further question: why should beliefs come in degrees at all? The fact that our utility function (insofar as humans have one) seems to consist of real numbers rather than, say, binary values might explain why: you don’t need degrees of belief if all but one possible decision always have value 0. In discussions here many of us have also been given to concluding that probability is epiphenomenal to optimal decision making. If we believe that, we’re obviously going to want a Dutch book argument that includes utility. Moreover, any successful reduction of degrees of belief to some decision-theoretic measure would benefit from a set of Dutch book arguments that leaves out degrees of belief altogether.

As you can see, I think a successful Dutch book argument will probably keep probability intertwined with decision theory, but since this is our first encounter with the topic: have at it. Use this thread to generate some hypotheses, both for decision-theoretic approaches and for approaches that leave out utility.

1 This post can also be thought of as an introduction to basic material and as a companion to “What is Bayesianism”.

2 I have some more specific ideas for how to do this, but I can’t present everything in this post and I’d like to see if others come up with similar answers. Remember: discuss a problem exhaustively before coming to a conclusion. I hope people will try to work out their own versions, here in the comments or in new posts. It is also interesting to examine what kinds of utility functions can yield Dutch books: consider, for example, what happens when the utility function is strictly deontological, where every decision assigns 1 to one option and 0 to all the others. I also worry that some of the novel decision theories suggested here might have some Dutch book issues. In cases like the Sleeping Beauty problem, where the payoff structure is underdetermined, things get weird. It looks like this is discussed in “When Betting Odds and Credences Come Apart” by Bradley and Leitgeb. I haven’t read it yet though.

3 See Howson and Urbach, “Scientific Reasoning: The Bayesian Approach”, as an example.

4 Helman, “Bayes and Beyond”.

5 For a good summary of these problems see Maher, “Depragmatizing Dutch Book Arguments”, where he refutes such attempts. Maher has his own justification for Bayesian epistemology which isn’t a Dutch Book argument (it uses a representation theorem, which I don’t really understand) and which isn’t available online as far as I can find. It was published in his book “Betting on Theories”, which I haven’t read yet. This looks pretty important, so I’ve reserved the book; if someone is looking for work to do, dig into this.