Dutch Books and Decision Theory: An Introduction to a Long Conversation

For a community that endorses Bayesian epistemology we have had surprisingly few discussions about the most famous Bayesian contribution to epistemology: the Dutch Book arguments. In this post I present the arguments, though it is far from clear what the right way to interpret them is, or even whether they prove what they set out to prove. The Dutch Book arguments attempt to justify the Bayesian approach to science and belief; I will also suggest that any successful Dutch Book defense of Bayesianism cannot be disentangled from decision theory. But mostly this post is meant to introduce people to the argument and to get people thinking about a solution. The literature is scant enough that it is plausible people here could actually make genuine progress, especially since the problem is related to decision theory.1

Bayesianism fits together. Like a well-tailored jacket it feels comfortable and looks good. It's an appealing, functional aesthetic for those with cultivated epistemic taste. But sleekness is not a rigorous justification, and so we should ask: why must the rational agent adopt the axioms of probability as conditions on her degrees of belief? Further, why should agents accept the principle of conditionalization as a rule of inference? These are the questions the Dutch Book arguments try to answer.

The arguments begin with an assumption about the connection between degrees of belief and willingness to wager. An agent with degree of belief b in hypothesis h is assumed to be willing to buy a unit wager on h at any price up to and including $b, and to sell a unit wager on h at any price down to and including $b. For example, if my degree of belief that I can drink ten eggnogs without passing out is 0.3, I am willing to bet $0.30 on the proposition that I can drink the nog without passing out when the stakes of the bet are $1. Call this the Will-to-wager Assumption. As we will see, it is problematic.
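If it helps to see why $b is the boundary, here is a minimal sketch in Python (the function name and numbers are mine, purely illustrative): at a price of exactly $b the wager has zero expected value by the agent's own lights, so $b is the most she will pay.

```python
# A minimal sketch of the Will-to-wager assumption; names are illustrative.
# A unit wager on h pays the stake if h is true and nothing otherwise.
def expected_gain_of_buying(belief, price, stake=1.00):
    """Expected net gain, by the agent's own lights, of buying the wager."""
    return belief * stake - price

# The eggnog example: degree of belief 0.3, $1 stakes.
print(round(expected_gain_of_buying(0.3, 0.30), 2))  # 0.0  -> indifferent
print(round(expected_gain_of_buying(0.3, 0.25), 2))  # 0.05 -> willing to buy
print(round(expected_gain_of_buying(0.3, 0.35), 2))  # -0.05 -> unwilling
```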

The Synchronic Dutch Book Argument

Now consider what happens if my degree of belief that I can drink the eggnog is 0.3 and my degree of belief that I will pass out before I finish is 0.75. Given the Will-to-wager assumption, my friend can construct a series of wagers that guarantee I will lose money. My friend could offer me a wager where I pay $0.30 for $1.00 stakes if I finish the eggnog. He could simultaneously offer me a bet where I pay $0.75 for $1.00 stakes if I pass out. Now if I down the eggnog I win $0.70 from the first bet but lose $0.75 from the second bet, netting me -$0.05. If I pass out I lose the $0.30 from the first bet but win $0.25 from the second bet, netting me -$0.05. In gambling terminology these lose-lose bets are called a Dutch book. What's cool about this is that violating the axioms of probability is a necessary and sufficient condition for degrees of belief to be susceptible to Dutch books, as in the above example. This is quite easy to see, but the reader is welcome to pursue formal proofs: representing degrees of belief with non-negative numbers, setting b(all outcomes) = 1, and making b additive makes it impossible to construct a Dutch book. A violation of any axiom allows the sum of all b in the sample space to be greater than or less than 1, enabling a Dutch book.
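A quick sanity check of that arithmetic, sketched in Python with the numbers from the example:

```python
# Tallying the synchronic Dutch book above: the agent buys both wagers at
# her stated prices and loses $0.05 no matter how the drinking goes.
def net_payoff(finish):
    price_finish, price_pass_out, stake = 0.30, 0.75, 1.00
    wager_on_finishing = (stake if finish else 0.0) - price_finish
    wager_on_passing_out = (stake if not finish else 0.0) - price_pass_out
    return round(wager_on_finishing + wager_on_passing_out, 2)

print(net_payoff(finish=True))   # -0.05
print(net_payoff(finish=False))  # -0.05
```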

The Diachronic Dutch Book Argument

What about conditionalization? Why must a rational agent believe h1 at b(h1|h2) once she learns h2? For this we update the Will-to-wager assumption to have it govern degrees of belief in hypotheses conditional on other hypotheses. An agent with degree of belief b in hypothesis h1|h2 is assumed to be willing to wager up to and including $b in a unit wager on h1 conditional on h2. This is a wager that is called off, with the price refunded, if h2 turns out false, but pays out as a normal wager on h1 if h2 turns out true. Say I believe with b = 0.9 that I will finish ten drinks if we decide to drink cider instead of eggnog. Say I also believe with b = 0.5 that we will drink cider and b = 0.5 that we will drink eggnog. But say I *don't* update my beliefs according to the principle of conditionalization: once I learn that we will drink cider, my belief that I will finish the drinks is only b = 0.7. Given the Will-to-wager assumption I accept the following wagers.

(1) An unconditional wager on h2 (that we drink cider, not eggnog) that pays $0.20 if h2 is true, priced at b(h2) × $0.20 = 0.5 × $0.20 = $0.10.

(2) A unit wager on h1 (finishing ten drinks) conditional on h2 that pays $1.00, priced at b(h1|h2) × $1.00 = 0.9 × $1.00 = $0.90.

If h2 is false I lose $0.10 on wager (1), and wager (2) is called off. If h2 is true I win $0.10. But now I'm looking at all that cider and not feeling so good. I decide that my degree of belief that I will finish those ten ciders is only b = 0.7. So my friend buys from me an unconditional wager (3) on h1 that pays $1.00, priced at b(h1) × $1.00 = 0.7 × $1.00 = $0.70.

Then we start our drinking. If I finish the cider I gain $0.10 from wager (2), which puts me up $0.20, but then I lose $0.30 on wager (3) and I'm down $0.10 on the day. If I don't finish the cider I keep the $0.70 from wager (3), which puts me at $0.80 until I count the $0.90 I paid for wager (2), leaving me at -$0.10 on the day. Either way I lose $0.10, just as I do if h2 turns out false.
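Here is a sketch in Python tallying all three wagers across every possible outcome; the numbers are the ones from the example, and the bookkeeping is just one way to organize it:

```python
# Tallying the diachronic Dutch book: whatever we drink and however the
# drinking goes, the agent ends the day down $0.10.
# Wager (1): pays $0.20 if cider, bought for $0.10.
# Wager (2): pays $1.00 if I finish, bought for $0.90, refunded if not cider.
# Wager (3): pays $1.00 if I finish, sold for $0.70 after learning it's cider.
def net_payoff(cider, finish):
    total = (0.20 if cider else 0.0) - 0.10            # wager (1)
    if cider:
        total += (1.00 if finish else 0.0) - 0.90      # wager (2) is live
        total += 0.70 - (1.00 if finish else 0.0)      # wager (3), sold
    return round(total, 2)

for cider, finish in [(True, True), (True, False), (False, False)]:
    print(f"cider={cider}, finish={finish}: {net_payoff(cider, finish)}")
# cider=True, finish=True: -0.1
# cider=True, finish=False: -0.1
# cider=False, finish=False: -0.1
```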

Note again that any update to the degree of belief in a hypothesis h upon learning evidence e that doesn't equal b(h|e) leaves the agent vulnerable to a diachronic Dutch book.

The Will-to-wager Assumption, or Just What Does This Prove, Anyway?

We might want to take the above arguments literally and say they show that not treating your degrees of belief like probabilities is liable to lead you into lose-lose wagers. But this would be a very dumb argument: there is no reason for anyone to actually make wagers in this manner. These are wagers which have zero expected gain and which presumably involve transaction costs. No rational person would make wagers as the Will-to-wager assumption describes. Second, the argument presented above uses money, and as we are all familiar, money has diminishing returns. You probably shouldn't bet $100 for a one-in-a-million shot at $100,000,000, because a hundred million dollars is probably not a million times more useful than a hundred dollars. Third, the argument assumes a rational person must want to win bets. A person might enjoy the wager even if the odds aren't good, or might prefer life without the money.

Nonetheless, the Will-to-wager Assumption doesn't feel arbitrary; it just isn't clear what it implies. There are a couple of different strategies we might pursue to improve this argument. First, we can improve the Will-to-wager assumption and the corresponding Dutch book theorems by making them about utility instead of money.

We start by defining a utility function, υ: X → ℝ, where X is the set of outcomes and ℝ is the set of real numbers. A rational agent is one that acts to maximize expected utility according to her utility function. An agent with degree of belief b in hypothesis h is assumed to be willing to wager up to and including b utils in a one-util wager on h. As a literal ascription of willingness to wager this interpretation still doesn't make sense. But we can think of the wagers here as general stand-ins for decisions made under uncertainty. The Will-to-wager assumption fails to work when taken literally because in real life we can always decline wagers. But we can take every decision we make as a forced selection of a set of wagers from an imaginary bookie that doesn't charge a vig and pays out in utility, whether you live or die. The bookie sometimes offers a large, perhaps infinite selection of sets of wagers to pick from and sometimes offers only a handful. The agent can choose one and only one set at a time. Agents have little control over what wagers get offered to them, but in many cases one set will clearly be better than the others. The more an agent's treatment of her beliefs diverges from the laws of probability, though, the more often she's going to get bilked by the imaginary bookie. In other words, the key might be to transform the Dutch Book arguments into decision theory problems. These problems would hopefully demonstrate that non-Bayesian reasoning creates a class of decision problems which the agent always answers sub-optimally or inconsistently.2
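One way to make the bilking concrete, as a minimal sketch in Python with numbers borrowed from the eggnog example (this is an illustration of the idea, not a standard formalism): an agent who prices unit wagers at her degrees of belief will pay 1.05 utils for a bundle of wagers on two mutually exclusive, exhaustive propositions, even though exactly one of them can pay out, making the bundle worth exactly 1 util in every outcome.

```python
# A sketch of the forced-wager framing; names and numbers are illustrative.
# The bookie offers one unit wager on each of two mutually exclusive,
# exhaustive propositions. Exactly one proposition comes true, so the
# bundle pays exactly 1 util in every outcome. An agent who prices each
# wager at her degree of belief pays the sum of her beliefs for the bundle.
def bundle_price(beliefs):
    """What the agent will pay, in utils, for the whole bundle."""
    return sum(beliefs.values())

coherent = {"finish": 0.30, "pass_out": 0.70}    # sums to 1.00
incoherent = {"finish": 0.30, "pass_out": 0.75}  # sums to 1.05

# The bundle is worth exactly 1.00 util whatever happens.
print(round(bundle_price(coherent), 2))    # 1.0  -> fair price, no sure loss
print(round(bundle_price(incoherent), 2))  # 1.05 -> sure loss of 0.05 utils
```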

A possible downside to the above strategy is that it leaves rationality entangled with utility. There have been some attempts to rewrite the Dutch Book arguments to remove the aspects of utility and preference embedded in them. The main problem with these strategies is that they tend either to fail to remove all notions of preference3 or to introduce some kind of apparatus that already resembles probability for no particular reason.4,5 Our conception of utility is in a Goldilocks spot: it has exactly what we need to make sense of probability while also being something we're familiar with; we don't have to invent it whole cloth. We might also ask a further question: why should beliefs come in degrees? The fact that our utility function (insofar as humans have one) seems to consist of real numbers and isn't binary (for example) might explain why. You don't need degrees of belief if all but one possible decision are always of value 0. In discussions here many of us have also been given to concluding that probability is epiphenomenal to optimum decision making. Obviously if we believe that, we're going to want a Dutch book argument that includes utility. Moreover, any successful reduction of degrees of belief to some decision-theoretic measure would benefit from a set of Dutch book arguments that left out degrees of belief altogether.

As you can see, I think a successful Dutch book will probably keep probability intertwined with decision theory, but since this is our first encounter with the topic: have at it. Use this thread to generate some hypotheses, both for decision-theoretic approaches and approaches that leave out utility.

1 This post can also be thought of as an introduction to basic material and a post accompanying "What is Bayesianism".

2 I have some more specific ideas for how to do this, but I can't present everything in this post and I'd like to see if others come up with similar answers. Remember: discuss a problem exhaustively before coming to a conclusion. I hope people will try to work out their own versions, here in the comments or in new posts. It is also interesting to examine what kinds of utility functions can yield Dutch books: consider what happens, for example, when the utility function is strictly deontological, where every decision consists of a 1 for one option and a 0 for all the others. I also worry that some of the novel decision theories suggested here might have some Dutch book issues. In cases like the Sleeping Beauty problem, where the payoff structure is underdetermined, things get weird. It looks like this is discussed in "When Betting Odds and Credences Come Apart" by Bradley and Leitgeb, though I haven't read it yet.

3 See Howson and Urbach, "Scientific Reasoning: The Bayesian Approach", as an example.

4 Helman, "Bayes and Beyond".

5 For a good summary of these problems see Maher, "Depragmatizing Dutch Book Arguments", where he refutes such attempts. Maher has his own justification for Bayesian epistemology which isn't a Dutch Book argument (it uses a representation theorem, which I don't really understand) and which isn't available online that I can find. It was published in his book "Betting on Theories", which I haven't read yet. This looks pretty important, so I've reserved the book; if someone is looking for work to do, dig into this.