# Bet or update: fixing the will-to-wager assumption

(Warning: completely obvious reasoning that I’m only posting because I haven’t seen it spelled out anywhere.)

Some people say, expanding on an idea of de Finetti, that Bayesian rational agents should offer two-sided bets based on their beliefs. For example, if you think a coin is fair, you should be willing to offer anyone a 50:50 bet on heads (or tails) for a penny. Jack called it the “will-to-wager assumption” here and I don’t know a better name.

In its simplest form the assumption is false, even for perfectly rational agents in a perfectly simple world. For example, I can give you my favorite fair coin so you can flip it and take a peek at the result. Then, even though I still believe the coin is fair, I’d be a fool to offer both sides of the wager to you, because you’d just take whichever side benefits you (since you’ve seen the result and I haven’t). That objection is not just academic: using your sincere beliefs to bet money against better informed people is a bad idea in real-world markets as well.

Then the question arises: how can we fix the assumption so it still says something sensible about rationality? I think the right fix should go something like this. If you flip a coin and peek at the result, then offer me a bet at 90:10 odds that the coin came up heads, I must either accept the bet or update toward believing that the coin indeed came up heads, with at least these odds. I don’t get to keep my 50:50 beliefs about the coin and refuse the bet at the same time. More generally, a Bayesian rational agent offered a bet (by another agent who might have more information) must either accept the bet or update their beliefs so the bet becomes unprofitable. The old obligation about offering two-sided bets on all your beliefs is obsolete; use this one from now on. It should also come in handy in living room Bayesian scuffles: throwing some money on the table and saying “bet or update!” has a nice ring to it.
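The rule is easy to state as a decision procedure. Here is a minimal Python sketch (the stakes and beliefs are made-up numbers, not anything from the post): taking the tails side of a 90:10 offer on heads is profitable exactly when your P(heads) is below 0.9, so refusing it is only consistent if you’ve updated to at least those odds.

```python
def bet_or_update(p_heads, stake_heads, stake_tails):
    """Someone who peeked offers stake_heads:stake_tails odds that the coin
    came up heads.  Taking the tails side wins stake_heads on tails and
    loses stake_tails on heads."""
    ev_tails_side = (1 - p_heads) * stake_heads - p_heads * stake_tails
    implied_p = stake_heads / (stake_heads + stake_tails)
    if ev_tails_side > 0:
        return "accept"  # the bet is profitable at your current belief
    return f"update: P(heads) >= {implied_p:.2f}"  # refusing implies this belief

print(bet_or_update(0.5, 90, 10))   # keeping 50:50 beliefs means you must accept
print(bet_or_update(0.95, 90, 10))  # refusing is consistent only after updating
```

The point of the sketch is that "refuse while keeping P(heads) = 0.5" is not one of the reachable outcomes.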

What do you think?

• This corresponds with what people actually do. For example, when Stephen Diamond said on Overcoming Bias that there was a 99% chance that Clinton would win, I said, ok, I’ll pay you \$10 if Clinton wins and you can pay me \$1,000 if Trump wins. He said no, that’s just a break-even point, so there’s no motive to take the bet. I said fine, \$10 - \$500. He refused again. And obviously that was precisely because he realized those odds were absurd. So he in fact updated. But insisting, “you have to admit that you updated,” is just a status game. If you just offer the bet, and they refuse, that is enough. They will update. You don’t have to get them to admit it.

• I don’t think not believing in one’s probability is the only reason to avoid betting. There’s also a lot of physical resistance for many people.

Even if he believed in the odds it would be very irrational to take your bet. He would get better odds on PredictIt, and PredictIt is likely a more trustworthy third party to pay him in case he wins the bet.

• Yeah, if they refuse the bet that means they probably updated (or weren’t trying to be rational to begin with).

• Or don’t have \$500.

• True, but then they should counteroffer something they can afford, like \$1 to \$50, since they should be eager to rake in the “free money”.

• It is possible that you may update in the direction of something which makes the bet unprofitable, but which doesn’t lead to more credence in the proposition which the bet was originally offered to prove. For instance, you may update in the direction of the bet being a scam in a way which you haven’t managed to figure out.

• I really like this post, and am very glad to see it! Nice work.

I’ll pay whatever cost I need to for violating non-usefulness-of-comments norms in order to say this—an upvote didn’t seem like enough.

• Thank you!

• Yes, definitely. There is something about the presence of other agents with differing beliefs that changes the structure of the mathematics in a deep way.

P(X) is somehow very different from P(X | another agent is willing to take the bet).

How about using a “bet” against the universe instead of other agents? This is easily concretized by talking about data compression. If I do something stupid and assign probabilities badly, then I suffer from increased codelengths as a result, and vice versa. But nobody else gains or loses because of my success or failure.
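The codelength framing can be made concrete with a log scoring rule. A rough sketch (my numbers, just to illustrate): an outcome you assigned probability p costs −log₂(p) bits to encode, and under a fair coin your expected codelength is minimized exactly when you report 50%, so badly assigned probabilities punish you with no counterparty needed.

```python
import math

def codelength_bits(p_assigned):
    # Bits needed to encode an outcome you assigned probability p to.
    return -math.log2(p_assigned)

def expected_codelength(q, p_true=0.5):
    # Expected bits per flip of a coin with true heads rate p_true,
    # if you build your code around an assigned heads probability q.
    return p_true * codelength_bits(q) + (1 - p_true) * codelength_bits(1 - q)

print(expected_codelength(0.5))  # honest 50%: exactly 1 bit per flip
print(expected_codelength(0.3))  # a badly assigned 30%: strictly more
```

This is the usual "log score = compression" correspondence: you bet against the universe and your losses are paid in bits.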

• I think the idea in the post works for all bets including those offered by smart agents, stupid agents, and nature.

• I don’t understand the “must accept” thing at all. There are obvious considerations like the fact that utility is not linear with money and that risk tolerance is a factor. There are other considerations as well, for example, going meta and thinking about the uncertainty of uncertainty—e.g. when I say that the probability of X is 50%, I can be very certain of that estimate, or I can be very uncertain.

• At the scale of living room bets, risk aversion is not a factor, because even a small amount of risk aversion around \$100 stakes would imply crazy high risk aversion at larger stakes. It grows exponentially; see this post by Stuart. Most people use risk aversion (diminishing marginal utility of money) as an excuse for loss aversion, which is straight-up irrational.
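The blow-up is easy to demonstrate with a wealth-independent (CARA) utility function; the specific stakes below are mine, chosen only to illustrate the shape of the argument. An agent just barely risk-averse enough to refuse a lose-\$100/win-\$110 living room coin flip already refuses a coin flip that risks \$1,000 to win \$15,000:

```python
import math

def rejects(gain, loss, a):
    # CARA utility u(x) = -exp(-a*x): risk aversion independent of wealth.
    # True if the agent turns down a 50/50 bet: win `gain`, lose `loss`.
    u = lambda x: -math.exp(-a * x)
    return 0.5 * u(gain) + 0.5 * u(-loss) < u(0)

# Smallest risk aversion (on a crude grid) that refuses win-$110/lose-$100.
a = 0.0
while not rejects(110, 100, a):
    a += 1e-5

print(rejects(15000, 1000, a))  # True: also refuses win-$15,000/lose-$1,000
```

This is the same style of argument as Rabin’s calibration result: diminishing marginal utility that bites at \$100 stakes forces absurd behavior at larger stakes.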

As to your second objection, Bayesians don’t believe in meta-uncertainty; their willingness to bet is represented by one number, which is their uncertainty (a.k.a. their probability).

• At the scale of living room bets, risk aversion is not a factor

You’re right about this strictly speaking, but liquidity constraints can result in the same practical outcome as risk aversion, and these are definitely relevant “on the margin”. I could be willing to take a \$10 - \$500 bet in the abstract, but if that requires me to borrow the \$500 should I lose (for an extra \$300 cost, say), it’s no longer rational for me to take that side of the bet! It would have to be a \$10 - \$200 bet or something, but obviously that creates a bid-ask spread which translates to an “imprecise” elicitation of probabilities. The ‘proper’ fix is to make the stakes small enough that liquidity too becomes a negligible factor—but a 5¢ - \$2.50 bet is, um, not very exciting, and fixed transaction costs might make the bet infeasible again!
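A quick expected-value check (with a made-up 98.5% credence) shows how the borrowing cost creates the spread: the \$10 - \$500 bet is worth taking with cash on hand, but not if losing means raising the \$500 at an extra \$300 cost.

```python
def ev(p_win, win, lose, borrow_cost=0):
    # Expected value of taking the bet; borrow_cost is only paid on a loss
    # (the assumed cost of raising stake money you don't have on hand).
    return p_win * win - (1 - p_win) * (lose + borrow_cost)

print(ev(0.985, 10, 500))       # positive: take the bet
print(ev(0.985, 10, 500, 300))  # negative: liquidity makes the same bet bad
```

Any credence between the two break-even points (50:1 and 80:1 here) refuses the bet without implying anything about beliefs, which is exactly the bid-ask spread.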

• Good point. Though if I were betting against you, I’d just offer to let you pay the \$500 sometime in the next month or three. It’s the same as borrowing money, but the cost is low enough that the bid-ask spread should stay small.

• At the scale of living room bets

So, “as long as it doesn’t matter”? Why should I care about bets which don’t matter?

By the way, risk aversion is NOT at all the same thing as the diminishing marginal utility of money.

Bayesians don’t believe in meta-uncertainty

Why not?

• The only way for a Bayesian rational agent to be risk averse is via diminishing marginal utility of money, I think. As for meta-uncertainty, this post by Shalizi (who’s critical of Bayesianism) is a good starting point.

• The only way for a Bayesian rational agent to be risk averse is via diminishing marginal utility of money, I think.

Why in the world would it be so?

“Bayesian” generally means that you interpret probability as subjective and that you have priors and update them on the basis of evidence. How does risk aversion or lack thereof fall out of this?

meta-uncertainty

You don’t think that things like hyperparameters and hyperpriors are meta-uncertainty?

But even on a basic level, let’s try this. You have two coins. Coin 1 you have flipped a couple of thousand times, recorded the results, and, as expected, it’s a fair coin: the frequency of heads is very close to 50%. I give you Coin 2 which you’ve never seen before, but it looks like a normal coin.

Absolutely the same thing? Really?

• Bayesian rationality is also about decision making, not just beliefs. Usually people take it to mean expected utility maximization. Just assume my post said that instead.

My betting behavior w.r.t. the next coinflip is indeed the same for the two coins. My probability distributions over longer sequences of coinflips are different between the two coins. For example, P(10th flip is heads | first 9 are heads) is 1/2 for the first coin and close to 1 for the second coin. You can describe it as uncertainty over a hidden parameter, but you can make the same decisions without it, using only probabilities over sequences. The kind of meta-uncertainty you seem to want, that gets you out of uncomfortable bets, doesn’t exist for Bayesians.
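Both claims are easy to check with a beta-binomial model (the prior counts below are my stand-ins: a heavily observed coin versus a uniform prior over the heads rate):

```python
from fractions import Fraction

def p_next_heads(alpha, beta, heads_seen=0, tails_seen=0):
    # Beta(alpha, beta) belief about the heads rate; posterior predictive
    # probability that the next flip comes up heads.
    return Fraction(alpha + heads_seen, alpha + beta + heads_seen + tails_seen)

coin1 = (2000, 2000)  # flipped a couple of thousand times, very nearly fair
coin2 = (1, 1)        # never seen before: uniform prior over the heads rate

print(p_next_heads(*coin1), p_next_heads(*coin2))  # both exactly 1/2

# After nine heads in a row the predictions diverge, as described above:
print(float(p_next_heads(*coin1, heads_seen=9)))  # still about 0.50
print(float(p_next_heads(*coin2, heads_seen=9)))  # 10/11, approaching 1
```

Same single-flip probability, different distributions over sequences; the "hidden parameter" version and the sequence version make identical predictions.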

• expected utility maximization

You are just rearranging the problem without solving it. Can my utility function include risk aversion? If it can, we’re back to square one: a risk-averse Bayesian rational agent.

And that’s even besides the observation that being Bayesian and being committed to expected utility maximization are orthogonal things.

The kind of meta-uncertainty you seem to want, that gets you out of uncomfortable bets, doesn’t exist for Bayesians.

I have no need for something that can get me out of uncomfortable bets since I’m perfectly fine with not betting at all. What I want is a representation for probability that is richer than a simple scalar.

In my hypothetical the two 50% probabilities are different. I want to express the difference between them. There are no sequences involved.

• Can my utility function include risk aversion?

That would be missing the point. The vNM theorem says that if you have preferences over “lotteries” (probability distributions over outcomes; like, 20% chance of winning \$5 and 80% chance of winning \$10) that satisfy the axioms, then your decision-making can be represented as maximizing expected utility for some utility function over outcomes. The concept of “risk aversion” is about how you react to uncertainty (how you decide between lotteries) and is embodied in the utility function; it doesn’t apply to outcomes known with certainty. (How risk-averse are you about winning \$5?)

See “The Allais Paradox” for how this was covered in the vaunted Sequences.

In my hypothetical the two 50% probabilities are different. I want to express the difference between them. There are no sequences involved.

Obviously you’re allowed to have different beliefs about Coin 1 and Coin 2, which could be expressed in many ways. But your different beliefs about the coins don’t need to show up in your probability for a single coinflip. The reason for mentioning sequences of flips is that that’s when your beliefs about Coin 1 vs. Coin 2 would start making different predictions.

• That would be missing the point.

Would it? My interest is in constructing a framework which provides useful, insightful, and reasonably accurate models for actual human decision-making. The vNM theorem is quite useless in this respect—I don’t know what my (or other people’s) utility function is, I cannot calculate or even estimate it, a great deal of important choices can be expressed as a set of lotteries only in very awkward ways, etc. And this is even besides the fact that empirical human preferences tend not to be coherent and they change with time.

Risk aversion is an easily observable fact. Every day in financial markets people pay very large amounts of money in order to reduce their risk (for the same expected return). If you think they are all wrong, by all means, go and become rich off these misguided fools.

But your different beliefs about the coins don’t need to show up in your probability for a single coinflip.

Why not? As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it’s a bad idea? Does St. Bayes frown upon it?

• As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it’s a bad idea?

That’s right, I think it’s a bad idea: it sounds like what you actually want is a richer way to talk about your beliefs about Coin 2, but you can do that using standard probability theory, without needing to invent a new field of math from scratch.

Suppose you think Coin 2 is biased and lands heads some unknown fraction _r_ of the time. Your uncertainty about the parameter _r_ will be represented by a probability distribution: say it’s normally distributed with a mean of 0.5 and a standard deviation of 0.1. The point is, the probability of _r_ having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5. You’d have to ask a different question than “What is the probability of heads on the first flip?” if you want the answer to distinguish the two coins. For example, the probability of getting exactly _k_ heads in _n_ flips is C(_n_, _k_)(0.5)^_k_(0.5)^(_n_−_k_) for Coin 1, but (I think?) ∫₀¹ (1/√(0.02π))_e_^−((_p_−0.5)^2/0.02) C(_n_, _k_)(_p_)^_k_(1−_p_)^(_n_−_k_) _dp_ for Coin 2.
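Numerically (averaging the binomial over a bell-shaped belief about _p_ on a grid restricted to (0, 1); the grid and parameters are my choices), the two formulas agree on a single flip but diverge on runs:

```python
import math

def binom_pmf(n, k, p):
    # P(exactly k heads in n flips) for a coin with a known heads rate p.
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def coin2_pmf(n, k, mu=0.5, sigma=0.1, steps=2000):
    # Average the binomial over a bell-shaped belief about p, truncated to (0, 1).
    num = den = 0.0
    for i in range(1, steps):
        p = i / steps
        w = math.exp(-((p - mu) ** 2) / (2 * sigma ** 2))
        num += w * binom_pmf(n, k, p)
        den += w
    return num / den

print(binom_pmf(1, 1, 0.5), coin2_pmf(1, 1))       # both 0.5 for one flip
print(binom_pmf(10, 10, 0.5) < coin2_pmf(10, 10))  # True: all-heads runs are likelier for Coin 2
```

The spread-out belief makes extreme sequences more probable (Jensen’s inequality on p¹⁰), even though the single-flip probability is the same.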

Does St. Bayes frown upon it?

St. Cox probably does.

• Suppose you think Coin 2 is biased and lands heads some unknown fraction r of the time. Your uncertainty about the parameter r will be represented by a probability distribution: say it’s normally distributed with a mean of 0.5 and a standard deviation of 0.1. The point is, the probability of r having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5.

A standard approach is to use the beta distribution to represent your uncertainty over the value of r.

• but you can do that using standard probability theory

Of course I can. I can represent my beliefs about the probability as a distribution, a meta- (or a hyper-) distribution. But I’m being told that this is “meta-uncertainty” which right-thinking Bayesians are not supposed to have.

No one is talking about inventing new fields of math.

say it’s normally distributed

Clearly not, since the normal distribution goes from negative infinity to positive infinity and the probability goes merely from 0 to 1.

the probability of r having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5

That 0.5 is conditional on the distribution of r, isn’t it? That makes it not a different question at all.

Notably, if I’m risk-averse, the risk of betting on Coin 1 looks different to me from the risk of betting on Coin 2.

St. Cox probably does.

Can you elaborate? It’s not clear to me.

• But I’m being told that this is “meta-uncertainty” which right-thinking Bayesians are not supposed to have.

Hm. Maybe those people are wrong??

Clearly not since the normal distribution goes from negative infinity to positive infinity

That’s right; I should have either said “approximately”, or chosen a different distribution.

That 0.5 is conditional on the distribution of r, isn’t it? That makes it not a different question at all.

Yes, it is averaging over your distribution for _r_. Does it help if you think of probability as relative to subjective states of knowledge?

Can you elaborate?

(Attempted humorous allusion to how Cox’s theorem derives probability theory from simple axioms about how reasoning under uncertainty should work; less relevant if no one is talking about inventing new fields of math.)

• But I’m being told that this is “meta-uncertainty” which right-thinking Bayesians are not supposed to have.

Hm. Maybe those people are wrong??

Nope.

• Maybe those people are wrong?

That’s what I thought, too, and that disagreement led to this subthread.

But if we both say that we can easily talk about distributions of probabilities, we’re probably in agreement :-)

• It seems like you’ve come to an agreement, so let me ruin things by adding my own interpretation.

The coin has some propensity to come up heads. Say it will in the long run come up heads r of the time. The number r is like a probability in that it satisfies the mathematical rules of probability (in particular the rate at which the coin comes up heads plus the rate at which it comes up tails must sum to one). But it’s a physical property of the coin, not anything to do with our opinion of it. The number r is just some particular number based on the shape of the coin (and the way it’s being tossed); it doesn’t change with our knowledge of the coin. So r isn’t a “probability” in the Bayesian sense—a description of our knowledge—it’s just something out there in the world.

Now if we have some Bayesian agent who doesn’t know r, then it must have some probability distribution over it. It could also be uncertain about the weight, w, and have a probability distribution over w. The distribution over r isn’t “meta-uncertainty” because it’s a distribution over a real physical thing in the world, not over our own internal probability assignments. The probability distribution over r is conceptually the same as the one over w.

Now suppose someone is about to flip the coin again. If we knew for certain what the value of r was, we would then assign that same value as the probability of the coin coming up heads. If we don’t know for certain what r is, then we must therefore average over all values of r according to our distribution. The probability of the coin landing heads is its expected value, E(r).

Now E(r) actually is a Bayesian probability—it is our degree of belief that the coin will come up heads. This transformation from r being a physical property to E(r) being a probability is produced by the particular question that we are asking. If we had instead asked about the probability of the coin denting the floor then this would depend on the weight and would be expressed as E(f(w)) for some function f representing how probable it was that the floor got dented at each weight. We don’t need a similar f in the case of r because we were free to choose the units of r so that this was unnecessary. If we had instead let r be the average number of heads in 1000 flips then we would have had to calculate the probability as E(f(r)) using f(r) = r/1000.

But the distribution over r does give you the extra information you wanted to describe. Coin 1 would have an r distribution tightly clustered around 1/2, whereas our distribution for Coin 2 would be more spread out. But we would have E(r) = 1/2 in both cases. Then, when we see more flips of the coins, our distributions change (although our distribution for Coin 1 probably doesn’t change very much; we are already quite certain) and we might no longer have that E(r_1) = E(r_2).

• But it’s a physical property of the coin; not anything to do with our opinion of it.

Well, coin + environment, but sure, you’re making the point that r is not a random variable in the underlying reality. That’s fine; if we climb the turtles all the way down we’d find a philosophical debate about whether the universe is deterministic, and that’s not quite what we are interested in right now.

The distribution over r isn’t “meta-uncertainty” because it’s a distribution over a real physical thing in the world

I don’t think describing r as a “real physical thing” is useful in this context.

For example, we treat the outcome of each coin flip as stochastic, but you can easily make an argument that it is not, being a “real physical thing” instead, driven by deterministic physics.

For another example, it’s easy to add more meta-levels. Consider Alice forming a probability distribution of what Bob believes the probability distribution of r is...

This transformation from r being a physical property to E(r) being a probability is produced by the particular question that we are asking.

Isn’t r itself “produced by the particular question that we are asking”?

But the distribution over r does give you the extra information you wanted to describe.

Yes.

• I’m mostly interested in prescriptive rationality, and vNM is the right starting point for that (with game theory being the right next step, and more beyond, leading to MIRI’s research among other things). If you want a good descriptive alternative to vNM, check out prospect theory.

• Can my utility function include risk aversion?

Yes. There is nothing preventing you from assigning a value equal to -\$1,000 to the state of affairs, “I made a bet and lost \$100.” This would simply mean that you consider two situations equally valuable, for example one in which you have been robbed of \$1,000, and another in which you made a bet and lost \$100.

Assigning such values does nothing to prevent you from having a mathematically consistent utility function, and it does not imply any necessary violation of the vNM axioms.

• That doesn’t follow, since there’s also nothing preventing you from assigning a value equal to -\$2,000 to the state of affairs “I was robbed of \$1,000”.

• Someone who has risk aversion in Lumifer’s sense might assign a value of -\$2,000 to “I was robbed of \$1,000 because I left my door unlocked,” but they will not assign that value to “I took all reasonable precautions and was robbed anyway.” The latter is considered not as bad.

Specifically, people assign a negative value to the thought, “If only I had taken such precautions I would not have suffered this loss.” If there are no precautions they could have taken, there will be no such regret. Even if there are some precautions, if they are unusual and expensive ones, the regret will be much less, if it exists at all.

Refusing a bet is naturally an obvious precaution, so losses that result from accepting bets will be assigned high negative values in this scheme.

• The richer structure you seek for those two coins is your distribution over their probabilities. They’re both 50% likely to come up heads, given the information you have. You should be willing to make exactly the same bets about them, assuming the person offering you the bet has no more information than you do. However, if you flip each coin once and observe the results, your new probability estimates for the next flips are now different.

For example, for the second coin you might have a uniform distribution (ignorance prior) over the set of all possible probabilities. In that case, if you observe a single flip that comes up heads, your probability that the next flip will be heads is now 2/3.
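That 2/3 is Laplace’s rule of succession: with a uniform prior, after h heads in n flips the predictive probability of heads is (h+1)/(n+2). A one-line check:

```python
from fractions import Fraction

def rule_of_succession(heads, flips):
    # Uniform prior over the heads rate: P(next flip is heads | observed data).
    return Fraction(heads + 1, flips + 2)

print(rule_of_succession(1, 1))  # 2/3 after observing a single head
print(rule_of_succession(0, 0))  # 1/2 before seeing any flips
```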

• The richer structure you seek for those two coins is your distribution over their probabilities.

Yes, I understand that. This subthread started when cousin_it said

Bayesians don’t believe in meta-uncertainty

at which point I objected.

• Let’s reverse this and see if it makes more sense. Say I give you a die that looks normal, but you have no evidence about whether it’s fair. Then I offer you a two-sided bet: I’ll bet \$101 to your \$100 that it comes up odd. I’ll also offer \$101 to your \$100 that it comes up even. Assuming that transaction costs are small, you would take both bets, right?

If you had even a small reason to believe that the die was weighted towards even numbers, on the other hand, you would take one of those bets but not the other. So if you take both, you are exhibiting a probability estimate of exactly 50%, even though it is “uncertain” in the sense that it would not take much evidence to move that estimate.

• Huh? If I take both bets, there is the certain outcome of me winning \$1, and that involves no risk at all (well, other than the possibility that this die is not a die but a pun and the act of rolling it opens a transdimensional portal to the nether realm...)

• True, you’re sure to make money if you take both bets. But if you think the probability is 51% on odd rather than 50%, you get a better expected value by only taking one side.
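Concretely, with the \$101-to-\$100 stakes from the example: taking both sides locks in \$1, but at 51% on odd, the odd side alone has a higher expected value.

```python
def ev_odd_side(p_odd):
    # Bet on odd: win $101 if odd, lose $100 if even.
    return p_odd * 101 - (1 - p_odd) * 100

def ev_even_side(p_odd):
    # Bet on even: win $101 if even, lose $100 if odd.
    return (1 - p_odd) * 101 - p_odd * 100

p = 0.51
print(ev_odd_side(p) + ev_even_side(p))  # both sides: guaranteed $1
print(ev_odd_side(p))                    # odd side alone: expected about $2.51
```

So the decision to take both sides really does pin your credence to exactly 50%: at any other value, dropping one side beats the guaranteed dollar.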

• The thing is, I’m perfectly willing to accept the answer “I don’t know”. How will I bet? I will not bet.

There is a common idea that “I don’t know” necessarily implies a particular (usually uniform) distribution over all the possible values. I don’t think this is so.

• You will not bet on just one side, you mean. You already said you’ll take both bets because of the guaranteed win. But unless your credence is quite precisely 50%, you could increase your expected value over that status quo (guaranteed \$1) by choosing NOT to take one of the bets. If you still take both, or if you now decide to take neither, it seems clear that loss aversion is the reason (unless the amounts are so large that decreasing marginal value has a significant effect).

• You already said you’ll take both bets because of the guaranteed win.

From my point of view it’s not a bet—there is no uncertainty involved—I just get to collect \$1.

it seems clear that loss aversion is the reason

Not loss aversion—risk aversion. And yes, in most situations most humans are risk averse. There are exceptions—e.g. lotteries and gambling in general.

• I’m not sure what you mean here by risk aversion. If it’s not loss aversion, and it’s not due to decreasing marginal value, what is left?

Would you rather have \$5 than a 50% chance of getting \$4 and a 50% chance of getting \$7? That, to me, sounds like the kind of risk aversion you’re describing, but I can’t think of a reason to want that.

• what is left?

Aversion to uncertainty :-)

Would you rather have \$5 than a 50% chance of getting \$4 and a 50% chance of getting \$7? That, to me, sounds like the kind of risk aversion you’re describing, but I can’t think of a reason to want that.

Let me give you an example. You are going to the theater to watch the first showing of a movie you really want to see. At the ticket booth you discover that you forgot your wallet and can’t pay the ticket cost of \$5. A bystander offers to help you, but because he’s a professor of decision science he offers you a choice: a guaranteed \$5, or a 50% chance of \$4 and a 50% chance of \$7. What do you pick?

• That’s a great example, but it goes both ways. If the professor offered you a choice between a guaranteed \$4 and a 50% chance between \$5 and \$2, you’d be averse to certainty instead (and even pay some expected money for the privilege). Both kinds of scenarios should happen equally often, so it can’t explain why people are risk-averse overall.

• Both kinds of scenarios should happen equally often

Not in real life, they don’t.

People planning future actions prefer the certainty of having the necessary resources on hand at the proper time. Crudely speaking, that’s what planning is. If the amount of resources that will be available is uncertain, people often prefer to create that certainty by getting enough resources so that the amount at the lower bound is sufficient—and that involves paying the price of getting more (in expectation) than you need.

Because people do plan, the situation of “I’ll pick the sufficient and certain amount over a chance to lose and a chance to win” occurs much more often than “I certainly have insufficient resources, so a chance to win is better than no chance at all”.

• How about saying that the Bayesian doesn’t have to offer any bets, but must accept a side of any two-sided bet offered (even by someone who knows more).

So if you see the result of the coin and offer me either side of a 90:10 bet, I would update based on my beliefs about you and why you would offer that bet, and then I pick whichever side is profitable. If after updating my odds are exactly 90:10, then I am happy to pick either side.

• The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, then the way to do that is to offer an unbalanced bet on the expectation that you will take the wrong side. So for example, if you think you have inside information, but they know that it is actually unreliable.

The problem is that you have to always play when they want, whilst the other person only has to sometimes play.

So I’m not sure if this works.

• How about no, because I prefer my stability and I don’t want to track random bets on stuff I don’t care about?

Apply marginal utility and a 50:50 coin with the opportunity to bet a dollar, and you’ve got a 50% chance to, say, gain 9.9998 points and a 50% chance to lose 10 points. Why bother playing?

The only reasons to play are if an option is discounted (4x payout for heads and 1.5x payout on tails on a fair coin), if you don’t care about the winnings but about playing the game itself, or if there’s a threshold to reach (e.g. if I had 200 dollars then I could pay off something else which would avoid the deferred interest from coming into play, saving me 1000 dollars, so I would take a 60% chance to lose 100 dollars because those extra 100 dollars are worth not 100 but 1000 to me).

Plus there’s always epsilon—“the coin falls on its side” or other variations.
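The “gain 9.9998, lose 10” asymmetry falls out of any concave utility function. A sketch with log utility and an assumed wealth of \$10,000: a fair \$1 coin flip has zero expected dollars but strictly negative expected utility.

```python
import math

def fair_bet_utility_gain(wealth, stake):
    # Log utility: expected utility change from a 50/50 bet of `stake` dollars.
    u = math.log
    return 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake) - u(wealth)

print(fair_bet_utility_gain(10000, 1))  # tiny but negative: why bother playing?
```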

• I’m not suggesting that people actually do this, just that this is a sensible assumption to make when laying the mathematical foundation of rationality.

• Sure, but what Pimgd is pointing out is that it does not model rational behavior very well. Don’t build a mathematical framework on shaky foundations.

• Yeah. It wouldn’t be as strong in practice (neither nature nor people are in the habit of offering two-sided bets) but as a theoretical constraint it seems to work as well.

• Isn’t nature always in the habit of offering two-sided bets? Like, you can do one thing or the other.

• Not with the payoffs given by de Finetti. For example, there’s no way to play roulette so it becomes an “anti-roulette”, giving you a slight edge instead of the casino. Nature usually gives you a choice between doing X (accepting a one-sided bet as is) or not doing X. You don’t always have the option of doing “anti-X” (taking the other side of the bet, with the risks and payoffs exactly reversed).

• This is one way to make your beliefs pay rent

...

Puns aside, great post!

• Thanks!

• I never took this idea literally—it’s a thought experiment that helps you see whether your beliefs about your beliefs are consistent. If you have a preference for one side or the other of a wager, that implies that your beliefs about the resolution are not at the line you’re consciously considering.

There are LOTS of reasons not to actually make or accept a wager, mostly about the cost of tracking/collecting, and about the difference between the wager outcomes and the nominal description of the wager.

• Thanks for posting this. I’ve always been skeptical of the idea that you should offer two-sided bets, but I never broke it down in detail. Honestly, that is such an obvious counter-example in retrospect.

That said, “must either accept the bet or update their beliefs so the bet becomes unprofitable” does not work. The offering agent has an incentive to only ever offer bets that benefit them, since only one side of the bet is available for betting.

I’m not certain (without much more consideration), but it seems that Oscar_Cunningham’s solution of always taking one half of a two-sided bet sounds more plausible.

• Partial analysis:

Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is considered to be a perfectly rational agent who can utilise their available information to calculate odds optimally, or at least as well as Cameron, so this suggests David has some quite significant information.

Now, Cameron might have his own information that he suspects David does not, and Cameron knows that David has no way of knowing that he has this information. Taking this info into account, and the fact that David offered to stake 100:1 odds, he might calculate 80:1 when his information is incorporated. So this would suggest that Cameron should take the bet, as the odds are better than David thinks. Except, perhaps David suspected that Cameron had some inside info and actually thinks the true odds are 200:1—he only offered 100:1 to fool Cameron into thinking it was better than it was—meaning that the bet is actually bad for Cameron despite his inside info.

Hmm… I still can’t get my head around this problem.

• The offering agent has an incentive to only ever offer bets that benefit them

Right, and with two-sided bets there’s no incentive to offer them at all. One-sided bets do get offered sometimes, so you get a chance for free information (if the other agent is more informed than you) or free money (if you think they might be less informed).

• Is there a way to get the benefit of including betting into settling arguments, without the shady associations (and possible legal ramifications) of it being gambling?

• I’m not aware of any legal implications in the US. US gambling laws basically only apply when there is a “house” taking a cut or betting to their own advantage or similar. Bets between friends where someone wins the whole stake are permitted.

As for the shady implications… spend more time hanging out with aspiring rationalists and their ilk?