Knightian Uncertainty and Ambiguity Aversion: Motivation

Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. I admitted that I’ve never found the concept compelling. We went back and forth for a little while. His points were crisp and well-supported; my objections were vague. We didn’t have enough time to reach consensus, but it became clear that I needed to research his viewpoint and flesh out my objections before being justified in my rejection.

So I did. This is the first in a short series of posts during which I explore what it means for an agent to reason using Knightian uncertainty.

In this first post, I’ll present a number of arguments claiming that Bayesian reasoning fails to capture certain desirable behavior. I’ll discuss a proposed solution, maximization of minimum expected utility, which is advocated by my friend and others.

In the second post, I’ll discuss some more general arguments against Bayesian reasoning as an idealization of human reasoning. What role should “unknown unknowns” play in a bounded Bayesian reasoner? Is “Knightian uncertainty” a useful concept that is not captured by the Bayesian framework?

In the third post, I’ll discuss the proposed solution: can rational agents display ambiguity aversion? What does it mean to have a rational agent that does not maximize expected utility, maximizing “minimum expected utility” instead?

In the final post, I’ll apply these insights to humans and articulate my objections to ambiguity aversion in general. I’ll conclude that while it is possible for agents to be ambiguity-averse, ambiguity aversion in humans is a bias. The maximization of minimum expected utility may be a useful concept for explaining how humans actually act, but probably isn’t how you should act.


The following is a stylized conversation that I had at the Stanford workshop on Logic, Rationality, and Intelligent Interaction. I’ll anonymize my friend as ‘Sir Percy’, which seems a fitting pseudonym for someone advocating Knightian uncertainty.

“I think that’s repugnant”, Sir Percy said. “I can’t assign a probability to the simulation hypothesis, because I have Knightian uncertainty about it.”

“I’ve never found Knightian uncertainty compelling”, I replied with a shrug. “I don’t see how it helps to claim uncertainty about your credence. I know what it means to feel very uncertain (e.g. place a low probability on many different scenarios), and I even know what it means to expect that I’m wildly incorrect (though I never know the direction of my error). But eventually I have to act, and this involves cashing out my uncertainty into an actual credence and weighing the odds. Even if I’m uncomfortable producing a sufficiently precise credence, even if I feel like I don’t have enough information, even though I’m probably misusing the information that I do have, I have to pick the most accurate credence I can anyway when it comes time to act.”

“Sure”, Sir Percy answered. “If you’re maximizing expected utility, then you should strive to be a perfect Bayesian, and you should always act like you assign a single credence to any given event. But I’m not maximizing expected utility.”

Woah. I blinked. I hadn’t even considered that someone could object to the concept of expected utility maximization. Expected utility maximization seemed fundamental: I understand risk aversion, and I understand caution, but at the end of the day, if I honestly expect more utility in the left branch than the right branch, then I’m taking the left branch. No further questions.

“Uh”, I said, deploying all wits to articulate my grave confusion, “wat?”

“I maximize the minimum expected utility, given my Knightian uncertainty.”

My brain struggled to catch up. Is it even possible for a rational agent to refuse to maximize expected utility? Under the assumption that people are risk-neutral with respect to utils, what does it mean for an agent to rationally refuse an outcome where they expect to get more utils? Doesn’t that merely indicate that they picked the wrong thing to call “utility”?

“Look”, Sir Percy continued. “Consider the following ‘coin toss game’. There was a coin flip, and the coin came up either heads (H) or tails (T). You don’t know whether or not the coin was weighted, and if it was, you don’t know which way it was weighted. In fact, all you know is that your credence of event H is somewhere in the interval [0.4, 0.6].”

“That sounds like a failure of introspection”, I replied. “I agree that you might not be able to generate credences with arbitrary precision, but if you have no reason to believe that your interval is skewed towards one end or the other, then you should just act like your credence of H is in the middle of your interval (or the mean of your distribution), e.g. 50%.”

“Not so fast. Consider the following two bets:”

  1. Pay 50¢ to be paid $1.10 if the coin came up heads

  2. Pay 50¢ to be paid $1.10 if the coin came up tails

“If you’re a Bayesian, then for any assignment of credence to H, you’ll want to take at least one of these bets. For example, if your credence of H is 50% then each bet has an expected payoff of 5¢. And whichever credence you pick out of your interval, at least one of these bets will have positive expected value.

“On the other hand, I’m maximizing the minimum expected utility. Given bet (1), I notice that perhaps the probability of H is only 40%, in which case the expected utility of bet (1) is −6¢, so I reject it. Given bet (2), I notice that perhaps the probability of H is 60%, in which case the expected utility of bet (2) is −6¢, so I reject that too.”

“Uh”, I replied, “you do understand that I’ll be richer than you, right? Why ain’t you rich?”

“Don’t be so sure”, he answered. “I reject each bet individually, but I gladly accept the pair together, and walk away with 10¢. You’re only richer if bets can be retracted, and that’s a somewhat unreasonable assumption. Besides, I do better than you in the worst case.”
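(To make the arithmetic in that exchange concrete, here is a minimal sketch in Python; the code is my own illustration, not anything from the conversation. The 50¢ stake, the $1.10 payout, and the [0.4, 0.6] interval are the numbers from the dialogue, and since expected value is linear in the credence, checking the interval’s endpoints is enough.)

```python
# Coin toss game: pay 50¢ for a ticket that pays $1.10 on a win,
# so a bet nets +60¢ if it wins and -50¢ if it loses.
def expected_value(p_heads, bet_on_heads):
    p_win = p_heads if bet_on_heads else 1.0 - p_heads
    return p_win * 60 + (1.0 - p_win) * (-50)  # expected net payoff in cents

credences = [0.4, 0.5, 0.6]  # endpoints (and midpoint) of the Knightian interval

# A Bayesian with credence 0.5 values each bet at +5¢:
print(expected_value(0.5, True), expected_value(0.5, False))        # 5.0 5.0

# MMEU: each bet alone looks like -6¢ in its worst case, so both are rejected ...
print(min(expected_value(p, True) for p in credences))              # -6.0 (at p = 0.4)
print(min(expected_value(p, False) for p in credences))             # -6.0 (at p = 0.6)

# ... but the pair together nets +10¢ whatever the credence, so it is accepted.
print(min(expected_value(p, True) + expected_value(p, False) for p in credences))  # 10.0
```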


Something about this felt fishy to me, and I objected halfheartedly. It’s all well and good to say you don’t maximize utility for one reason or another, but when somebody tells me that they actually maximize “minimum expected utility”, my first inclination is to tell them that they’ve misplaced their “utility” label.

Furthermore, every choice in life can be viewed as a bet about which available action will lead to the best outcome, and on this view, it is quite reasonable to expect that many bets will be “retracted” (e.g., the opportunity will pass).

Still, these complaints are rather weak, and my friend had presented a consistent alternative viewpoint that came from completely outside of my hypothesis space (and which he backed up with a number of references). The least I could do was grant it my honest consideration.

And as it turns out, there are several consistent arguments for maximizing minimum expected utility.

The Ellsberg Paradox

Consider the Ellsberg “Paradox”. There is an urn containing 90 balls. 30 of the balls are red, and the other 60 are either black or yellow. You don’t know how many of the 60 balls are black: it may be zero, it may be 60, it may be anywhere in between.

I am about to draw balls out of the urn and pay you according to their color. You get to choose how I pay out, but you have to pick between two payoff structures:

  • 1a) I pay you $100 if I draw a red ball.

  • 1b) I pay you $100 if I draw a black ball.

How do you choose? (I’ll give you a moment to pick.)

Afterwards, we play again with a second urn (which also has 30 red balls and 60 either-black-or-yellow balls), but this time, you have to choose between the following two payoff structures:

  • 2a) I pay you $100 if I draw a red or yellow ball.

  • 2b) I pay you $100 if I draw a black or yellow ball.

How do you choose? (I’ll give you a moment to pick.)

A perfect Bayesian (with no reason to believe that the 60 balls are more likely to be black than yellow) is indifferent within each pair. However, most people prefer 1a to 1b, but also prefer 2b to 2a.

These preferences seem strange through a Bayesian lens, given that the b bets are just the a bets altered to also pay out on yellow balls. Why do people’s preferences flip when you add a payout on yellow balls to the mix?

One possible answer is that people have ambiguity aversion. People prefer 1a to 1b because 1a guarantees 30:60 odds (while selecting 1b when faced with an urn containing only yellow balls means that you have no chance of being paid at all). People prefer 2b to 2a because 2b guarantees 60:30 odds, while 2a may be as bad as 30:60 odds when facing the urn with no yellow balls.

If you reason in this way (and I, for one, feel the allure) then you are ambiguity averse.

And if you’re ambiguity averse, then you have preferences where a perfect Bayesian reasoner does not, and it looks a little bit like you’re maximizing minimum expected utility.
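For concreteness, here is a minimal sketch of the arithmetic. The 30/60 split and the $100 payout come from the setup above; the symmetric prior for the Bayesian (mean of 30 black balls) and the code itself are my own illustrative choices. Every payout is linear in the number of black balls, so the Bayesian expectations can be evaluated at the prior mean, and the worst case sits at an endpoint.

```python
# Ellsberg urn: 30 red balls plus 60 balls that are black or yellow in an
# unknown proportion. Let b be the unknown number of black balls, 0 <= b <= 60.
def expected_payout(bet, b):
    p_red, p_black, p_yellow = 30 / 90, b / 90, (60 - b) / 90
    return 100 * {"1a": p_red,              # pays on red
                  "1b": p_black,            # pays on black
                  "2a": p_red + p_yellow,   # pays on red or yellow
                  "2b": p_black + p_yellow  # pays on black or yellow
                  }[bet]

bets = ["1a", "1b", "2a", "2b"]

# Bayesian with a symmetric prior (mean b = 30): indifferent within each pair.
print({bet: round(expected_payout(bet, 30), 2) for bet in bets})
# {'1a': 33.33, '1b': 33.33, '2a': 66.67, '2b': 66.67}

# MMEU (worst case over b): 1a beats 1b and 2b beats 2a, matching the common choice.
print({bet: round(min(expected_payout(bet, b) for b in range(61)), 2) for bet in bets})
# {'1a': 33.33, '1b': 0.0, '2a': 33.33, '2b': 66.67}
```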

Three games of tennis

Gärdenfors and Sahlin discuss this problem in their paper Unreliable Probabilities, Risk Taking, and Decision Making:

It seems to us […] that it is possible to find decision situations which are identical in all the respects relevant to the strict Bayesian, but which nevertheless motivate different decisions.

These are the people who coined the decision rule of maximizing minimum expected utility (“the MMEU rule”), and it’s worth understanding the example that motivates their argument.

Consider three tennis games, each about to be played: the balanced game, the mysterious game, and the unbalanced game.

  • The balanced game will be played between two players, Loren and Lauren, who are very evenly matched. You happen to know that both players are well-rested, that they are in good health, and that they are each at the top of their mental game. Neither you nor anyone else has information that makes one of them seem more likely to win than the other, and your credence on the event “Loren wins” is 50%.

  • The mysterious game will be played between John and Michael, about whom you know nothing. On priors, it’s likely to be a normal tennis game where the players are about as evenly matched as average. One player might be a bit better than the other, but you don’t know which. Your credence on the event “John wins” is 50%.

  • The unbalanced game will be played between Anabel and Zara. You don’t know who is better at tennis, but you have heard that one of them is far better than the other, and you know that everybody considers the game to be a sure thing, with the outcome practically decided already. However, you’re not sure whether Anabel or Zara is the superior player, so your credence on the event “Anabel wins” is 50%.

A perfect Bayesian would be indifferent between a bet with 1:1 odds on Loren, a bet with 1:1 odds on John, and a bet with 1:1 odds on Anabel. Yet people are likely to prefer 1:1 bets on the balanced game. This is not necessarily a bias: people may rationally prefer the bet on the balanced game. This seems to imply that Bayesian expected utility maximization is not an idealization of the human reasoning process.

As these tennis games illustrate, humans treat different types of uncertainty differently. This motivates the distinction between “normal” uncertainty and “Knightian” uncertainty: we treat them differently, specifically by being averse to the latter.

The tennis games show humans displaying preferences where a Bayesian would be indifferent. On the view of Gärdenfors and Sahlin, this means that Bayesian expected utility maximization can’t capture actual human preferences; humans actually want to have preferences where Bayesians cannot. How, then, should we act? If Bayesian expected utility maximization does not capture an idealization of our intended behavior, what decision rule should we be approximating?

Gärdenfors and Sahlin propose acting such that even in the worst case you still do pretty well. Specifically, they suggest maximizing the minimum expected utility given your Knightian uncertainty: this is the MMEU rule mentioned above, and their paper Unreliable Probabilities, Risk Taking, and Decision Making motivates it further.
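To see how the MMEU rule separates the three games, consider the following sketch. The credence intervals are numbers I invented for illustration (Gärdenfors and Sahlin don’t give these exact figures): a point interval for the balanced game, a modest interval for the mysterious game, and a wide interval for the unbalanced game. A Bayesian using the midpoint credence values a 1:1 bet identically in all three games; the MMEU rule does not.

```python
# A 1:1 bet: stake $10 to win $10 on "player 1 wins".
def expected_value(p_win):
    return p_win * 10 + (1 - p_win) * (-10)

# Illustrative Knightian credence intervals on "player 1 wins" (invented numbers).
games = {
    "balanced":   (0.50, 0.50),  # lots of evidence; credence pinned down
    "mysterious": (0.40, 0.60),  # little evidence either way
    "unbalanced": (0.05, 0.95),  # someone is far better; you don't know who
}

for name, (lo, hi) in games.items():
    bayes_ev = expected_value((lo + hi) / 2)                # midpoint credence: $0 in every game
    worst_ev = min(expected_value(lo), expected_value(hi))  # EV is linear, so endpoints suffice
    print(f"{name:10s}  Bayesian EV: ${bayes_ev:5.2f}   minimum EV: ${worst_ev:6.2f}")

# balanced    Bayesian EV: $ 0.00   minimum EV: $  0.00
# mysterious  Bayesian EV: $ 0.00   minimum EV: $ -2.00
# unbalanced  Bayesian EV: $ 0.00   minimum EV: $ -9.00
```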


We have now seen three scenarios (the Ellsberg urn, the tennis games, and Sir Percy’s coin toss) where the Bayesian decision rule of ‘maximize expected utility’ seems insufficient.

In the Ellsberg paradox, most people display an aversion to ambiguity, even though a Bayesian agent (with a neutral prior) is indifferent.

In the three tennis games, people act as if they’re trying to maximize their utility in the least convenient world, and thus they allow different types of uncertainty (whether Anabel is the stronger player vs whether Loren will win the balanced game) to affect their actions in different ways.

Most alarmingly, in the coin toss game, we see Sir Percy rejecting both bets (1) and (2) but accepting their conjunction. Sir Percy knows that his expected utility is lower, but seems to have decided that this is acceptable given his preferences about ambiguity (using reasoning that is not obviously flawed). Sir Percy acts like he has a credence interval, and there is simply no credence that a Bayesian agent can assign to H such that the agent acts as Sir Percy prefers.

All these arguments suggest that there are rational preferences that the strict Bayesian framework cannot capture, and so perhaps expected utility maximization is not always rational.

Reasons for skepticism

Let’s not throw expected utility maximization out the window at the first sign of trouble. While it surely seems like humans have a gut-level aversion to ambiguity, there are a number of factors that explain the phenomenon without sacrificing expected utility maximization.

There are some arguments in favor of using the MMEU rule, but the real arguments are easily obscured by a number of fake arguments. For example, some people might prefer a bet on the balanced tennis game over the unbalanced tennis game for reasons completely unrelated to ambiguity aversion: when considering the arguments in favor of ambiguity aversion, it is important to separate out the preferences that Bayesian reasoning can capture from the preferences it cannot.

Below are four cases where it may look like humans are acting ambiguity averse, but where Bayesian expected utility maximizers can (and do) display the same preferences.

Caution. If you enjoy bets for their own sake, and someone comes up to you offering 1:1 odds on Lauren in the balanced tennis game, then you are encouraged to take the bet.

If, however, a cheerful bookie comes up to you offering 1:1 odds on Zara in the unbalanced game, then the first thing you should do is laugh at them, and the second thing you should do is raise your credence that Zara will lose.

Why? Because in the unbalanced game, one of the players is much better than the other, and the bookie might know which. If the bookie, hearing that you have no idea whether Anabel is better or worse than Zara, offers you a bet with 1:1 odds in favor of Zara, then this is pretty good evidence that Zara is the worse player.

In fact, if you’re operating under the assumption that anyone offering you a bet thinks that they are going to make money, then even as a Bayesian expected utility maximizer you should be leery of people offering bets about the mysterious game or the unbalanced game. Actual bets are usually offered to people by other people, and people tend to only offer bets that they expect to win. It’s perfectly natural to assume that the bookie is adversarial, and given this assumption, a strict Bayesian will also refuse bets on the unbalanced game.

Similarly, in the Ellsberg game, if a Bayesian agent believes that the person offering the bet is adversarial and gets to choose how many black balls there are, then the Bayesian will pick bets 1a and 2b.

Humans are naturally inclined to be suspicious of bets. Bayesian reasoners with those same suspicions are averse to many bets in a way that looks a lot like ambiguity aversion. It’s easy to look at a bet on the unbalanced game, feel a lot of suspicion, and then, upon hearing that a Bayesian has no preferences in the matter, decide that you don’t want to be a Bayesian. But a Bayesian with your suspicions will also avoid bets on the unbalanced game, and it’s important to separate suspicion from ambiguity aversion.
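Here is a toy Bayesian update showing that suspicion alone can produce this behavior; every number in it is invented for illustration. Start at 50% that Zara is the better player, and assume that a bookie who knows the matchup mostly offers bets he expects to win.

```python
# Toy model: a knowledgeable, profit-seeking bookie offers you 1:1 odds on Zara.
p_zara_better = 0.5             # prior: no idea which player is stronger

# Assumed likelihoods (invented): the bookie rarely offers a bet he expects to lose.
p_offer_if_zara_better = 0.1    # P(bookie offers "bet on Zara" | Zara is better)
p_offer_if_zara_worse = 0.9     # P(bookie offers "bet on Zara" | Zara is worse)

p_offer = (p_offer_if_zara_better * p_zara_better
           + p_offer_if_zara_worse * (1 - p_zara_better))
posterior_zara_better = p_offer_if_zara_better * p_zara_better / p_offer

print(round(posterior_zara_better, 2))  # 0.1: the offer itself is strong evidence against
                                        # Zara, so a strict Bayesian also declines the bet.
```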

Risk aversion. Most people would prefer a certainty of $1 billion to a 50% chance of $10 billion. This is not usually due to ambiguity aversion, though: dollars are not utils, and preferences are not generally linear in dollars. You can prefer $1 billion with certainty to a chance of $10 billion on grounds of risk aversion, without ever bringing ambiguity aversion into the picture.

The Ellsberg urn and the tennis games are examples that target ambiguity aversion explicitly, but be careful not to take these examples to heart and run around claiming that you prefer a certainty of $1 billion to a chance of $10 billion because you’re ambiguity averse. Humans are naturally very risk-averse, so we should expect that most cases of apparent ambiguity aversion are actually risk aversion. Remember that a failure to maximize expected dollars does not imply a failure to maximize expected utility.

Loss aversion. When you consider a bet on the balanced game, you might visualize a tight and thrilling match where you won’t know whether you won the bet until the bitter end. When you consider a bet on the unbalanced game, you might visualize a match where you immediately figure out whether you won or lost, and then have to sit through a whole boring tennis game, either waiting to collect your money (if you chose correctly) or with that slow sinking feeling of loss as you realize that you don’t have a chance (if you chose incorrectly).

Because humans are strongly loss averse, sitting through a game where you know you’ve lost hurts more than sitting through a game where you know you’ve won feels good. In other words, ambiguity may be treated as disutility. The expected utility of a bet on the unbalanced game may be less than that of a similar bet on the balanced game: the former bet has more expected negative feelings associated with it, and thus less expected utility.

This is a form of ambiguity aversion, but this portion of ambiguity aversion is a known bias that should be dealt with, not a sufficient reason to abandon expected utility maximization.

Possibility compression. The three tennis games actually are different, and the ‘strict Bayesian’ does treat them differently. Three Bayesians sitting in the stands before each of the three tennis games all expect different experiences. The Bayesian at the balanced game expects to see a close match. The Bayesian at the mysterious game expects the game to be fairly average. The Bayesian at the unbalanced game expects to see a wash.

When we think about these games, it doesn’t feel like they all yield the same probability distributions over futures, and that’s because they don’t, even for a Bayesian.

When you’re forced to make a bet only about whether player 1 will win, you’ve got to project your distribution over all futures (which includes information about how exciting the game will be and so on) onto a much smaller binary space (player 1 either wins or loses). This feels lossy because it is lossy. It should come as no surprise that many highly different distributions over futures project onto the same distribution over the much smaller binary space of whether player 1 wins or loses.
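Here is a quick sketch of that lossiness; the outcome space and the probabilities are invented for the example. Two quite different distributions over full game outcomes project onto the same 50/50 distribution over the bet-relevant space.

```python
from collections import defaultdict

# Invented distributions over (winner, margin) for two different games.
balanced_game = {
    ("player1", "close"): 0.45, ("player1", "blowout"): 0.05,
    ("player2", "close"): 0.45, ("player2", "blowout"): 0.05,
}
unbalanced_game = {
    ("player1", "close"): 0.05, ("player1", "blowout"): 0.45,
    ("player2", "close"): 0.05, ("player2", "blowout"): 0.45,
}

def project_onto_winner(distribution):
    """Collapse a distribution over futures onto the binary 'who wins' space."""
    marginal = defaultdict(float)
    for (winner, _margin), p in distribution.items():
        marginal[winner] += p
    return dict(marginal)

print(project_onto_winner(balanced_game))    # {'player1': 0.5, 'player2': 0.5}
print(project_onto_winner(unbalanced_game))  # {'player1': 0.5, 'player2': 0.5}
```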

There is some temptation to accept the MMEU rule because, well, the games feel different, and Bayesians treat the bets identically, so maybe we should switch to a decision rule that treats the bets differently. Be wary of this temptation: Bayesians do treat the games differently. You don’t need “Knightian uncertainty” to capture this.


I am not trying to argue that we don’t have ambiguity aversion. Humans do in fact seem averse to ambiguity. However, much of the apparent aversion is probably a combination of suspicion, risk aversion, and loss aversion. The first is available to Bayesian reasoners, and the other two are known biases. Insofar as your ambiguity aversion is caused by a bias, you should be trying to reduce it, not endorse it.

Ambiguity Aversion

But for all those disclaimers, humans still exhibit ambiguity aversion.

Now, you could say that whatever aversion remains (after controlling for risk aversion, loss aversion, and suspicion) is irrational. We know that humans suffer from confirmation bias, hindsight bias, and many other biases, but we don’t try to throw expected utility maximization out the window to account for those strange preferences.

Perhaps ambiguity aversion is merely a good heuristic. In a world where people only offer you bets when the odds are stacked against you but you don’t know it yet, ambiguity aversion is a fine heuristic. Or perhaps ambiguity aversion is a useful countermeasure against the planning fallacy: if we tend to be overconfident in our predictions, then attempting to maximize utility in the least convenient world may counterbalance our overconfidence. Maybe. (Be leery of evolutionary just-so stories.)

But this doesn’t have to be the case. Even if my own ambiguity aversion is a bias, isn’t it still possible that there could exist an ambiguity-averse rational agent?

An ideal rational agent had better not have confirmation bias or hindsight bias, but it seems like you should be able to build a rational agent that disprefers ambiguity. Ambiguity aversion is about preferences, not epistemics. Even if human ambiguity aversion is a bias, shouldn’t it be possible to design a rational agent with preferences about ambiguity? This seems like a preference that a rational agent should be able to have, at least in principle.

But if a rational agent disprefers ambiguity, then it rejects bets (1) and (2) in the coin toss game but accepts their agglomeration. And if this is so, then there is no credence it can assign to H that makes its actions consistent, so how could it possibly be a Bayesian?

What gives? Is the Bayesian framework unable to express agents with preferences about ambiguity?

And if so, do we need a different framework that can capture a broader class of “rational” agents, including maximizers of minimum expected utility?