Knightian uncertainty in a Bayesian framework

Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. I pointed out that it doesn’t really matter what uncertainty you call “normal” and what uncertainty you call “Knightian” because, at the end of the day, you still have to cash out all your uncertainty into a credence so that you can actually act.

My conversation partner, who I’m anonymizing as “Sir Percy”, acknowledged that this is true if your goal is to maximize your expected gains, but denied that he should maximize expected gains. He proposes maximizing minimum expected gains given Knightian uncertainty (“using the MMEU rule”), and when using such a rule, the distinction between normal uncertainty and Knightian uncertainty does matter. I motivate the MMEU rule in my previous post, and in the next post, I’ll explore it in more detail.

In this post, I will be examining Knightian uncertainty more broadly. The MMEU rule is one way of cashing out Knightian uncertainty into decisions in a way that looks non-Bayesian. But this decision rule is only one way in which the concept of Knightian uncertainty could prove useful, and I want to take a post to explore the concept of Knightian uncertainty in its own right.


According to Wikipedia:

In economics, Knightian uncertainty is risk that is immeasurable, not possible to calculate.

There are many ways to interpret this. In Sir Percy’s coin toss, we cash out the idea of Knightian uncertainty by saying that we have “Knightian uncertainty” about whether the coin was weighted, and that we can narrow down our credence in the event H to a “Knightian interval” [.4, .6], but no further. This indicates a failure of introspection: an agent with this sort of Knightian uncertainty cannot get a precise credence for every event.

Another common phrase tossed around when people mention Knightian uncertainty is “black swan events”, events that are unpredictable in foresight but which have very high impact. One common example of a black swan event is the dawn of personal computing: in the 1940s, very few people would have predicted that personal computers would become so pervasive, yet when they did, they completely altered the course of history.

People who expect black swan events to occur often claim they have Knightian uncertainty about the future. This indicates a failure of prediction: an agent with this sort of Knightian uncertainty expects that even their best predictions will be significantly flawed.

I don’t like the term “Knightian uncertainty”, but I mostly don’t like it because it is one label that tries to cover a few very different concepts, including failures of introspection and failures of prediction. (I also dislike the term because it’s named for a person instead of for its function, but until I can convince everybody to refer to “Bayesian reasoning” by a better name (“ratience?”) I won’t complain.)

Regardless, the concepts introduced by “Knightian uncertainty” are not mysterious unknowable immeasurable horrible no-good very bad uncertainty. Indeed, these concepts merely capture certain states of knowledge in bounded Bayesian reasoners. Allow me to repeat that:

Knightian uncertainty is not a special, immeasurable uncertainty. It’s just a term that captures a few different states of limited knowledge in a bounded reasoner.

I’ll expand upon that point.

Failures of Prediction (black swans)

You can’t predict the future, say the advocates of Knightian uncertainty. Or, rather, you can, but you’ll be completely wrong. Your Bayesian reasoning allows you to predict what is likely among the outcomes in your hypothesis space, but your hypothesis space is sorely lacking. The correct hypothesis is so far outside your hypothesis space that it hasn’t even been brought to your attention, and yet it is so different from everything you can consider that your ability to predict anything about the future is completely ruined.

This is the black swan effect: sometimes, fate throws you a curveball so completely outside your hypothesis space that all your predictions are shattered by a single “black swan” event (which usually has the gall to seem obvious in hindsight). This phenomenon is real, and vindicated by history; but we don’t need a new kind of uncertainty to consider it.

Black swan effects occur primarily when a bounded agent fails to consider part of the hypothesis space. A perfect Solomonoff inductor in a computable universe is not vulnerable to black swans: it can have a bad prior, and it can be surprised to find itself in an overly complex universe, but there is no hypothesis which is likely but which the inductor fails to consider. Unbounded reasoners need not encounter this failure mode.

But we are bounded reasoners, and we usually can’t consider all available hypotheses. We can’t expect to generate even the top ten most likely hypotheses, no matter how long we have to brainstorm. It’s not that our evidence doesn’t imply the correct hypothesis, it’s that we can’t generate all the hypotheses that our evidence entails. This is a large part of why black swan events seem obvious in retrospect: once we have the hypothesis, it is obviously entailed by our evidence, and so it seems like it should have been obvious. But it wasn’t, because we aren’t good at generating the right ideas.

This phenomenon is worrisome when attempting to predict the future, but we don’t need a new kind of uncertainty to deal with the failure mode. In fact, this failure mode is nothing but a description of one of the limitations of a bounded Bayesian reasoner.

Bounded Bayesian reasoners should expect that they don’t have access to the full hypothesis space. Bounded Bayesian reasoners can expect that their first-order predictions are incorrect due to a want of the right hypothesis, and thus place high credence on “something I haven’t thought of”, and place high value on new information or other actions that expand their hypothesis space. Bounded Bayesians can even expect that their credence for an event will change wildly as new information comes in.
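
To make that concrete, here is a minimal sketch (in Python, with entirely made-up hypotheses, priors, and likelihoods) of a bounded reasoner that reserves explicit probability mass for “something I haven’t thought of” and updates it alongside the hypotheses it did manage to generate:

    # A minimal sketch of a bounded reasoner that reserves probability mass
    # for hypotheses it hasn't generated. All numbers are illustrative.

    # Explicit hypotheses the reasoner managed to generate, plus a catch-all.
    priors = {
        "hypothesis_A": 0.30,
        "hypothesis_B": 0.25,
        "something_i_havent_thought_of": 0.45,  # explicit catch-all mass
    }

    # How likely each hypothesis says the observed evidence is. For the
    # catch-all, the best we can do is a vague, middling guess.
    likelihoods = {
        "hypothesis_A": 0.10,
        "hypothesis_B": 0.60,
        "something_i_havent_thought_of": 0.30,
    }

    # Ordinary Bayesian update: posterior is proportional to prior * likelihood.
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    posterior = {h: weight / total for h, weight in unnormalized.items()}

    for hypothesis, credence in posterior.items():
        print(f"{hypothesis}: {credence:.3f}")

The catch-all gets a deliberately vague likelihood, which is the best a bounded reasoner can do for hypotheses it cannot state; nothing exotic is needed beyond ordinary probability theory.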

Let’s make things more concrete. Consider the event “there is a cure for Alzheimer’s disease 70 years from now”.

As an aspiring Bayesian, I can assign a credence to this event. But as a denizen of a world of chaos, I can also expect black swan events. Dealing with black swans doesn’t require any new type of probability, though: I can account for them within the Bayesian framework.

Let’s pretend, to make things simple, that I assign 50% credence to this event. Sir Percy might call this repugnant, claiming Knightian uncertainty. How can I assign a credence when I expect black swans? How can I even claim to know the shape of the distribution?

But, of course, I’m accounting for black swans (insofar as I can) with my 50% credence. Let’s consider a few black swans that could affect this event. The average person considering an Alzheimer’s cure in seventy years probably imagines the status quo continuing in the interim, and then asks whether medical science (extrapolated out seventy years at the same rate of growth) will lead to an Alzheimer’s cure. The average person probably does not consider the following potential black swans:

  1. Within 70 years, human civilization will have collapsed.

  2. Within 70 years, we will have achieved a positive singularity.

  3. Within 70 years, all modern diseases will be eliminated by whole-brain emulation.

Of course, these aren’t black swan events to me, because these are in my hypothesis space. But they’d be black swans to the average person, and I was capable of taking them into account when assigning my credence. So in a way, yes, I can account for black swans.
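
As a concrete (and entirely invented) illustration of “taking them into account”: one way to fold these scenarios into a single credence is the law of total probability, mixing conditional credences by scenario. The numbers below are placeholders, not my actual estimates:

    # An invented illustration of folding would-be black swans into one
    # credence via the law of total probability. Scenario probabilities and
    # conditional credences below are placeholders, not real estimates.
    scenarios = {
        # scenario:                      (P(scenario), P(cure | scenario))
        "status quo medicine continues":  (0.50, 0.40),
        "civilizational collapse":        (0.15, 0.00),
        "positive singularity":           (0.15, 1.00),
        "whole-brain emulation era":      (0.10, 1.00),
        "something I haven't thought of": (0.10, 0.50),
    }

    credence = sum(p_scenario * p_cure for p_scenario, p_cure in scenarios.values())
    print(f"overall credence in an Alzheimer's cure within 70 years: {credence:.2f}")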

I’m still susceptible to black swans that I don’t see coming, of course. My black swans are hypotheses that are just as weird to me as whole-brain emulation is to my grandmother, and there’s a decent chance that I’ll be blindsided by one of these strange events sometime in the next seventy years.

But I can still account for this. I don’t know where to expect black swans, but I can ask questions like “how will the average black swan affect Alzheimer’s cures?”. If I expect that most black swans will make Alzheimer’s cures easier to achieve, then I adjust my credence upwards. If I expect the opposite, then I adjust my credence downwards.

And if I expect that I have absolutely no idea what the black swans will look like, but also have no reason to believe black swans will make this event any more or less likely, then even though I won’t adjust my credence further, I can still increase the variance of my distribution over my future credence for this event.

In other words, even if my current credence is 50%, I can still expect that in 35 years (after encountering a black swan or two) my credence will be very different. This has the effect of making me act uncertain about my current credence, allowing me to say “my credence for this is 50%” without much confidence. So long as I can’t predict the direction of the update, this is consistent Bayesian reasoning.
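
Here is a toy simulation of that claim, with an invented distribution over where my credence might land in 35 years. The spread is wide, but as long as I can’t predict the direction of the update, the expected future credence equals today’s 50%, which is all consistency requires:

    import random

    # Toy model: my credence today is 0.5, but after 35 years of black swans
    # I expect my credence to land somewhere in a wide (invented) distribution.
    # Consistency only requires that the expected future credence equal
    # today's credence, i.e. that I can't predict the direction of the update.
    random.seed(0)

    def future_credence():
        # Half the time a black swan pushes my credence low, half the time high.
        if random.random() < 0.5:
            return random.uniform(0.05, 0.45)
        return random.uniform(0.55, 0.95)

    samples = [future_credence() for _ in range(100_000)]
    print("current credence:         0.500")
    print(f"expected future credence: {sum(samples) / len(samples):.3f}")
    print(f"sample spread:            {min(samples):.3f} .. {max(samples):.3f}")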

As a bounded Bayesian, I have all the behaviors recommended by those advocating Knightian uncertainty. I put high value on increasing my hypothesis space, and I often expect that a hypothesis will come out of left field and throw off my predictions. I’m happy to increase my error bars, and I often expect my credences to vary wildly over time. But I do all of this within a Bayesian framework, with no need for exotic “immeasurable” uncertainty.

Failures of Introspection (imprecise credences)

Black swan events are not a good reason to fail to produce a credence. Black swan events are a good reason to lower your confidence and increase your error bars, and they are a good reason to expect your credence to vary, but they don’t prohibit you from using the (admittedly incomplete) information you have right now to give the best guess you can right now, even if you expect it to be wrong.

There are other scenarios, though, where advocates of Knightian uncertainty claim that they simply cannot generate a sharp credence. This happens during failures of introspection. Humans are not perfect Bayesians, and we can’t simply ask our intuitions to take all of the evidence, weigh it appropriately, and output a precise number. First of all, our intuitions don’t weigh things very well. Our credence calculations depend upon the framing of the question and on our mood. Our ability to use the evidence depends upon our memory and is limited by our vulnerability to various biases.

Secondly, even if these confounding factors didn’t exist, we’d still lack the ability to query our intuitions and get a precise number out. The best we can get is a vague, fuzzy feeling. Even if our brains were doing good Bayesian reasoning, we would lack the introspection to translate our feelings into sharp numbers. All we can get is an interval, or at best a fuzzy distribution over possible credences that we should hold.

This phenomenon occurs whenever an aspiring Bayesian can’t generate enough significant digits, for one reason or another. Perhaps the agent didn’t start with a precise prior. Perhaps the agent can’t do perfect Bayesian updates. Perhaps it simply lacks perfect introspection. As someone without a precise prior who can’t do perfect Bayesian updates and lacks perfect introspection, I sympathize.

Have you heard the joke about the Tyrannosaurus Rex?

A tourist goes to the museum, and sees a Tyrannosaurus Rex. “Wow”, she says. “This looks old.” Turning to the tour guide, she asks “How old is this skeleton?”

“66 million years, three weeks, and two days old!” the tour guide says triumphantly.

“Dang”, the tourist says, “how do you know its age with such precision?”

“Well, three weeks and two days ago, I asked the paleontologist, and she said it was 66 million years old.”

Advocates of Knightian uncertainty may well feel like aspiring Bayesians are acting like the tour guide. Indeed, this is a possible failure mode among aspiring Bayesians. In general, bounded Bayesian reasoners should not expect that they are able to generate significant digits indefinitely.

If you asked me to guess the age of a Tyrannosaur skeleton, I would say that it’s likely between 66 and 67 million years old. But if you asked me to guess the millennium in which the Tyrannosaur lived, I’d be fairly uncomfortable, and if you asked me to guess the year it died, I’d look at you funny.

Given a perfect Bayesian, you could query their sharp credences until you found an event “This Tyrannosaur was born at or before minute X” for some X to which the Bayesian assigns 50% credence. But if you tried that trick with me, I’d be somewhat miffed. And if you made me take a bet about the exact minute in which the T-Rex was born, I’d be quite annoyed.

While I can generate credences for when the dinosaur was born, asking for a prediction down to the minute of its birth is asking for way more significant digits than I have access to.

If you want, you can say I have “Knightian uncertainty” about when the Tyrannosaur was born. I surely don’t want to make bets in scenarios where the bet depends upon more significant digits than I am capable of producing.

And yet, there are scenarios where the world will demand more precision than I can produce. So the question is, what then?

The classical Bayesian answer is that you calculate as many significant digits as you can until you have a distribution over which credences you should have, and then you pick the mean. In actual fact, as a bounded agent, you won’t be able to get very many significant digits at all, and you won’t be able to get a clear credence distribution, so “picking the mean” will be another fuzzy and vague task.
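
A minimal sketch of that classical answer, with made-up candidate credences and weights standing in for my fuzzy introspective report:

    # Sketch of "pick the mean": I can't introspect a sharp credence, but I
    # can put rough weights on a few candidate credences and act on their
    # mean. The candidates and weights below are invented for illustration.
    candidate_credences = [0.40, 0.45, 0.50, 0.55, 0.60]
    weights             = [0.10, 0.20, 0.40, 0.20, 0.10]  # how plausible each feels

    mean_credence = sum(c * w for c, w in zip(candidate_credences, weights)) / sum(weights)
    print(f"credence to act on: {mean_credence:.3f}")  # 0.500 here, by symmetry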

But you still have to do it, because even though the situation is annoying, cashing out your credences is the best option you’ve got.

Sure, you say that you’re uncomfortable guessing the exact minute for which you assign credence 50% to the event “the T-Rex was born at or before this minute”. But consider the following game. Omega comes down to you and says:

Listen. You must pick a precise minute in Earth’s history. Then, I’ll create a clone of you that has exactly the same knowledge, but exactly opposite preferences. That clone will choose either “before” or “after” your chosen minute. If the T-Rex was born in the timespan chosen by your evil twin, then I’ll destroy the world. Otherwise, I’ll help you solve global coordination.

In this scenario, you maximize the chances of the world’s survival by picking the minute for which you assign 50% credence to the event “the T-Rex was born at or before this minute”.
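
For concreteness, here is a sketch of how you would actually pick that minute: accumulate your credence in “born at or before this minute”, starting from the oldest candidate times, until it first reaches 50%. The coarse, uniform distribution over 66-to-67-million-year-old bins is invented for the example:

    # Sketch of picking the 50% minute: accumulate credence in "the T-Rex was
    # born at or before this minute", starting from the oldest candidate times
    # (earlier minute = longer ago), and stop once it reaches 0.5. The uniform
    # credence over 0.1-million-year bins between 66 and 67 Mya is invented.
    bins = [(67.0 - 0.1 * i, 0.10) for i in range(10)]  # (millions of years ago, credence)

    cumulative = 0.0
    for years_ago, credence in bins:
        cumulative += credence
        if cumulative >= 0.5:
            print(f"pick a minute roughly {years_ago:.1f} million years ago")
            break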

Now, I agree that this scenario is ridiculous. And that it sucks. And I agree that picking a precise minute feels uncomfortable. And I agree that this is demanding way more precision than you are able to generate. But if you find yourself in the game, you’d best pick the minute as well as you can. When the gun is pressed against your temple, you cash out your credences.

Yes, you can say you have “Knightian uncertainty” about the precise minute in which the T-Rex was born. But this doesn’t mean that the uncertainty is “immeasurable” or “not possible to calculate”. It just means that nature is demanding more precision than you feel comfortable generating.

Bounded Bayesians have Knightian powers

I take issue with the term “Knightian uncertainty” for a number of reasons. It is one label used for many things, and I find such labels unhelpful. It is touted as “immeasurable” and “impossible to calculate” when actually it only describes certain limitations of bounded agents. The scary description doesn’t seem to help.

That said, many of the objections made by advocates of Knightian uncertainty against ideal Bayesian reasoning are sound objections: the future will often defy expectation. In many complicated scenarios, you should expect that the correct hypothesis is inaccessible to you. Humans lack introspective access to their credences, and even if they didn’t, such credences often lack precision.

These are shortcomings not found in an idealized Bayesian, but they are prevalent in any bounded reasoner. But none of these shortcomings suggest that standard probability theory is inadequate for reasoning about our environment due to some exotic “immeasurable” uncertainty.

I understand some of the aversion to a Bayesian framework. Bayesians do tend to fetishize bets. When offered the two bets in Sir Percy’s coin toss, there is a certain appeal to refusing both bets. Bets often come with a stigma, and this (when paired with loss aversion) can make both bets seem unappealing, despite the fact that we are told a Bayesian reasoner always prefers one bet or the other.

But the thing is, a bounded Bayesian reasoner may also prefer not to take the bets. If I expect my credence for H to vary wildly, then I may delay my decision as long as possible. Furthermore, if the bets are for money (rather than utility), then I’m all for risk aversion.

But from another perspective, every decision in life involves a “bet” of sorts on which action to take. The best available action may involve keeping your options open, delaying decisions, and gathering more information. But even those choices are still “part of the bet”. At the end of the day, you still have to choose an action.

Humans can’t generate precise credences. Even our fuzzy intuitions vary with framing and context. Our reasoning involves heuristics with known failure modes. We are subject to innumerable biases, and we can’t trust introspection. But when it comes time to act, we still have to cash out our uncertainty.

If you expect you’re biased one way or the other, then adjust. If you still expect you’re biased but you don’t know which way, then you’ve done the best you can. The universe doesn’t give you the option to refuse its bets, and any complaints about insufficient precision will fall upon deaf ears.


Failures of prediction and introspection are not the only states which the “Knightian uncertainty” label covers (and indeed, the label seems somewhat fuzzy). I don’t mean to imply that the above post completely dispels the term. Rather, I make the claim that all “immeasurable” uncertainties can be dealt with in a bounded Bayesian framework.

You can say that Knightian uncertainty is uncertainty about which you know nothing (not even the shape of the distribution), and you can feel helpless in the face of unknown unknowns, but a bounded Bayesian can handle these feelings: adjust insofar as you can, and then act.

As such, I am initially skeptical of the suggestion that any specific uncertainties (corresponding to limitations in bounded agents) should be given special treatment. If you want to say that “Knightian uncertainty” is somehow different, then I only note that when it comes time to act, you still have to cash it out into Good Old Fashioned Uncertainty, unless you refuse to maximize expected utility.

And this brings us back to the MMEU rule, a decision rule that actually does treat Knightian uncertainty differently. Does this rule give us powers that the Bayesians know not? This question will be explored further in the next post.