Map : Territory :: Uncertainty : Randomness – but that distinction doesn’t matter; value of information does.

In risk modeling, there is a well-known distinction between aleatory and epistemic uncertainty, sometimes thought of as irreducible versus reducible uncertainty. Epistemic uncertainty exists in our map; as Eliezer put it, “The Bayesian says, ‘Uncertainty exists in the map, not in the territory.’” Aleatory uncertainty, however, exists in the territory. (Well, at least according to our map that uses quantum mechanics, per Bell’s Theorem – like, say, the time at which a radioactive atom decays.) This is what people call quantum uncertainty, indeterminism, true randomness, or recently (and somewhat confusingly to me) ontological randomness – referring to the fact that our ontology allows randomness, not that the ontology itself is in any way random. It may be better, in LessWrong terms, to think of uncertainty versus randomness – while being aware that the wider world refers to both as uncertainty. But does the distinction matter?
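A toy illustration of the distinction (my own sketch, not from the post): estimating the bias of a coin. Uncertainty about the bias behaves epistemically – it shrinks as we gather flips – while the outcome of any single flip, given the true bias, is treated as aleatory and never shrinks, no matter how much data we collect.

```python
import random

# Sketch: a coin with a bias unknown to the observer. Uncertainty about
# the bias is epistemic (reducible by observation); uncertainty about
# the next flip, given the bias, is treated as aleatory (irreducible).
random.seed(0)
true_bias = 0.7  # hidden from the observer

# Beta(1, 1) prior over the bias; update on 10,000 observed flips.
heads, tails = 1, 1
for _ in range(10_000):
    if random.random() < true_bias:
        heads += 1
    else:
        tails += 1

n = heads + tails
posterior_mean = heads / n
# Variance of a Beta(heads, tails) posterior: shrinks roughly as 1/n.
posterior_var = (heads * tails) / (n**2 * (n + 1))
# Variance of a single flip given the true bias: fixed at p(1-p).
per_flip_var = true_bias * (1 - true_bias)

print(f"estimated bias:     {posterior_mean:.3f}")   # near 0.7
print(f"epistemic variance: {posterior_var:.2e}")    # tiny after 10k flips
print(f"per-flip variance:  {per_flip_var:.3f}")     # stays at 0.21
```

More flips keep driving the posterior variance toward zero, but the per-flip variance is untouched – which is exactly the reducible-versus-irreducible split the terminology is pointing at.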

To clarify a key point, many facts that are treated as random, such as dice rolls, are actually mostly uncertain – in that with enough physics modeling and inputs, we could predict them. On the other hand, in chaotic systems, there is the possibility that “true” quantum randomness can propagate upwards into macro-level uncertainty. For example, a sphere of highly refined and shaped uranium that is *exactly* at the critical mass will set off a nuclear chain reaction, or not, based on the quantum physics of whether the neutrons from one of the first decays set off a chain reaction – after enough of them decay, the sphere will drop below the critical mass, and it becomes increasingly unlikely to set off a chain reaction. Of course, the question of whether the sphere is above or below the critical mass (given its geometry, etc.) can be a difficult-to-measure uncertainty, but it’s not aleatory – though some part of the question of whether it kills the guy trying to measure whether it’s just above or just below the critical mass will be random – so maybe it’s not worth finding out. And that brings me to the key point.

In a large class of risk problems, there are factors treated as aleatory – but they may be epistemic, just at a level where finding the “true” factors and outcomes is prohibitively expensive. Potentially, the timing of an earthquake that would happen at some point in the future could be determined exactly via a simulation of the relevant data. Why is it considered aleatory by most risk analysts? Well, doing it might require a destructive, currently technologically impossible deconstruction of the entire earth – making the earthquake irrelevant. We would start with measurement of the position, density, and stress of each relatively macroscopic structure, and then perform a very large physics simulation of the earth as it had existed beforehand. (We have lots of silicon from deconstructing the earth, so I’ll just assume we can now build a big enough computer to simulate this.) Of course, this is not worthwhile – but doing so would potentially show that the actual aleatory uncertainty involved is negligible. Or it could show that we need to model the macroscopically chaotic system at such high fidelity that microscopic, fundamentally indeterminate factors actually matter – and it was truly aleatory uncertainty. (So we have epistemic uncertainty about whether it’s aleatory; if our map were of high enough fidelity, and were computable, we would know.)

It turns out that most of the time, for the types of problems being discussed, this distinction is irrelevant. If we know that the value of information to determine whether something is aleatory or epistemic is negative, we can treat the uncertainty as randomness. (And usually, we can figure this out via a quick order-of-magnitude calculation: the value of perfect information for figuring out which side the dice lands on in this game is estimated at $100, building and testing/validating any model for predicting it would take me at least 10 hours, and my time is worth at least $25/hour – so it’s negative.) But sometimes, slightly improved models and slightly better data are feasible – and then it’s worth checking whether there is some epistemic uncertainty that we can pay to reduce. In fact, for earthquakes, we’re doing exactly that – we have monitoring systems that can give several minutes of warning, and geological models that can predict, to some degree of accuracy, the relative likelihood of different sized quakes.
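The back-of-the-envelope check above can be sketched in a few lines, using the post’s own numbers (the function name is just my label for the calculation):

```python
# Minimal sketch of the order-of-magnitude check: treat uncertainty as
# randomness whenever the value of perfect information, minus the cost
# of the modeling needed to obtain it, comes out negative.

def net_value_of_information(voi_dollars, hours_needed, hourly_rate):
    """Expected value of resolving the uncertainty, minus the cost of resolving it."""
    return voi_dollars - hours_needed * hourly_rate

# The dice game from the text: VoI of $100, at least 10 hours of model
# building and validation, time worth at least $25/hour.
net = net_value_of_information(100, 10, 25)
print(net)  # -150: negative, so treat the dice roll as randomness
treat_as_random = net < 0
print(treat_as_random)  # True
```

Since the cost only grows if the hours or hourly rate were underestimated, a negative result from these lower bounds is decisive.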

So, in conclusion: most uncertainty is lack of resolution in our map, which we can call epistemic uncertainty. This is true even if lots of people call it “truly random” or irreducibly uncertain – or, if they are fancy, aleatory uncertainty. Some of what we assume is uncertainty is really randomness. But lots of the epistemic uncertainty can be safely treated as aleatory randomness, and value of information is what actually makes a difference. And knowing the terminology used elsewhere can be helpful.