The “map” and “territory” analogy as it pertains to potentially novel territories that people may not anticipate

So in terms of the “map” and “territory” analogy, the goal of rationality is to make our map correspond more closely with the territory. This correspondence comes in two forms: (a) area and (b) accuracy. Person A could have a larger map than person B, even if A’s map is less accurate than B’s. There are ways to increase the area of your map, often by testing things at the boundary conditions of the territory. I often like asking boundary-value/possibility-space questions like “well, what might happen to the atmosphere of a rogue planet as time approaches infinity?”, since I feel they can give us additional insight into the robustness of planetary-atmosphere models across different environments (and the possibility that I might be wrong motivates me to spend more effort testing and calibrating my model than I otherwise would). My intense curiosity about these highly theoretical questions often puzzles the experts in the field, though, since they feel these questions aren’t empirically verifiable (and so consider them less “interesting”). I also like to study other things that many academics aren’t necessarily comfortable studying (perhaps because it is harder to be empirically rigorous about them), such as the possible social outcomes that could spring out of a radical social experiment. When you’re concerned with maintaining the accuracy of your map, it may come at the sacrifice of dA/dt, where A is the map’s area (so your area increases more slowly with time).

I also feel that social breaching experiments are another interesting way of increasing the area of my “map”, since they help me test the robustness of my social models in situations that people are unaccustomed to. Hackers often perform these sorts of experiments to test the robustness of security systems (in fact, a low level of potentially embarrassing hacking is probably optimal for ensuring that a security system remains robust, although it’s entirely possible that even then, people may pay too much attention to certain models of hacking, prompting potentially malicious hackers to dream up new models of hacking).

With possibility space, you could encode the conditions of an environment as a point in a k-dimensional binary space such as (1,0,0,1,0,...), where 1 indicates the presence of some variable in that environment and 0 indicates its absence. We could then use Huffman coding over the frequencies of these condition-combinations across the environments we most often encounter, so that less probable environments receive longer Huffman codes (i.e., carry higher values of entropy/information).
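
As a concrete sketch (in Python, with made-up sample data and k = 3 variables), we could build the codes like this; rarer environment vectors end up with longer codewords, matching their higher information content:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(freqs):
    """Build Huffman codewords from symbol frequencies.

    Rarer symbols (here, rarer environment vectors) get longer codes,
    mirroring their higher information content, -log2(p).
    """
    tiebreak = count()  # unique tie-breaker so the heap never compares dicts
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Each tuple encodes the presence (1) or absence (0) of k = 3
# environmental variables; the counts are invented for illustration.
observed = [(1, 0, 0)] * 8 + [(1, 1, 0)] * 4 + [(0, 0, 1)] * 2 + [(1, 1, 1)]
for env, code in sorted(huffman_codes(Counter(observed)).items(),
                        key=lambda kv: len(kv[1])):
    print(env, code)  # the rare (1,1,1) environment gets the longest code
```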

As we know from Taleb’s book “The Black Swan”, many people frequently underestimate the prevalence of “long tail” events (which are often part of the unrealized portion of possibility space, and have longer Huffman codes). This causes them to over-rely on Gaussian distributions even in situations where a Gaussian is inappropriate, and it is often said that this was one of the factors behind the 2008 financial crisis.
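
To put rough numbers on that underestimation, here is a small sketch (assuming SciPy is available) comparing tail probabilities under a standard Gaussian with those under a heavier-tailed Student-t; the df = 3 choice is an illustrative stand-in for a fat-tailed process, not a claim about any particular market:

```python
from scipy.stats import norm, t

# Probability of exceeding threshold k under a standard normal versus a
# Student-t with 3 degrees of freedom (an illustrative fat-tailed choice).
for k in (3, 4, 5, 6):
    gauss_tail = norm.sf(k)      # P(X > k) under N(0, 1)
    heavy_tail = t.sf(k, df=3)   # P(X > k) under Student-t(3)
    print(f"k = {k}: Gaussian {gauss_tail:.2e}, Student-t {heavy_tail:.2e}, "
          f"ratio {heavy_tail / gauss_tail:,.0f}x")
```

The further out the threshold, the more severely the Gaussian understates the event’s probability, which is exactly the long-tail blind spot described above.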

Now, what does this investigation of possibility space allow us to do? It allows us to re-examine the robustness of our formal systems: how sensitive or flexible a system is at carrying out its duties in the face of perturbations to the environment we believe it applies to. We often have a tendency to overestimate the consistency of the environment. But if we consistently test the boundary conditions, we might better estimate the “map” that corresponds to the “territory” of different (or potentially novel) environments, ones that exist in possibility space but not yet in realized possibility space.
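
As a toy illustration of what probing boundary conditions might look like, here is a deliberately made-up Python model of a rogue planet’s atmospheric escape, checked at the extremes of its domain (including t → infinity); the parameters p0 and tau are placeholders, not empirical values:

```python
import math

def surface_pressure(t_years, p0=1.0e5, tau=1.0e9):
    """Toy exponential-escape model of a rogue planet's surface pressure.

    p0 (Pa) and tau (years) are invented placeholder parameters.
    """
    return p0 * math.exp(-t_years / tau)

# Probe the boundary of the model's intended domain: does it stay
# finite, non-negative, and monotonically decreasing as t grows?
previous = float("inf")
for t_probe in (0.0, 1.0e9, 1.0e12, 1.0e15, float("inf")):
    p = surface_pressure(t_probe)
    assert 0.0 <= p <= previous, f"model misbehaves at t = {t_probe}"
    previous = p
    print(f"t = {t_probe:.1e} yr -> p = {p:.3e} Pa")
```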

The thing is, though, that many people have a habitual tendency to avoid exploring boundary conditions. The space of realized events is always far smaller than the entirety of possibility space, and it is usually impractical to explore all of possibility space. Since our time is limited, and the payoffs of exploring the unrealized portions of possibility space are uncertain (and often time-delayed, and subject to hyperbolic time-discounting, especially when the payoffs may come only after a single person’s lifetime), people often don’t explore these portions of possibility space (although life extension, combined with various creative approaches to decreasing people’s time preference, might change the incentives).

Furthermore, we cannot empirically verify unrealized portions of possibility space using the traditional scientific method. Bayesian methods may be more appropriate, but even then, people may be susceptible to plugging the wrong values into Bayes’ formula (again, perhaps from over-assuming continuity in environmental conditions). As in my original example about hacking, it is all too easy for the designers of security systems to use the wrong Bayesian priors when they are being observed by potential hackers, who may have ideas about how to take advantage of those priors.
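
To see how much the conclusion hinges on the prior, here is a minimal Bayes’-rule sketch with hypothetical intrusion-detection numbers; the sensitivity and false-positive rate are invented, and the point is only that the same alarm evidence yields wildly different posteriors under different assumed attack base rates, which is precisely the quantity an observing adversary can game:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(attack | alarm) from P(attack), P(alarm | attack),
    and P(alarm | no attack)."""
    evidence = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / evidence

# Hypothetical detector: 99% sensitivity, 1% false-positive rate. The
# defender's prior reflects realized history; an attacker who knows it
# can operate in the regime the prior treats as negligible.
for prior in (1e-6, 1e-4, 1e-2):
    print(f"P(attack) = {prior:.0e} -> "
          f"P(attack | alarm) = {posterior(prior, 0.99, 0.01):.4f}")
```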