Reference Classes for Randomness

(Follow-up to Randomness vs. Ignorance)

I’ve claimed that, if you roll a die, your uncertainty about the result of the roll is **random**, because, in 1/6th of all situations where one has just rolled a die, it will come up a three. Conversely, if you wonder about the existence of a timeless God, whatever uncertainty you have is **ignorance**. In this post, I make the case that this distinction isn’t just an analog to probability inside vs. outside a model, but is actually fundamental (if some more ideas are added).

The randomness in the above example doesn’t come from some inherent “true randomness” of the die. In fact, this notion of randomness is compatible with determinism. (You could then argue it is not real randomness but just ignorance in disguise, but please just accept the term **randomness**, whenever I bold it, as a working definition.) This **randomness** is simply the result of taking all situations which are identical to the current one from your perspective, and observing that, among those, one in six will have the die come up a three. This is a general principle that can be applied to any situation: a fair die, a biased die, delay in traffic, whatever.

The “identical” in the last paragraph needs unpacking. If you roll a die and we consider only the situations that are exactly identical from your perspective, then the die will come up a three in either a lot more or a lot less than 1/6th of them. Regardless of whether the universe is fully deterministic or not, the current state of the die is sure to at least correlate with the chance for a three to end up on top.

However, you are not actually able to distinguish between the situation where you just rolled a die in such a way that it will come up a three, and the situation where you just rolled it in such a way that it will come up a five, and thus you need to group both situations together. More precisely, you need to group all situations that, to you, look indistinguishable with respect to the result of the die into one class. Then, if among all situations that belong to this class, the die comes up a three in 1/6th of them, your uncertainty with respect to the die roll is **random** with probability 1/6 for a three. This grouping is based both on computational limitations (you see the die but can’t compute how it’ll land) and on missing information (you don’t see the die). If you were replaced by a superintelligent agent, their reference class would be smaller, but some grouping based on hidden information would remain. Formally, think of an equivalence relation on the set of all brain states.
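The construction above can be made concrete with a toy simulation (all names here are illustrative, not from the original post): model each situation as a hidden die outcome plus whatever the agent can observe, group situations into reference classes by the observation alone, and check that the frequency of a three within the agent's class comes out near 1/6.

```python
import random
from collections import defaultdict

random.seed(0)

def sample_situation():
    """A 'situation' is (hidden state, what the agent can distinguish).

    The agent sees only that a die was just rolled, not how it will land,
    so every roll falls into the same equivalence class.
    """
    outcome = random.randint(1, 6)      # hidden state of the world
    observation = "just rolled a die"   # all the agent can tell apart
    return outcome, observation

situations = [sample_situation() for _ in range(60_000)]

# Group situations that look indistinguishable to the agent into one class.
classes = defaultdict(list)
for outcome, obs in situations:
    classes[obs].append(outcome)

# Within the agent's reference class, the fraction of threes approximates 1/6.
for obs, outcomes in classes.items():
    freq = sum(o == 3 for o in outcomes) / len(outcomes)
    print(f"{obs!r}: fraction of threes = {freq:.3f}")
```

A sharper-eyed agent would return a more informative `observation` (say, the die's initial orientation and velocity), splitting the one big class into many smaller ones with frequencies further from 1/6 — which is exactly the superintelligent-agent point above.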

So at this point, I’ve based the definition of randomness both on a frequentist principle (counting the number of situations where the die comes up a three vs. not a three) and on a more Bayesian-like principle of subjective uncertainty (taking your abilities as the basis for the choice of reference class). Maybe this doesn’t yet look like a particularly smart way to do it. But in this post, I am only arguing that this model is consistent: all uncertainty can be viewed as made up of **randomness** and/or ignorance, and no contradictions arise. In the next post, I’ll argue that it’s also quite useful, in that several controversial problems are answered immediately by adopting this view.