Scott Aaronson on Born Probabilities

This post attempts to popularize some of Scott Aaronson’s lectures and research results relating to Born probabilities. I think they represent a significant step towards answering the question “Why Born’s rule?” but do not seem to be very well known. Prof. Aaronson writes frequently on his popular blog, Shtetl-Optimized, but is apparently too modest to use it to do much promotion of his own ideas. I hope he doesn’t mind that I take up this task (and that he forgives any errors and misunderstandings I may have committed here).

Before I begin, I want to point out something that has been bugging me about the fictional Ebborian physics, which will eventually lead us to Aaronson’s ideas. So, let’s first recall the following passage from Eliezer’s story:

“And we also discovered,” continues Po’mi, “that our very planet of Ebbore, including all the people on it, has a four-dimensional thickness, and is constantly fissioning along that thickness, just as our brains do. Only the fissioned sides of our planet do not remain in contact, as our new selves do; the sides separate into the fourth-dimensional void.”

“Well,” says Po’mi, “when the world splits down its four-dimensional thickness, it does not always split exactly evenly. Indeed, it is not uncommon to see nine-tenths of the four-dimensional thickness in one side.”

...

“Now,” says Po’mi, “if fundamental physics has nothing to do with consciousness, can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?”

Ok, so the part that’s been bugging me is, suppose an Ebborian world splits twice: first into 1/3 and 2/3 of the original thickness (slices A and B respectively), then the B slice splits exactly in half, into two slices of 1/3 thickness each (C and D). Before the splitting, with what probability should you anticipate ending up in the slices A, C, and D? Well, according to the squaring rule, you have a 1/5 chance of ending up in A, and a 4/5 chance of ending up in B. Those in B then have equal chance of ending up in C and D, so each of them gets a final probability of 2/5.
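As a sanity check, here is a toy Python sketch of that per-split squaring rule (my own illustration, not anything from Aaronson’s papers): at each split, descendant probabilities are proportional to the squares of the relative thicknesses, and they multiply down the splitting tree.

```python
# Toy model: at each split, a slice's descendants get probabilities
# proportional to the squares of their *relative* thicknesses, and
# those probabilities multiply down the splitting tree.

def split_probs(thicknesses):
    squares = [t * t for t in thicknesses]
    total = sum(squares)
    return [s / total for s in squares]

# First split: A keeps 1/3 of the original thickness, B keeps 2/3.
p_A, p_B = split_probs([1/3, 2/3])                  # 1/5 and 4/5

# Second split: B divides evenly into C and D.
p_C_given_B, p_D_given_B = split_probs([1/3, 1/3])  # 1/2 each

p_C = p_B * p_C_given_B   # 4/5 * 1/2 = 2/5
p_D = p_B * p_D_given_B   # 4/5 * 1/2 = 2/5
print(p_A, p_C, p_D)
```

Note that the answer depends on the order of splitting: had A, C, and D appeared in a single three-way split, the same rule would have assigned them equal probabilities.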

Well, that’s not how quantum branching works! There, the probability of ending up in any branch depends only on the final amplitude of that branch, not on the order in which the branching occurred. This makes perfect sense, since decoherence is not an instantaneous process, and thinking of it as branching is only an approximation: worlds never completely split off and become totally independent of one another. In QM, the “order of branching” is not even well defined, so how can probabilities depend on it?

Suppose we want to construct an Ebborian physics where, as in QM, the probability of ending up in any slice depends only on the thickness of that slice, and not on the order in which the splitting occurs. How do we go about doing that? Simple: we just make that probability a function of the absolute thickness of a slice, instead of having it depend on the relative thickness at each splitting.

So let’s say that the subjective probability of ending up in any slice is proportional to the square of the absolute thickness of that slice, and consider the above example again. When the world splits into A and B, the probabilities are again 1/5 and 4/5 respectively. But when B splits again into C and D, A goes from probability 1/5 to 1/3, and C and D each get 1/3. That’s pretty weird… what’s going on this time?
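A minimal sketch of this modified rule (again my own toy model): square the absolute thicknesses and renormalize over every slice that currently exists.

```python
# Modified rule: probability proportional to the square of each
# slice's *absolute* thickness, renormalized over all existing slices.

def absolute_probs(thicknesses):
    squares = [t * t for t in thicknesses]
    total = sum(squares)
    return [s / total for s in squares]

# Before B splits: A is 1/3 thick, B is 2/3 thick.
before = absolute_probs([1/3, 2/3])        # [1/5, 4/5]

# After B splits into C and D: three slices, each 1/3 thick.
# A's probability rises from 1/5 to 1/3 although A itself did nothing.
after = absolute_probs([1/3, 1/3, 1/3])    # [1/3, 1/3, 1/3]
print(before, after)
```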

To use Aaronson’s language, splitting is not a 2-norm preserving transformation; it only preserves the 1-norm. Or to state this more plainly, splitting conserves the sum of the individual slices’ thicknesses, but not the sum of the squares of the individual thicknesses. So in order to apply the squaring rule and get a set of probabilities that sum to 1 at the end, we have to renormalize, and this renormalizing can cause the probability of a slice to go up or down, depending purely on what happens to other slices that it otherwise would have nothing to do with.
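The norm bookkeeping can be checked directly (a toy computation using the same 1/3, 2/3 thicknesses as the example above):

```python
# Splitting conserves the sum of thicknesses (1-norm) but not the sum
# of squared thicknesses: a slice of thickness t that splits in half
# contributes 2 * (t/2)**2 = t**2 / 2 afterwards, half of what it did.

def one_norm(ts):
    return sum(abs(t) for t in ts)

def sum_of_squares(ts):
    return sum(t * t for t in ts)

before = [1/3, 2/3]        # slices A and B
after = [1/3, 1/3, 1/3]    # B has split into C and D

print(one_norm(before), one_norm(after))              # both 1
print(sum_of_squares(before), sum_of_squares(after))  # 5/9 vs 1/3
```

Since the squared thicknesses no longer sum to the same total after a split, dividing by that total (renormalizing) is what lets A’s probability change without anything happening to A.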

Note that in quantum mechanics, the evolution of a wavefunction always preserves its 2-norm, not its 1-norm (nor the p-norm for any p ≠ 2). If we were to use any probability rule other than the squaring rule in QM, we would have to renormalize and thereby encounter this same issue: the probability of a branch would go up or down depending on other parts of the wavefunction that it otherwise would have little interaction with.
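As an illustration (mine, using a real-valued rotation rather than a full complex wavefunction), a unitary step such as a Hadamard rotation preserves a state’s 2-norm while changing its 1-norm:

```python
import math

# A Hadamard rotation is unitary: applying it preserves the 2-norm of
# a state vector but, in general, not its 1-norm.

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def two_norm_sq(v):
    return sum(x * x for x in v)

def one_norm(v):
    return sum(abs(x) for x in v)

state = [0.6, 0.8]        # 2-norm squared = 0.36 + 0.64 = 1
rotated = hadamard(state)

print(two_norm_sq(state), two_norm_sq(rotated))  # both 1
print(one_norm(state), one_norm(rotated))        # 1.4 vs 1.6/sqrt(2)
```

So the squaring rule is the one probability rule that needs no renormalization as the wavefunction evolves; any other exponent would force the global renormalization described above.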

At this point you might ask, “Ok, this seems unusual and counterintuitive, but lots of physics is counterintuitive. Is there some other argument that the probability rule shouldn’t involve renormalization?” And the answer to that is yes, because to live in a world with probability renormalization would be to have magical powers, including the ability to solve NP-complete problems in polynomial time. (And to turn this into a full anthropic explanation of the Born rule, similar to the anthropic explanations for other physical laws and constants, we just have to note that intelligence seems to have little evolutionary value in such a world. But that’s my position, not Aaronson’s, or at least he hasn’t argued for this additional step in public, as far as I know.)

Aaronson actually proved that problems in PP, which are commonly believed to be even harder than NP problems, can be solved in polynomial time using “fantasy” quantum computers that use variants of Born’s rule where the exponent doesn’t equal 2. But it turns out that the power of these computers has nothing to do with quantum computing, and instead has everything to do with probability renormalization. So here I’ll show how we can solve NP-complete problems (instead of PP, since that’s easier to think about) in polynomial time using the modified Ebborian physics that I described above.

The idea is actually very easy to understand. Each time an Ebborian world slice splits, its descendant slices decrease in total probability, while every other slice increases in probability. (Recall how when B split, A’s probability went from 1/5 to 1/3, and B’s 4/5 became a total of 2/3 for C and D.) So to take advantage of this, we first split our world into an exponential number of slices of equal thickness, and let each slice try a different possible solution to the NP-complete problem. If a slice finds that its candidate solution is a correct one, then it does nothing; otherwise it splits itself a large number of times. Since that greatly decreases the splitting slices’ own probabilities and increases the probabilities of the slices that didn’t split, we should expect to find ourselves in one of the latter kind of slices when the computation finishes, which (surprise!) happens to be one that found a correct solution. Pretty neat, right?
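Here is a toy numerical version of the trick (my sketch; the problem instance and the split count k are made up for illustration). Each of 2^n slices tests one candidate assignment; wrong slices split 2^k times, so under the absolute-thickness-squared rule almost all of the renormalized probability lands on the slice holding the correct solution.

```python
# Toy "NP search" in the modified Ebborian physics. Hypothetical
# instance: the only satisfying assignment is all-ones.

def satisfies(assignment):
    return all(assignment)

n, k = 4, 20                  # 2**4 candidates; wrong slices split 2**20 ways
weights = []                  # total squared thickness per slice family
for i in range(2 ** n):
    candidate = [(i >> j) & 1 == 1 for j in range(n)]
    thickness = 1 / 2 ** n    # equal starting thickness per candidate
    if satisfies(candidate):
        weights.append(thickness ** 2)        # stays one slice, full weight
    else:
        # 2**k slivers contribute 2**k * (t / 2**k)**2 = t**2 / 2**k
        weights.append(thickness ** 2 / 2 ** k)

prob_correct = weights[2 ** n - 1] / sum(weights)  # all-ones is the last index
print(prob_correct)   # very close to 1: we almost surely see the solution
```

A real instance would need exponentially many slices, but each slice only does polynomial work, which is why this would count as solving the problem in polynomial time for the observers inside.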

ETA: The lecture notes and papers I linked to also give explanations for other aspects of quantum mechanics, such as why it is linear, and why it preserves the 2-norm and not some other p-norm. Read them to find out more.