Two quick thoughts:
Any two theories can be made compatible by allowing for some additional correction factor (e.g. a “leverage penalty”) designed to reconcile them. As such, all the work rests on the question “is the leverage penalty justified?”
For said justification, there has to be some sort of defensible territory-level reasoning: “does it carve reality at its joints?”, “is this the world we live in?”
The problem I see with the leverage penalty is that there is no path of Bayesian updating that will get you to such a low prior. It’s the mirror image of “you can never process enough bits to get away from such a low prior”, namely “you can never process enough bits to justify assigning such a low prior in the first place” (the blade cuts both ways).
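To make the symmetry explicit, here is the update written in log-odds form. The bound on the evidence term is a sketch under an extra assumption not stated above: that each hypothesis assigns the observed N-bit stream probability at least 2^{-N} (say, because each includes a uniform-noise component).

$$
\log_2 \frac{P(H \mid D)}{P(\neg H \mid D)} \;=\; \log_2 \frac{P(H)}{P(\neg H)} \;+\; \log_2 \frac{P(D \mid H)}{P(D \mid \neg H)}, \qquad \left|\,\log_2 \frac{P(D \mid H)}{P(D \mid \neg H)}\,\right| \le N.
$$

Under that assumption, N bits of data can move your log-odds by at most N bits in either direction; escaping a prior of $2^{-K}$ and justifying a prior of $2^{-K}$ are the same K-bit job.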
The reason, in part, is that whatever confidence you have in the governing laws of physics, in the causal structure and dependency graphs and such, is predicated on the sensory bitstream of your life so far; no more than that, it’s a strict upper bound. You could only gain confidence that the prior on affecting a googolplex people is that low by using that accumulated lifetime bitstream, but then the trap shuts: just as you can’t update your way out of such a low prior, you can’t use any confidence gained in the current system by way of your lifetime sensory input to get down to such a low prior. You can be very sure you can’t affect that many people, based on your understanding of how causal nodes are interconnected, but you can’t be that sure, since that understanding rests on a comparatively much smaller number of bits of evidence.
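A back-of-the-envelope comparison of the two quantities involved; the bandwidth and lifespan figures below are loose illustrative assumptions, not measurements, and the conclusion survives them being off by dozens of orders of magnitude:

```python
import math

# -- All figures below are illustrative assumptions, not measurements. --

# Assumed raw lifetime sensory bandwidth: ~10^9 bits/s, over ~100 years.
BITS_PER_SECOND = 1e9
LIFETIME_SECONDS = 100 * 365.25 * 24 * 3600  # ~3.2e9 s

# Total bits of evidence a lifetime could possibly supply: ~3e18.
lifetime_bits = BITS_PER_SECOND * LIFETIME_SECONDS

# A leverage penalty scaled to affecting a googolplex (10^(10^100)) people
# corresponds to a prior of ~10^-(10^100), i.e. this many bits of evidence:
penalty_bits = (10**100) * math.log2(10)  # ~3.3e100 bits

print(f"lifetime evidence budget:        ~{lifetime_bits:.1e} bits")
print(f"bits needed to justify penalty:  ~{penalty_bits:.1e} bits")
print(f"shortfall factor:                ~{penalty_bits / lifetime_bits:.1e}")
```

Even with absurdly generous figures, the lifetime evidence budget falls short of the penalty by a factor on the order of 10^82, which is the sense in which no sensory bitstream gets you down to (or out of) such a prior.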
It’s a prior ex machina, with little more justification than just saying “I don’t deal with numbers that large/small in my decision making”.