Without having looked closely at the rest of your comment yet:
What an ‘Occamian’ weighting buys us is not consistency with our experience of a structured universe (because a Boltzmann brain hypothesis already gives us that) but the ability to use science to decide what to believe—and thus what to do—rather than descend into a pit of nihilism and despair.
Here I risk a meaningless map/territory distinction, and yet it seems straightforwardly possible that the local universe—the thing we care about most—is perfectly well modeled by a universal prior, whereas the ensemble—say, a stack of universal-prior pancakes infinitely high, with each pancake assigned a unique Turing language along the real number line—is more accurately described by something vaguely like a uniform prior. (I have no idea if this is useful, but maybe this makes it clearer, if it wasn’t already painfully clear: non-technically, you’ve got a cylinder Ensemble made up of infinitely many infinitely thin mini-cylinder Universes (universal priors), where each mini-cylinder (circle!) is tagged with a “language” that is arbitrarily close to the one above or below it—‘close’ in the sense that Scheme and Haskell are closer together than The Way Will Newsome Describes The World and Haskell. As an extremely gratuitous detail, I’m imagining the most commonly used strings in each language scribbled along the circumference of each mini-cylinder, in exponentially decreasing font size, with branching that goes exactly all the way around the circumference. If you zoom out a little to examine a continuous set of mini-cylinders, that slightly-less-mini cylinder too has its own unique language: it’s all overlapping. If you zoom out to see the whole cylinder you get… nothing! Or, well, everything. If your theory can explain everything, you have zero knowledge.)
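(For the “arbitrarily close languages” intuition, here’s a minimal toy sketch in Python. All numbers are made up; it just assumes the standard Solomonoff-style move of weighting a hypothesis by 2^−(length of its shortest program), and illustrates why two nearby pancakes—two universal machines that can translate each other’s programs with a short fixed translator—assign priors that agree up to a bounded constant factor.)

```python
from fractions import Fraction

def universal_prior_weight(program_length: int) -> Fraction:
    """Solomonoff-style weight for a program: 2 ** -(program length)."""
    return Fraction(1, 2 ** program_length)

# Hypothetical shortest-program lengths for the SAME observation string,
# under two different "languages" (universal machines). These numbers are
# invented for illustration. The invariance theorem says the two lengths
# differ by at most a language-dependent constant c (the length of a
# cross-compiler), so the two priors agree up to a factor of 2**c.
len_in_scheme_like = 40   # made-up length under language A
len_in_haskell_like = 43  # made-up: a small translation overhead under B

w_a = universal_prior_weight(len_in_scheme_like)
w_b = universal_prior_weight(len_in_haskell_like)
ratio = w_a / w_b  # bounded by 2**c, where c is the translator length
```

So adjacent mini-cylinders disagree only by a modest constant factor, while far-apart pancakes (huge translator, huge c) can disagree wildly—which is roughly what “close” was gesturing at.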
(In decision theory such a scenario really messes with our notions of timeless control—what does it mean, if anything, for your decision algorithm to have an equivalent or analogous algorithm located inside a pancake in some far-off part of the pancake stack, and thus written in an entirely different language? It’s a reframing of the “controlling copies of you in rocks” question, but here it feels more like you should be able to timelessly control the other algorithm.)
I don’t immediately see how your comment argues against this idea, but again I haven’t looked at it closely. (Honestly I immediately very much pattern-matched it to “things that really didn’t convince me in the past”, but I’ll try to see if perhaps I’ve just been missing something obvious.)