I think a reasonable “Theory X” candidate is “treat different centered worlds as different hypotheses, and construct objective worlds as emergent higher-level objects.” It extends nicely to cases where you want to include a bunch more information that ruins the nice symmetrical “S*A” cleanliness.
Combining this with Solomonoff induction solves things like Boltzmann brains, but also leads to some weirdness that I’d be interested in your thoughts on (though I think part of it is just confusion on how to locate people within a physical hypothesis in Solomonoff induction).
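To make the Boltzmann-brain point concrete, here is a toy sketch (all bit counts are made-up illustrative numbers, not derived from anything): if each centered-world hypothesis is a pair (world program, locating index) with prior weight 2^-(total description length), then a Boltzmann brain in simple physics needs an enormous index to pick out one particular fluctuation, so it gets crushed in the posterior.

```python
def description_length(world_bits: int, index_bits: int) -> int:
    """Bits to specify a centered-world hypothesis: the objective world
    program plus the index locating the observer within it.
    Prior weight is 2^-(this length)."""
    return world_bits + index_bits

# Hypothetical numbers for illustration only.
# Ordinary observer: simple physics, short address (evolved observers
# sit in easily-described places).
ordinary = description_length(world_bits=1000, index_bits=100)

# Boltzmann brain: same simple physics, but locating one particular
# thermal fluctuation deep in the future takes a huge index.
boltzmann = description_length(world_bits=1000, index_bits=10**6)

# If both hypotheses predict the same observations, the posterior odds
# favor the ordinary observer by a factor of 2^(difference in lengths).
log2_odds = boltzmann - ordinary
print(log2_odds)  # ~10^6 bits in favor of the ordinary observer
```

The weirdness you mention plausibly lives in the `index_bits` term: it is not obvious what the right scheme is for pointing at a person inside a physical hypothesis, and different schemes give different penalties.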