Do you have a link to that argument? I think Bayesian updates consist of either reducing a prior or increasing it, and then renormalizing all related probabilities. Many updatable observations take the form of replacing an estimate of a future experience ("I will observe sunshine tomorrow") with a 1 or a 0 (I did or did not observe it; perhaps not quite 0 or 1 if you want to account for hallucinations and imperfect memory).
Anthropic updates are either Bayesian or impossible. The underlying question remains: "How does this experience differ from my probability estimate?" For Bayes or for Solomonoff, one has to answer: "What has changed for my prediction? In what way am I surprised and have to change my calculation?"
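To make that concrete, here's a minimal toy sketch of what I mean (my own made-up example, with invented hypothesis names and numbers, not anything from the linked post): a prior over hypotheses gets multiplied by the likelihood of the observation and renormalized, and the "did or did not observe it" likelihoods stay just shy of 0 and 1 to allow for imperfect observation.

```python
def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Multiply each hypothesis's prior by the likelihood of the observation
    under that hypothesis, then renormalize so the posteriors sum to 1."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two toy hypotheses about the process generating tomorrow's weather.
prior = {"mostly_sunny_climate": 0.5, "mostly_rainy_climate": 0.5}

# P(I observe sunshine | hypothesis). I "observed" sunshine, but I allow a small
# chance the observation is wrong, so the likelihoods never hit exactly 0 or 1.
observed_sunshine_likelihood = {
    "mostly_sunny_climate": 0.8 * 0.99 + 0.2 * 0.01,  # sunshine seen correctly, or rain misperceived
    "mostly_rainy_climate": 0.2 * 0.99 + 0.8 * 0.01,
}

posterior = bayes_update(prior, observed_sunshine_likelihood)
print(posterior)  # the sunny-climate hypothesis gains probability mass
```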
https://www.alignmentforum.org/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities I think?
I have a totally non-Solomonoff explanation of what’s going on, which actually goes full G.K. Chesterton—I assign anthropic probabilities because I don’t assume that my not waking is impossible. But I’m not sure how a Solomonoff inductor would see it.