The skeleton of the argument is:
1. Present a particular thought experiment, intended to provoke anthropic reasoning. There are two moderately plausible answers: “50%” and “a billion to one against”.
2. Assume, for the sake of argument, that the answer to the thought experiment is 50%. Note that the “50%” answer corresponds to ignoring the color of the room, i.e. “not updating on it” in the Bayesian jargon.
3. The thought experiment is analogous to the Boltzmann-brain hypothesis. In particular, the color of the room corresponds to the ordered-ness of our experiences.
4. With the exception of the ordered-ness of our experiences, a stochastic all-experience generator would be consistent with all observations.
5. Occam’s Razor: use the simplest possible hypothesis consistent with observations.
6. A stochastic all-experience generator would be a simple hypothesis.
7. From 3, 4, 5, and 6, predict that the universe is a stochastic all-experience generator.
8. From 7, derive some very unpleasant consequences.
9. From 8, reject the assumption made in 2.
I think the argument can be improved.
According to the minimum description length (MDL) notion of science, we have a model and a sequence of observations. A “better” model is one that is short and compresses the observations well. The stochastic all-experience generator is a short model, but it doesn’t compress our observations. I think this amounts to saying that, under the MDL version of Occam’s Razor, 6 is false.
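To make the MDL point concrete, here is a toy sketch of the two-part cost, len(model) + len(data | model). All the bit counts below are made-up assumptions, purely for illustration; the only structural claim is that a uniformly random generator assigns each N-bit history probability 2^-N, so its data term is N bits with no compression:

```python
N_BITS = 1_000_000  # assumed length of our (highly ordered) observation history, in bits

# Model A: "stochastic all-experience generator" -- emits uniformly random bits.
# The model itself is tiny, but it assigns probability 2^-N to every history,
# so the data term is -log2(2^-N) = N bits: no compression at all.
model_a_len = 100          # assumed: a few-line program
data_given_a = N_BITS

# Model B: a lawful-universe model -- a longer program, but one under which
# an ordered history is highly probable, so the data compresses well.
model_b_len = 100_000      # assumed: a much more complex program
data_given_b = 10_000      # assumed: ordered data compresses ~100x

total_a = model_a_len + data_given_a
total_b = model_b_len + data_given_b

print(f"stochastic generator: {total_a:,} bits")
print(f"lawful universe:      {total_b:,} bits")
```

Under these made-up numbers the lawful model wins by a wide margin despite being a “longer” hypothesis, because the total cost is dominated by the data term, not the model term.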
The article claims that the stochastic all-experience generator is a “simple” model of the world, one that would defeat more common-sense models of the world in an Occam’s-Razor-off in the absence of some sort of anthropic defense. That claim (6) might be true, but it needs more support.
Isn’t the argument in 1 false? If one applies Bayes’ theorem with an initial probability of 50% and a new likelihood ratio of a billion to one, don’t you get odds of 500,000,000 to one?
I think you may be sincerely confused. Would you please reword your question?
If your question is whether someone (either me or the OP) has committed a multiplication error: yes, it’s entirely possible, but multiplication is not the point. The point is anthropic reasoning and whether “I am a Boltzmann brain” is a simple hypothesis.
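For what it’s worth, here is a minimal sketch of the odds-form Bayesian update, with the thread’s billion-to-one figure plugged in as an assumption. In odds form, a 50% prior is prior odds of 1:1, and multiplying by a likelihood ratio of a billion to one gives posterior odds of a billion to one (against), not 500,000,000 to one:

```python
from fractions import Fraction

prior_odds = Fraction(1, 1)            # 50% prior probability, i.e. odds of 1:1
likelihood_ratio = Fraction(1, 10**9)  # evidence of a billion to one against (assumed figure)

# Odds-form Bayes: posterior odds = prior odds * likelihood ratio
posterior_odds = prior_odds * likelihood_ratio

# Convert odds back to a probability: p = odds / (1 + odds)
posterior_prob = posterior_odds / (1 + posterior_odds)

print(posterior_odds)  # 1/1000000000
print(float(posterior_prob))
```

Whether that likelihood ratio is the right quantity to use is exactly the disputed anthropic question; the arithmetic itself is not where the disagreement lies.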
I agree very much.
It reminds me of a remark Eliezer made in his diavlog with Scott about the many-worlds interpretation of QM. There he also said something to the effect that Occam’s razor is only about the theory, not about the “amount of stuff”.
I think that was the same fallacy. When using MDL, you have to give a short description of your actual observation history, or at least an upper bound on its compressed length. In many-worlds theories these bounds can become very nontrivial, and the observations can easily dominate the description length; therefore Occam’s razor cannot be applied without a thorough quantitative analysis.
Of course, in that particular context it was true that a random state reduction is no better than a many-worlds hypothesis; in fact, it is slightly worse. One should add, however, that a deterministic (low-complexity) state reduction would be far superior.
Regardless, such lighthearted remarks about the “amount of stuff” in Occam’s razor are at the least misleading.
“That claim (6) might be true, but it needs more support.” Agreed.