But for today, suppose you reply “50%”. Thinking, perhaps: “I don’t understand this whole consciousness rigamarole, I wouldn’t try to program a computer to update on it, and I’m not going to update on it myself.”
In that case, why don’t you believe you’re a Boltzmann brain?
This sounds backwards (sideways?); the reason to (strongly) believe one is a Boltzmann brain is that there are very many of them in some weighting compared to the “normal” you, which corresponds to accepting a probability of one in a billion in this thought experiment. If you don’t update, then the other billion people are (epistemically) irrelevant, and in exactly the same way so are Boltzmann brains. It doesn’t at all matter how many visual cortexes spontaneously form in the Chaos.
In other words, there are two parts to not updating: you can’t place a greater weight on particular states of the world by arguing that this particular kind of situation is privileged, but at the same time you can’t be disturbed by an argument that there is a huge weight on some other class of crazy situations which leaves your privileged situation far behind. You can’t refute the assertion that you are a Boltzmann brain, but you are undisturbed by the assertion that there are Boltzmann brains.
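For concreteness, here is the update being refused, as a toy calculation (a minimal sketch; the 50/50 prior and the billion copies come from the thought experiment, while the copy-weighted counting rule is my assumption about what “updating” means here):

```python
# Toy anthropic update for the billion-copies thought experiment.
# Assume a 50/50 prior between a world with one copy of you and a world
# with a billion copies, and let "updating" mean weighting each world
# by how many copies in it have your experience.
prior = {"one_copy": 0.5, "billion_copies": 0.5}
copies = {"one_copy": 1, "billion_copies": 10**9}

weights = {w: prior[w] * copies[w] for w in prior}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in prior}

print(posterior["one_copy"])  # ~1e-9: the one-in-a-billion figure above
# Refusing to update keeps the posterior at the 50/50 prior, no matter
# how many copies (or Boltzmann brains) the other world contains.
```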
Of course, in all of these cases some situations may be preferentially privileged. You don’t care about what happens to a Boltzmann brain, or more likely just can’t do much for it anyway. In the rooms with a billion copies, you may care about whether only one person makes a mistake or a whole billion of them do (total utilitarianism). But that’s the utility of the situation, not its probability, and the construction of the thought experiment clearly doesn’t try to make utility symmetrical; hence the skewed intuition.
The confusion between probability and utility seems to explain the intuition: the weighting is there, just not in the probability, and in fact it can’t be represented as probability. (In that case the weighting is not so much in utility either, since there is no anthropic utility just as there is no anthropic probability, but in how the global preference responds to actions performed in particular situations.)
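To make the last two paragraphs concrete, here is a toy decision sketch (my own illustration, assuming per-person costs simply add up, i.e. total utilitarianism): the billion-fold weighting reappears on the utility side even when the probabilities are left un-updated at 50/50.

```python
# Keep the non-updated 50/50 probabilities, but let cost be additive
# across copies (total utilitarianism). The billion-fold asymmetry then
# shows up in the expected utility, not in the probability.
p = {"one_copy": 0.5, "billion_copies": 0.5}
copies = {"one_copy": 1, "billion_copies": 10**9}
cost_per_person = 1.0  # cost of one person acting on the wrong guess

def expected_cost(wrong_world: str) -> float:
    """Expected total cost if the guess is wrong in the given world."""
    return p[wrong_world] * copies[wrong_world] * cost_per_person

print(expected_cost("one_copy"))        # 0.5
print(expected_cost("billion_copies"))  # 5e8: the weighting lives in utility
```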
The problem is that if you don’t update on the proportion of sentients who have your particular experience, then there is a much simpler hypothesis than our current physical model which would generate and “explain” your experiences, namely: “Every experience happens within the dust.”
To put it another way, the dust hypothesis is extremely simple and explains why this experience exists. It just doesn’t explain why the experience is ordered rather than disordered, when ordered experiences are such a tiny fraction of all experiences. If you think the latter is a non-consideration, then you should just go with the simplest explanation.
Traditional explanation presupposes updating; that is probably the relevant tension here. If you don’t update, you can’t explain in the sense of updating, and the notion of explanation itself has to be revised in this light.
Are the Boltzmann brain hypothesis and the dust hypothesis really simpler than the standard model of the universe, in the sense of Occam’s razor? It seems to me that they aren’t.
I’m thinking specifically about Solomonoff induction here. A Boltzmann brain hypothesis would be a program that correctly predicts all my experiences up to now, and then starts predicting unrelated experiences. Such a program of minimal length would essentially emulate the standard model until output N, and then start doing something else. So it would be longer than the standard model by however many bits it takes to encode the number N.
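As a rough illustration of that length penalty (a toy sketch only; actual Solomonoff induction is uncomputable, and the program length used here is a made-up placeholder):

```python
def encoding_bits(n: int) -> int:
    """Bits for a simple self-delimiting encoding of the integer n:
    its binary digits plus a same-length delimiter (~2*log2(n))."""
    return 2 * max(1, n.bit_length())

standard_model_bits = 10_000   # placeholder length of the standard-model program
N = 10**9                      # step after which the predictions diverge

# The Boltzmann-brain program is the standard-model program plus an
# explicit divergence point N, so it is longer by encoding_bits(N).
boltzmann_bits = standard_model_bits + encoding_bits(N)

# Under the 2**-length prior the ratio of weights depends only on the
# extra bits (the common 2**-standard_model_bits factor cancels).
extra = boltzmann_bits - standard_model_bits
print(f"extra bits to encode N: {extra}")           # 60 for N = 10**9
print(f"prior weight ratio: {2.0 ** -extra:.3g}")   # 2**-60, about 8.7e-19
```

Under this encoding a divergence point N costs about 2·log₂N extra bits, i.e. a prior penalty on the order of 1/N².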