OK, that’s a good example. Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones, which sends certain signals along the optic nerve and so on. In the case of the red-glove copy, a thermodynamic fluctuation occurs that leads it through the exact same physical process. That is, the fluctuation makes the cones react just as if they had interacted with green photons, and the downstream process is exactly the same. In this case, you’d want to say both duplicates have unjustified beliefs? The green-glove duplicate arrived at its belief through a reliable process; the red-glove duplicate didn’t. I just don’t see why our conclusion about the justification has to be the same across both copies. Even if I bought this constraint, I’d want to say that both of their beliefs are in fact justified. The red-glove one’s belief is false, but false beliefs can be justified. The red-glove copy just got really unlucky.
Let’s say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones
In my example, the gloves are not observed and the boxes are closed; the states of the brains of both copies, the nerve impulses they generate, and the words they say will all, by construction, be identical throughout the thought experiment.
(See also the edit to the grandparent comment; it could be the case that we already agree.)
Whoops, missed that bit. Of course, if either copy is forming a judgment about the glove’s color without actual empirical contact with the glove, then its belief is unjustified. I don’t think the identity of the copies is relevant to our judgment in this case. What would you say about the example I gave, where the box is open and the green-glove copy actually sees the glove? By hypothesis, the brains of both copies remain physically identical throughout the process. In this case, do you think we should judge that there is something problematic about the green-glove copy’s judgment that the glove is green? This case seems far more analogous to a situation involving a human and a Boltzmann brain.
ETA: OK, I just saw the edit. We’re closer to agreement than I thought, but I still don’t get the “unsatisfactory” part. In the example I gave, I don’t think there’s anything unsatisfactory about the green-glove copy’s belief formation mechanism. It’s a paradigm example of forming a belief through a reliable process.
The sense in which your (correct) belief that you are not a Boltzmann brain is justified (or unjustified) seems to me analogous to the situation with the green-glove copy believing that its unobserved glove is green. Justification is a tricky thing: actually not being a Boltzmann brain, or actually being the green-glove copy could in some sense be said to justify the respective beliefs, without a need to rely on distinguishing evidence, but it’s not entirely clear to me how that works.