Let’s say you have a computer set up to measure the temperature in a particular room to a high precision. It does this using input from sensors placed around the room. The computer is processing information about the room’s temperature. Anthropomorphizing a little, one could say it has evidence of the room’s temperature; evidence it received from the sensors.
Now suppose there’s another identical computer somewhere else running the same software. Instead of receiving inputs from temperature sensors, however, it is receiving inputs from a bored teenager randomly twiddling a dial. By a weird coincidence, the inputs are exactly the same as the ones on your computer, to the point that the physical states of the two computers are identical throughout the processes.
Do you want to say the teenager’s computer also has evidence of the room’s temperature? I hope not. Would your answer be different if the computers were sophisticated enough to have phenomenal experience?
As for your example, the criterion of ontological identity you offer seems overly strict. I don’t think failing to eat the sandwich would have turned you into a different person, such that my duplicate’s beliefs would have been about something else. But this does seem like a largely semantic matter. Let’s say I accept your criterion of ontological identity. In that case, yes, my duplicate and I will be (slightly) evidentially distinguishable. This doesn’t seem like that big of a bullet to bite.
Let’s say I accept your criterion of ontological identity. In that case, yes, my duplicate and I will be (slightly) evidentially distinguishable.
But they have no information about what I actually ate for breakfast! What is the “evidence” that allows them to be distinguished?
The term “evidentially distinguishable” is not ideal, because it risks conflating whether you have evidence now with whether you could obtain evidence in the future. You and your duplicate might somehow gain evidence, one day, regarding what I had for breakfast; but in the present, you do not possess such evidence.
This whole line of thought arises from a failure to distinguish clearly between a thing, and your concept of the thing, and the different roles they play in belief. Concepts are in the head, things are not, and your knowledge is a lot less than you think it is.
But they have no information about what I actually ate for breakfast! What is the “evidence” that allows them to be distinguished?
I have evidence that Mitchell1 thinks there are problems with the MWI. My duplicate has evidence that Mitchell2 thinks there are problems with the MWI. Mitchell1 and Mitchell2 are not identical, so my duplicate and I have different pieces of evidence. Of course, in this case, neither of us knows (or even believes) that we have different pieces of evidence, but that is compatible with us in fact having different evidence. In the Boltzmann brain case, however, I actually know that I have evidence that my Boltzmann brain duplicate does not, so the evidential distinguishability is even more stark.
This whole line of thought arises from a failure to distinguish clearly between a thing, and your concept of the thing, and the different roles they play in belief. Concepts are in the head, things are not, and your knowledge is a lot less than you think it is.
I don’t think I’m failing to distinguish between these. Our mental representations involve concepts, but they are not (generally) representations of concepts. My beliefs about Obama involve my concept of Obama, but they are not (in general) about my concept of Obama. They are about Obama, the actual person in the external world. When I talk of the content of a representation, I’m not talking about what the representation is built out of, I’m talking about what the representation is about. Also, I’m pretty sure you are using the word “knowledge” in an extremely non-standard way (see my comment below).
Do you want to say the teenager’s computer also has evidence of the room’s temperature?
Yes. It has not proven that its input is not connected to sensors in that room. There is a finite prior probability that it is. As such, its output is more likely given that the room is at that temperature.
We could set up the thought experiment so that it’s extraordinarily unlikely that the teenager’s computer is receiving input from the sensors. It could be outside the light cone, say. This might still leave a finite prior probability of this possibility, but it’s low enough that even the favorable likelihood ratio of the subsequent evidence is insufficient to raise the hypothesis to serious consideration.
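To make this concrete, here is a toy Bayesian update (the prior and likelihood ratio below are invented for illustration, not drawn from the scenario): even a strongly favorable likelihood ratio leaves an astronomically small prior negligible.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# All numbers below are illustrative assumptions.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after updating on evidence with the given likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior that the teenager's computer is actually wired to the room's
# sensors, e.g. because the room is outside the light cone:
prior = 1e-30

# Suppose the matching inputs are a million times likelier under the
# "connected to the sensors" hypothesis:
likelihood_ratio = 1e6

print(posterior(prior, likelihood_ratio))  # ~1e-24: nowhere near serious consideration
```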
In any case, the analog of your argument in the Boltzmann brain case is that there might be some mechanism by which the brain is actually getting information about Obama, and its belief states are appropriately caused by that information. I agree that if this were the case then the Boltzmann brain would in fact have beliefs about Obama. But the whole point of the Boltzmann brain hypothesis is that its brain state is the product of a random fluctuation, not of coherent information from a distant planet. So the hypothesis itself builds in the analog of the assumption that the teenager’s computer is causally disconnected from the temperature sensors.
Do you agree that if the teenager’s computer were not receiving input from the sensors, it would be inaccurate to say it has evidence about the room’s temperature?
If the computer doesn’t know it’s outside of the light cone, that’s irrelevant. The room may not even exist, but as long as the computer doesn’t know that, it can’t eliminate the possibility that its input comes from that room.
but it’s low enough that even the favorable likelihood ratio of the subsequent evidence is insufficient to raise the hypothesis to serious consideration.
The probability of it being that specific room is far too low to be raised to serious consideration. That said, the computer’s utility function is such that that room, or anything even vaguely similar to it, will matter just about as much.
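A toy sketch of this decision-theoretic point, with invented counts and weights (nothing here is from the scenario): if the utility function weights the many “vaguely similar” rooms nearly as heavily as the intended one, those rooms dominate the expected utility in aggregate, even though each is individually improbable.

```python
# Hypothetical expected-utility comparison. Each candidate room is
# wildly improbable, but the similar rooms matter in aggregate.

n_similar_rooms = 10**9   # invented count of "vaguely similar" rooms
p_each_room = 1e-12       # invented probability that the input tracks any given room
u_specific = 1.0          # utility weight of the intended room
u_similar = 0.9           # utility weight of a vaguely similar room

eu_specific = p_each_room * u_specific
eu_similar_total = n_similar_rooms * p_each_room * u_similar

print(eu_specific)        # 1e-12: negligible on its own
print(eu_similar_total)   # 9e-04: the similar rooms carry almost all the weight
```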
Do you agree that if the teenager’s computer were not receiving input from the sensors, it would be inaccurate to say it has evidence about the room’s temperature?
Only if the computer knows it’s not receiving input from the sensors.
It has no evidence of the room’s temperature conditional on not receiving input from the sensors, but it does have evidence of the room’s temperature conditional on receiving input from the sensors. Since the probability that it’s receiving input from the sensors is finite (in fact it isn’t receiving such input, but it doesn’t know that), it ends up with some evidence of the room’s temperature.
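A sketch of that marginalization with made-up numbers (all the probabilities are assumptions chosen for illustration): so long as the computer assigns any nonzero probability to being connected, the observation shifts its posterior on the room’s temperature, however slightly.

```python
# Marginalize over whether the computer's input comes from the sensors.
# All probabilities are invented; only the structure matters.

p_connected = 1e-6   # computer's prior that its input is the room's sensors
p_hot = 0.5          # prior that the room is at the indicated temperature

# Likelihood of the observed readout: if connected, the readout tracks
# the room; if not (teenager's dial), it's independent of the room.
p_obs_given_hot = p_connected * 0.99 + (1 - p_connected) * 0.01
p_obs_given_not_hot = p_connected * 0.01 + (1 - p_connected) * 0.01

p_obs = p_hot * p_obs_given_hot + (1 - p_hot) * p_obs_given_not_hot
p_hot_given_obs = p_hot * p_obs_given_hot / p_obs

print(p_hot_given_obs)  # ~0.500024: a sliver of evidence, not zero
```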