But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain’t just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain’s environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him? And if we have different beliefs, then I can know things the brain doesn’t know. Which means I can have evidence the brain doesn’t have. Subjective indistinguishability does not entail evidential indistinguishability.
No. A Boltzmann brain can’t have correct beliefs about Obama, but it may very well have its neurons (or whatever) arranged in what looks to it like beliefs about Obama.
I once ran across OP’s argument as an illustration of the Twin Earth example applied to the simulation/brain-in-a-vat argument: “you can’t be a brain in a vat because your beliefs refer to something outside yourself!” My reaction was, how do you know what beliefs-outside-your-head feel like as compared to the fake vat alternative? If there is no subjective difference, then it does no epistemological work.
It was Putnam who first proposed refuting the brain-in-a-vat hypothesis via semantic externalism, in this paper. The money quote:
By what was just said, when the brain in a vat (in the world where every sentient being is and always was a brain in a vat) thinks ‘There is a tree in front of me’, his thought does not refer to actual trees. On some theories that we shall discuss it might refer to trees in the image, or to the electronic impulses that cause tree experiences, or to the features of the program that are responsible for those electronic impulses. These theories are not ruled out by what was just said, for there is a close causal connection between the use of the word ‘tree’ in vat-English and the presence of trees in the image, the presence of electronic impulses of a certain kind, and the presence of certain features in the machine’s program. On these theories the brain is right, not wrong in thinking ‘There is a tree in front of me.’ Given what ‘tree’ refers to in vat-English and what ‘in front of’ refers to, assuming one of these theories is correct, then the truth conditions for ‘There is a tree in front of me’ when it occurs in vat-English are simply that a tree in the image be ‘in front of’ the ‘me’ in question — in the image — or, perhaps, that the kind of electronic impulse that normally produces this experience be coming from the automatic machinery, or, perhaps, that the feature of the machinery that is supposed to produce the ‘tree in front of one’ experience be operating. And these truth conditions are certainly fulfilled.
By the same argument, ‘vat’ refers to vats in the image in vat-English, or something related (electronic impulses or program features), but certainly not to real vats, since the use of ‘vat’ in vat-English has no causal connection to real vats (apart from the connection that the brains in a vat wouldn’t be able to use the word ‘vat’, if it were not for the presence of one particular vat — the vat they are in; but this connection obtains between the use of every word in vat-English and that one particular vat; it is not a special connection between the use of the particular word ‘vat’ and vats). Similarly, ‘nutrient fluid’ refers to a liquid in the image in vat-English, or something related (electronic impulses or program features). It follows that if their ‘possible world’ is really the actual one, and we are really the brains in a vat, then what we now mean by ‘we are brains in a vat’ is that we are brains in a vat in the image or something of that kind (if we mean any thing at all). But part of the hypothesis that we are brains in a vat is that we aren’t brains in a vat in the image (i.e. what we are ‘hallucinating’ isn’t that we are brains in a vat). So, if we are brains in a vat, then the sentence ‘We are brains in a vat’ says something false (if it says anything). In short, if we are brains in a vat, then ‘We are brains in a vat’ is false. So it is (necessarily) false.
And a nice counterargument from Nagel’s The View From Nowhere:
If I accept the argument, I must conclude that a brain in a vat can’t think truly that it is a brain in a vat, even though others can think this about it. What follows? Only that I can’t express my skepticism by saying “Perhaps I’m a brain in a vat.” Instead I must say: “Perhaps I can’t even think the truth about what I am, because I lack the necessary concepts and my circumstances make it impossible for me to acquire them!” If this doesn’t qualify as skepticism, I don’t know what does.
Our teacher always made us read the original papers, so this must be it.
How could it know its beliefs look like they are about Obama? How does it even know who Obama is?
Why do you think you know who Obama is? Because your neurons are arranged with information that refers to some Obama character. From the inside, you think “Obama” and images of a nice black man in a suit saying things about change play through your mind. The point of the Boltzmann brain is that it is arranged to have the same instantaneous thoughts as you.
That’s not all there is to my belief that I know who Obama is. The arrangement of neurons in my brain is just syntax. Syntax doesn’t come pre-equipped with semantic content. The semantics of my belief—the fact that it’s a belief about Obama, for instance—comes from causal interactions between my brain and the external world. Causal interactions that the Boltzmann brain has not had. The particular pattern of neuronal activation (or set of such patterns) that instantiates my concept of Obama corresponds to a concept of Obama because it is appropriately correlated with the physical object Barack Obama. The whole point of semantic externalism is that the semantic content of our mental representations isn’t just reducible to how they feel from the inside.
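To put the same point as a minimal toy sketch (the names and the content function here are purely illustrative, not anyone’s worked-out theory of reference): the Boltzmann brain and I can share the very same internal state while only one of us gets a referent, because the referent is supplied by the causal-history argument, not by the state alone.

```python
# Toy model of the externalist picture above (illustrative names only):
# semantic content is fixed jointly by the internal "syntax" and its
# external causal history, so two physically identical states can
# differ in what they are about.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class BrainState:
    pattern: str  # a neuronal activation pattern: pure syntax


@dataclass(frozen=True)
class CausalHistory:
    entangled_with: Optional[str]  # worldly object the pattern is entangled with, if any


def content(state: BrainState, history: CausalHistory) -> Optional[str]:
    # The state alone underdetermines content; the history argument does the work.
    return history.entangled_with


mine = BrainState("obama-pattern")
boltzmann = BrainState("obama-pattern")  # atom-for-atom identical syntax

print(content(mine, CausalHistory("Barack Obama")))  # 'Barack Obama': a belief about Obama
print(content(boltzmann, CausalHistory(None)))       # None: same syntax, no semantics
```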
Just to make sure I understand your claim… a question.
My brain has a set of things that, in normal conversation, I would describe as beliefs about the shoes I’m wearing. For convenience, I will call that set of things B. I am NOT claiming that these things are actually beliefs about those shoes, although they might be.
Suppose B contains two things, B1 and B2 (among others). Suppose B1 derives from causal interactions with, and is correlated with, the shoes I’m wearing. For example, if we suppose my shoes are brown, B1 might be the thing that underlies my sincerely asserting that my shoes are brown.
Suppose B2 is not correlated with the shoes I’m wearing. For example, B2 might be the thing that underlies my sincerely asserting that my shoes are made of lithium.
If I’m understanding you correctly, you would say that B1 is a belief about my shoes. I’m moderately confident that you would also say that B2 is a belief about my shoes, albeit a false one. (Confirm/deny?)
Supposing that’s right, consider now some other brain that, by utter coincidence, is identical to mine, but has never in fact interacted with any shoes in any way. That brain necessarily has C1 and C2 that correspond to B1 and B2. But if I’m understanding you correctly, you would say that neither C1 nor C2 are beliefs about shoes. (Confirm/deny?)
Supposing I’ve followed you so far, what would you call C1 and C2?
“Correlation” was a somewhat misleading word for me to use. The sense in which I meant it is that there’s some sort of causal entanglement (to use Eliezer’s preferred term) between the neuronal pattern and an object in the world. That entanglement exists for both B1 and B2. B2 is still a belief about my shoes. It involves the concept of my brown shoes, a concept I developed through causal interaction with those shoes. So both B1 and B2 have semantic content related to my shoes. B2 says false things about my shoes and B1 says true things, but they both say things about my shoes.
C1 and C2 are not beliefs about my shoes. There is no entanglement between those brain states and my shoes. What I would call C1 and C2 depends on the circumstances in which they arose. Say they arose through interaction with extremely compelling virtual reality simulations of shoes that look like mine. Then I’d say they were beliefs about those virtual shoes. Suppose they arose randomly, without any sort of appropriate causal entanglement with macroscopic objects. Then I’d say they were brain states of the sort that could instantiate beliefs, but weren’t actually beliefs due to lack of content.
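A compact way to put that classification (again just a toy sketch; the origin labels are made up for illustration) is as a dispatch on how the state arose:

```python
# Toy sketch of the classification above: the very same brain state counts
# as a belief about my shoes, a belief about virtual shoes, or no belief
# at all, depending on how it arose. The origin labels are illustrative.
from typing import Optional


def referent_given_origin(origin: str) -> Optional[str]:
    if origin == "causal interaction with my actual shoes":
        return "my shoes"            # B1/B2: beliefs about the shoes themselves
    if origin == "causal interaction with a VR simulation of my shoes":
        return "the virtual shoes"   # beliefs, but about the simulated objects
    if origin == "random fluctuation, no appropriate entanglement":
        return None                  # a belief-shaped state with no content
    raise ValueError(f"unclassified origin: {origin!r}")


for origin in ("causal interaction with my actual shoes",
               "causal interaction with a VR simulation of my shoes",
               "random fluctuation, no appropriate entanglement"):
    print(origin, "->", referent_given_origin(origin))
```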
Cool, thanks for the clarification.
Two things.
First, and somewhat tangentially: are you sure you want to stand by that claim about simulations of shoes? It seems to me that if I create VR simulations of your shoes, those simulations are causally entangled (to use the same term you’re using) with your shoes, in which case C1 and C2 are similarly entangled with your shoes. No?
Second, and unrelatedly: OK, let’s suppose C1 and C2 arise randomly. I agree that they are brain states, and I agree that they could instantiate beliefs.
Now, consider brain states C3 and C4, which similarly correspond to my actual brain’s beliefs B3 and B4, which are about my white socks in the same sense that B1 and B2 are about my brown shoes. C3 and C4 are also, on your model, brain states of the sort that could instantiate beliefs, but aren’t in fact beliefs. (Yes?)
Now, we’ve agreed that B1 and B2 are beliefs about brown shoes. Call my belief that they are B5. Similarly, B6 is my belief that B3 and B4 are beliefs about white socks. And it seems to follow from what we’ve said so far that brain states C5 and C6 exist, which have similar relationships to C1-C4.
If I understand you, then C5 and C6 are beliefs on your model, since they are causally entangled with their referents (C1-C4). (They are false, since C1 and C2 are not in fact beliefs about brown shoes, but we’ve already established that this is beside the point; B2 is false as well, but is nevertheless a belief.)
Yes?
If I’ve followed you correctly so far, my question: should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren’t)? For example, would it somehow know that C1-C4 aren’t beliefs, but C5-C6 are?
I’m not sure I’d call C5 and C6 full-fledged beliefs. There is still content missing. C5, as you characterized it, is the brain state in the Boltzmann brain (BB) identical to my B5. B5 says “B1 and B2 are beliefs about brown shoes.” Now B5 gets its content partially through entanglement with B1 and B2. That part holds for C5 as well. But part of the content of B5 involves brown shoes (the “… about brown shoes” part), actual objects in the external world. The corresponding entanglement is lacking for C5.
If you change B5 to “B1 and B2 are beliefs”, then I think I’d agree that C5 is also a belief, a false belief that says “C1 and C2 are beliefs.” Of course this is complicated by the fact that we don’t actually have internal access to our brain states. I can refer to my brain states indirectly, as “the brain state instantiating my belief that Obama is President”, for instance. But this reference relies on my ability to refer to my beliefs, which in turn relies on the existence of those beliefs. And the lower-order beliefs don’t exist for the BB, so it cannot refer to its brain states in this way. Maybe there is some other way one could make sense of the BB having internal referential access to its brain states, but I’m skeptical. Still, let me grant this assumption in order to answer your final questions.
should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren’t)?
Not really, apart from the usual distinctions between the way we interact with higher-order and lower-order belief states.
For example, would it somehow know that C1-C4 aren’t beliefs, but C5-C6 are?
No.
OK, cool. I think I now understand the claim you’re making… thanks for taking the time to clarify.