Just to make sure I understand your claim… a question.
My brain has a set of things that, in normal conversation, I would describe as beliefs about the shoes I’m wearing. For convenience, I will call that set of things B. I am NOT claiming that these things are actually beliefs about those shoes, although they might be.
Suppose B contains two things, B1 and B2 (among others). Suppose B1 derives from causal interactions with, and is correlated with, the shoes I’m wearing. For example, if we suppose my shoes are brown, B1 might be the thing that underlies my sincerely asserting that my shoes are brown.
Suppose B2 is not correlated with the shoes I’m wearing. For example, B2 might be the thing that underlies my sincerely asserting that my shoes are made of lithium.
If I’m understanding you correctly, you would say that B1 is a belief about my shoes. I’m moderately confident that you would also say that B2 is a belief about my shoes, albeit a false one. (Confirm/deny?)
Supposing that’s right, consider now some other brain that, by utter coincidence, is identical to mine, but has never in fact interacted with any shoes in any way. That brain necessarily has C1 and C2 that correspond to B1 and B2. But if I’m understanding you correctly, you would say that neither C1 nor C2 is a belief about shoes. (Confirm/deny?)
Supposing I’ve followed you so far, what would you call C1 and C2?
“Correlation” was a somewhat misleading word for me to use. The sense in which I meant it is that there’s some sort of causal entanglement (to use Eliezer’s preferred term) between the neuronal pattern and an object in the world. That entanglement exists for both B1 and B2. B2 is still a belief about my shoes. It involves the concept of my brown shoes, a concept I developed through causal interaction with those shoes. So both B1 and B2 have semantic content related to my shoes. B2 says false things about my shoes and B1 says true things, but they both say things about my shoes.
C1 and C2 are not beliefs about my shoes. There is no entanglement between those brain states and my shoes. What I would call C1 and C2 depends on the circumstances in which they arose. Say they arose through interaction with extremely compelling virtual reality simulations of shoes that look like mine. Then I’d say they were beliefs about those virtual shoes. Suppose they arose randomly, without any sort of appropriate causal entanglement with macroscopic objects. Then I’d say they were brain states of the sort that could instantiate beliefs, but weren’t actually beliefs due to lack of content.
Cool, thanks for the clarification.

Two things.

First, and somewhat tangentially: are you sure you want to stand by that claim about simulations of shoes? It seems to me that if I create VR simulations of your shoes, those simulations are causally entangled (to use the same term you’re using) with your shoes, in which case C1 and C2 are similarly entangled with your shoes. No?
Second, and unrelatedly: OK, let’s suppose C1 and C2 arise randomly. I agree that they are brain states, and I agree that they could instantiate beliefs.
Now, consider brain states C3 and C4, which similarly correspond to my actual brain’s beliefs B3 and B4, which are about my white socks in the same sense that B1 and B2 are about my brown shoes. C3 and C4 are also, on your model, brain states of the sort that could instantiate beliefs, but aren’t in fact beliefs. (Yes?)
Now, we’ve agreed that B1 and B2 are beliefs about brown shoes. Call my belief that this is so (the belief that B1 and B2 are beliefs about brown shoes) B5. Similarly, B6 is my belief that B3 and B4 are beliefs about white socks. And it seems to follow from what we’ve said so far that brain states C5 and C6 exist, which bear similar relationships to C1-C4.
If I understand you, then C5 and C6 are beliefs on your model, since they are causally entangled with their referents (C1-C4). (They are false, since C1 and C2 are not in fact beliefs about brown shoes, but we’ve already established that this is beside the point; B2 is false as well, but is nevertheless a belief.)
Yes?
If I’ve followed you correctly so far, my question: should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren’t)? For example, would it somehow know that C1-C4 aren’t beliefs, but C5-C6 are?
I’m not sure I’d call C5 and C6 full-fledged beliefs. There is still content missing. C5, as you characterized it, is the brain state in the BB (the coincidentally identical brain) that is identical to my B5. B5 says “B1 and B2 are beliefs about brown shoes.” Now B5 gets its content partially through entanglement with B1 and B2. That part holds for C5 as well. But part of the content of B5 involves brown shoes (the “… about brown shoes” part), actual objects in the external world. The corresponding entanglement is lacking for C5.
If you change B5 to “B1 and B2 are beliefs”, then I think I’d agree that C5 is also a belief, a false belief that says “C1 and C2 are beliefs.” Of course this is complicated by the fact that we don’t actually have internal access to our brain states. I can refer to my brain states indirectly, as “the brain state instantiating my belief that Obama is President”, for instance. But this reference relies on my ability to refer to my beliefs, which in turn relies on the existence of those beliefs. And the lower-order beliefs don’t exist for the BB, so it cannot refer to its brain states in this way. Maybe there is some other way one could make sense of the BB having internal referential access to its brain states, but I’m skeptical. Still, let me grant this assumption in order to answer your final questions.
should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren’t)?
Not really, apart from the usual distinctions between the way we interact with higher-order and lower-order belief states.
For example, would it somehow know that C1-C4 aren’t beliefs, but C5-C6 are?
No.
OK, cool. I think I now understand the claim you’re making… thanks for taking the time to clarify.