Essentially you are saying that Q1=S1. This is certainly not true.
Clearly Q1 and S1 are related. If we could make a large contiguous chunk of Q1 vanish, we might see a chunk of the squirrel disappear in S1; so they have some time-space context in common.
But Q1 describes a system of quarks and S1 describes a system of a squirrel and a nut. They are represented in different “languages”; to compare them you must convert them to a common “language”. The relationship between Q1 and S1 is this process of language conversion—it is the layered process of interactions and interpretations that result in S1, for some context that includes Q1.
The process that generates S1 -- in part from observations ultimately derived from Q1 -- includes the recognition of squirrels and nuts; and that part of the process occurs within the human mind.
But I could also, equally accurately, describe it as a particular configuration, C1, of cells. Or a particular configuration, A1, of atoms. Or a particular configuration, Q1, of quarks.
No. In general you are not guaranteed “equally accurate” descriptions when you convert from one language to another, from one perspective to another, from one domain abstraction to another. For example, the fraction 1⁄9 is exact, but its decimal representation limited to three decimal places, 0.111, is only approximate.
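A minimal sketch in Python makes the loss concrete (the code and names are mine, purely for illustration): the exact rational form of 1⁄9 survives a round trip, while the three-place decimal form does not.

```python
from fractions import Fraction

exact = Fraction(1, 9)            # 1/9 in the "language" of exact rationals
approx = round(float(exact), 3)   # the same value converted to a 3-place decimal

print(exact * 9 == 1)   # True  -- the exact representation round-trips
print(approx * 9 == 1)  # False -- roughly 0.999; the conversion lost information
```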
Q1 is, in fact, a description of a squirrel eating a nut
I addressed this above. Q1 is a system of quarks that is part of the context that led to S1; it is not S1.
That I am using a human-level description to refer to it does not make it somehow an exclusively human-level as opposed to quark-level system, any more than the fact that I’m using an English-language description to refer to it makes it an English-language-level system.
For the purpose of efficient communication, mixing perspectives in this way is generally fine. To answer certain questions on existence and meaning—for example, to determine whether arithmetic has an existence that is independent of humans and our artifacts—we need to be more careful.
You seem to be failing to attend here to the difference between descriptions and the systems they describe.
I’m not saying Q1=S1. That’s a category error; Q1 is a description of S1. The map is not the territory.
I am saying that Q1 and “a squirrel eating a nut” are two different descriptions of the same system, and that although “a squirrel eating a nut” depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.
Agreed that there are gains and losses in going from one form of representation to another. But the claim “‘a squirrel eating a nut’ is a description of that system over there” is just as accurate as the claim “Q1 is a description of that system over there.” So I stand by the statement that I can as accurately make one claim as the other.
The map is not the territory.
…
I am saying that Q1 and “a squirrel eating a nut” are two different descriptions of the same system...
The map and territory perspective is effective when pointing out that the map is not the territory. A map of Texas is not Texas. However, it would be wrong to conclude that a road map of Texas describes the same territory as an elevation map of Texas. Although both maps have a similar geographic constraint, they are not based on the same source data. They do not describe the same territory.
Consider this case. We show a picture E (evidence) to Frank and Glen. Frank’s response is “cat”. Glen’s response is “cute”.
By your prior statements I assume that you would say that “cat” and “cute” are both accurate descriptions of E, the picture.
Then Frank says “No, Glen is wrong—that funny-looking cat is ugly!”
Glen responds, “No, Frank is wrong—that is a small fluffy dog!”
This conflict is caused by a false belief: not a false belief about E, but a false belief about what “cat” and “cute” actually describe.
Frank’s response “cat” describes F(E) -- Frank’s interpretation of the evidence. Glen’s response “cute” describes G(E) -- Glen’s interpretation of the evidence. Both statements are correct in that they are reasonable expressions of personal belief. From this perspective there is no conflict.
It is wrong to arbitrarily split out E and claim that any high-level interpretation describes it.
Let’s say that Frank and Glen talk, and then they both conclude that E is a picture of a “cute dog”. Are they now describing E? No—and they are still not describing the same thing. When Frank says “cute dog” he is thinking about how he finds small dogs cute. When Glen says “cute dog” he is thinking about how he finds fluffy animals cute. So even though they have both encoded their conclusion to the same phrase “cute dog”, they do not mean the same thing.
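To make that encoding point concrete, here is a toy Python sketch (the dictionaries and the utter() function are hypothetical, invented for illustration): two different internal interpretations project onto the same phrase, so matching utterances do not imply matching meanings.

```python
# Frank's and Glen's internal interpretations are different objects...
frank = {"kind": "dog", "cute_because": "it is small"}
glen = {"kind": "dog", "cute_because": "it is a fluffy animal"}

def utter(interpretation):
    """Project a rich internal interpretation down to a short phrase."""
    return f"cute {interpretation['kind']}"

# ...yet they encode to the identical utterance.
print(utter(frank) == utter(glen))  # True  -- same phrase, "cute dog"
print(frank == glen)                # False -- different meanings behind it
```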
Back to squirrels and quarks.
The chain of inference that leads to Q1 and the chain that leads to “a squirrel eating a nut” are different, even if at some level they share similar time-space constraints. Therefore Q1 and “a squirrel eating a nut” are not two different descriptions of the same system—they are different descriptions of different systems.
I know that this perspective violates our common understanding of the world, but it is our understanding that is wrong.
although “a squirrel eating a nut” depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.
We seem to agree that some stuff doesn’t need the human mind to exist—but perhaps we disagree on how to carve the world into what does and what doesn’t.
For clarity on this problem, let’s formalize it a bit:
Let S1 refer to the description “a squirrel eating a nut”.
Let Z refer to the system that S1 describes.
You claim that Z does not depend on a human mind to generate it; however, Z necessarily includes portions of the human mind and body, including intermediate mind-generated meanings. This human body/mind portion is everything from the moment that photons start entering the eye to the point where we come to the conclusion “hey, that’s a squirrel eating a nut”. So Z does depend in part on a human mind.
To deal with this, let’s split Z into two parts:
Let Ze refer to the part of Z that is entirely outside of the human body—the environment.
Let Zh refer to the rest of Z—the part that occurs within the human body.
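In symbols, assuming the split is meant to be exhaustive and non-overlapping:

```latex
Z = Z_e \cup Z_h, \qquad Z_e \cap Z_h = \emptyset
```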
Also, existence requires context. There are reasonable normative contexts that we could assume for this case, but let’s be specific:
Let R refer to the physical reality of the universe (whatever that is).
From this perspective I think that we can agree that both Ze and Zh exist within R, and that the existence of Ze within R does not depend in any way on the processing that occurs within Zh. For that matter, the existence of Zh within R doesn’t depend on the processing that occurs within Zh.
I’m not really following your overall line of reasoning, so here are a few responses to specific points:
Agreed that F(E) = “an ugly funny-looking cat” and G(E) = “a cute small fluffy dog” are both descriptions of E.
Not agreed that they are accurate descriptions. E is neither a cat nor a dog; E is a picture.
Agreed that to claim that F(E), or G(E), or any other “high-level interpretation” of E, fully describes E, is simply false. But I would say that F(E) and G(E) are (incomplete) descriptions of E. I understand that we disagree on this point.
I’m not at all sure what you mean by “arbitrarily splitting out E” in this example.
Agreed that if F2(E) = “a picture of a cute-by-virtue-of-being-small dog” and G2(E) = “a picture of a cute-by-virtue-of-being-a-fluffy-animal dog”, then F2(E) != G2(E) -- that is, Frank and Glen don’t actually agree. It helps not to confuse their internal descriptions (F2 and G2), which are different, with their utterances (“E is a picture of a cute dog”), which are the same.
So, agreed that they “do not mean the same thing”—that is, their descriptions are not identical. But, again, I say that they are describing the same thing (E), although their descriptions (F2(E) and G2(E)) are different. Again, I understand that we disagree on this point.
I agree that the chain of inference that leads to formulating Q1 and the chain that leads to formulating “a squirrel eating a nut” are different. I don’t see how it follows that “they are [..] descriptions of different systems.”
Let S1 refer to the description “a squirrel eating a nut”. Let Z refer to the system that S1 describes.
OK, though I want to point out explicitly that S1 now refers to something different from what S1 previously referred to in this discussion.
I don’t think Z necessarily includes portions of the human mind and body, including intermediate mind generated meanings. But I agree that Z can include that and still be described by S1. And I agree that Z as you’ve defined it depends on a human mind.
But you seem to be asserting that (the old value of) S1 is the same system that Z is, and I disagree with that. (Old) S1 doesn’t include any photons or human eyes or human conclusions, and Z does.
I agree that Ze and Zh exist within R (although I don’t see how that expresses anything different than saying that Ze and Zh exist), and that Ze doesn’t depend on Zh. I also agree that the existence of Zh doesn’t depend on the specific processing performed by Zh, probably, though if we wanted to build on that statement it would likely be worthwhile to phrase it in a less confusing way.