There exists a relationship between how many nuts squirrel B eats, and how many times squirrel A deposited a nut in the tree.
That relationship does not depend on my observations.
“1+1+1+1=4” is a statement of arithmetic that expresses one aspect of that relationship; specifically, the aspect of it related to counting.
“1+1+1=3” is a different statement of arithmetic that expresses the same aspect of a different relationship, one that could be implemented in a different story, and likely was.
“1000+1000+1000=3000” is yet another statement of arithmetic that expresses the same aspect of a different relationship, one that has probably never been implemented in terms of nuts and squirrels, although in principle it could be.
“1+1+1=4” expresses the same aspect of yet another relationship, one which probably has never been implemented that way, and which probably can’t be.
And there are other kinds of relationships, implementable and otherwise, which can be expressed by other kinds of statements of mathematics.
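The counting relationships the statements above point at can be sketched in code. This is a toy illustration of my own (the function name and numbers are just for this sketch, not anything from the squirrel story itself): each deposit adds one nut, and the tally of deposits is exactly the "1+1+…" of the statements.

```python
def nuts_available(deposits):
    """Tally nuts one deposit at a time -- the '1+1+...' of the statements."""
    total = 0
    for _ in range(deposits):
        total += 1  # one nut per deposit
    return total

print(nuts_available(4))         # 1+1+1+1 -> 4
print(nuts_available(3))         # 1+1+1 -> 3
print(nuts_available(1000) * 3)  # three tallies of 1000 -> 3000
```

Whether or not one calls this "arithmetic", the relationship it implements holds however the tally is performed.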
None of those relationships depend on my observations, either. And you say that none of those relationships are arithmetic relationships, precisely because they don’t involve us interpreting our observations.
For convenience, let’s call them X, instead. You aren’t denying the existence of X, merely asserting that X isn’t arithmetic.
Well, OK. I’m not sure what I would expect to experience differently if those relationships were or weren’t arithmetic, so I don’t know how to evaluate the truth or falsehood of that statement.
But I will say that if that’s true, then arithmetic isn’t very interesting, except perhaps linguistically. Sure, maybe arithmetic only occurs in minds, or in human minds, or in English-speaking minds. I can’t see why I ought to care much about that.
Thanks for sticking with this, I am trying to hone my arguments on this topic and you are helping.
There exists a relationship between how many nuts squirrel B eats, and how many times squirrel A deposited a nut in the tree.
That relationship does not depend on my observations.
Yes it does.
You are implying that there is some sense of reality that is independent of how we think about it. I agree with that. But your statement adopts a “human mind” centric interpretation which makes it false.
For example, from the perspective of the universe at the level of quarks, the reality within the story’s space-time is unchanged by our later observations of the written story. It is independent of our observations.
However, the relationship that you identified has no meaning from the quark perspective. We wouldn’t know if a squirrel ate a nut or if a nut ate a squirrel. At that level, there are no concepts for squirrels and nuts—or counting; those are higher level abstractions.
For convenience, let’s call them X, instead. You aren’t denying the existence of X, merely asserting that X isn’t arithmetic.
The relationship you identified is real and it has meaning; but that meaning is found within the context of your mind and does not describe some intrinsic property of the universe, it describes an interpretation of your observations.
But I will say that if that’s true, then arithmetic isn’t very interesting, except perhaps linguistically. Sure, maybe arithmetic only occurs in minds, or in human minds, or in English-speaking minds. I can’t see why I ought to care much about that.
The interesting thing is X.
Here is why you should care:
Here at LW we are working toward rationality. We want to improve the correspondence between our map and the territory. We want to know what the truth is and how to carve reality at its joints. We want to make ourselves immune to obvious fallacies such as the mind projection fallacy.
My claim is that the context principle—that all meaning is context dependent—is essential to understanding existence, truth and knowledge; it provides traction for solving problems and for achieving our goals.
Consider a particular system, S1, of a squirrel eating a nut.
S1 can be described in a lot of different ways. The way I just described it is, I agree with you, a human-mind-centric description.
But I could also, equally accurately, describe it as a particular configuration, C1, of cells. Or a particular configuration, A1, of atoms. Or a particular configuration, Q1, of quarks.
Those aren’t particularly human-mind-centric descriptions, but they nevertheless describe the same system. Q1 is, in fact, a description of a squirrel eating a nut, even though there’s no way I could tell from analyzing Q1 whether it describes a squirrel eating a nut, or a nut eating a squirrel, or a bushel of oranges.
That I am using a human-level description to refer to it does not make it somehow an exclusively human-level as opposed to quark-level system, any more than the fact that I’m using an English-language description to refer to it makes it an English-language-level system.
And Q1 continues to be a quark-level description of a system comprising a squirrel eating a nut even if nobody observes it.
Essentially you are saying that Q1=S1. This is certainly not true.
Clearly Q1 and S1 are related. If we could vanish a large contiguous chunk of Q1, we might see a chunk of squirrel disappear in S1; so they have some time-space context in common.
But Q1 describes a system of quarks and S1 describes a system of a squirrel and a nut. They are represented in different “languages”; to compare them you must convert them to a common “language”. The relationship between Q1 and S1 is this process of language conversion—it is the layered process of interactions and interpretations that result in S1, for some context that includes Q1.
The process that generates S1 -- in part from observations ultimately derived from Q1 -- includes the recognition of squirrels and nuts; and that part of the process occurs within the human mind.
But I could also, equally accurately, describe it as a particular configuration, C1, of cells. Or a particular configuration, A1, of atoms. Or a particular configuration, Q1, of quarks.
No. In general you are not guaranteed “equally accurate” descriptions when you convert from one language to another, from one perspective to another, from one domain abstraction to another. For example the fraction 1⁄9 is exact, but its decimal representation limited to three decimal places, 0.111, is only approximate.
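The 1⁄9 example can be made concrete with a short sketch (my own illustration, using Python's standard `fractions` module): the value is exact in the language of fractions, and converting it to a three-decimal representation loses information.

```python
from fractions import Fraction

exact = Fraction(1, 9)           # exact in the language of fractions
approx = round(float(exact), 3)  # the same value, truncated to 3 decimals

print(exact)                  # 1/9
print(approx)                 # 0.111
print(float(exact) - approx)  # a nonzero residue: the conversion lost information
```

The two representations are related, but they are not equally accurate descriptions of the same quantity.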
Q1 is, in fact, a description of a squirrel eating a nut
I addressed this above. Q1 is a system of quarks that is part of the context that led to S1, it is not S1.
That I am using a human-level description to refer to it does not make it somehow an exclusively human-level as opposed to quark-level system, any more than the fact that I’m using an English-language description to refer to it makes it an English-language-level system.
For the purpose of efficient communication mixing perspectives in this way is generally fine. To answer certain questions on existence and meaning—for example to identify if arithmetic has an existence that is independent of humans and our artifacts—we need to be more careful.
You seem to be failing to attend here to the difference between descriptions and the systems they describe.
I’m not saying Q1=S1. That’s a category error; Q1 is a description of S1. The map is not the territory.
I am saying that Q1 and “a squirrel eating a nut” are two different descriptions of the same system, and that although “a squirrel eating a nut” depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.
Agreed that there are gains and losses in going from one form of representation to another. But the claim “‘a squirrel eating a nut’ is a description of that system over there” is just as accurate as the claim “Q1 is a description of that system over there.” So I stand by the statement that I can as accurately make one claim as the other.
The map is not the territory.
…
I am saying that Q1 and “a squirrel eating a nut” are two different descriptions of the same system...
The map and territory perspective is effective when pointing out that the map is not the territory. A map of Texas is not Texas. However it would be wrong to conclude that a road map of Texas describes the same territory as an elevation map of Texas. Although both maps have a similar geographic constraint, they are not based on the same source data. They do not describe the same territory.
Consider this case. We show a picture E (evidence) to Frank and Glen. Frank’s response is “cat”. Glen’s response is “cute”.
By your prior statements I assume that you would say that “cat” and “cute” are both accurate descriptions of E, the picture.
Then Frank says “No, Glen is wrong—that funny looking cat is ugly!”
Glen responds, “No, Frank is wrong—that is a small fluffy dog!”
This conflict is caused by a false belief—not by a false belief about E—but by a false belief about what “cat” and “cute” actually describe.
Frank’s response “cat” describes F(E) -- Frank’s interpretation of the evidence. Glen’s response “cute” describes G(E) -- Glen’s interpretation of the evidence. Both statements are correct in that they are reasonable expressions of personal belief. From this perspective there is no conflict.
It is wrong to arbitrarily split out E and claim that any high level interpretation describes it.
Let’s say that Frank and Glen talk, and then they both conclude that E is a picture of a “cute dog”. Are they now describing E? No—and they are still not describing the same thing. When Frank says “cute dog” he is thinking about how he finds small dogs cute. When Glen says “cute dog” he is thinking about how he finds fluffy animals cute. So even though they have both encoded their conclusion to the same phrase “cute dog”, they do not mean the same thing.
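A toy model of that last point (the functions `frank` and `glen` are hypothetical stand-ins of my own, not anything from the exchange above): both interpretations encode to the same utterance, yet the interpretations themselves differ.

```python
def frank(evidence):
    # Frank's route to "cute dog": small animals are cute
    return {"utterance": "cute dog", "because": "small"}

def glen(evidence):
    # Glen's route to "cute dog": fluffy animals are cute
    return {"utterance": "cute dog", "because": "fluffy"}

E = "a picture"
f, g = frank(E), glen(E)

print(f["utterance"] == g["utterance"])  # True: same encoded phrase
print(f == g)                            # False: different interpretations
```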
Back to squirrels and quarks.
The chain of inference that leads to Q1 and the chain that leads to “a squirrel eating a nut” are different, even if at some level they share similar time-space constraints. Therefore Q1 and “a squirrel eating a nut” are not two different descriptions of the same system—they are different descriptions of different systems.
I know that this perspective violates our common understanding of the world, but it is our understanding that is wrong.
although “a squirrel eating a nut” depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.
We seem to agree that some stuff doesn’t need the human mind to exist—but perhaps we disagree on how to carve the world into what does and what doesn’t.
For clarity on this problem, let’s formalize it a bit:
Let S1 refer to the description “a squirrel eating a nut”.
Let Z refer to the system that S1 describes.
You claim that Z does not depend on a human mind to generate it; however Z necessarily includes portions of the human mind and body, including intermediate mind generated meanings. This human body/mind portion is everything from the moment that photons start entering the eye to the point where we come to the conclusion “hey, that’s a squirrel eating a nut”. So Z does depend in part on a human mind.
To deal with this, let’s split Z into two parts:
Let Ze refer to the part of Z that is entirely outside of the human body—the environment.
Let Zh refer to the rest of Z—the part that occurs within the human body.
Also, existence requires context. There are reasonable normative contexts that we could assume for this case, but let’s be specific:
Let R refer to the physical reality of the universe (whatever that is).
From this perspective I think that we can agree that both Ze and Zh exist within R, and that the existence of Ze within R does not depend in any way on the processing that occurs within Zh. For that matter, the existence of Zh within R doesn’t depend on the processing that occurs within Zh.
I’m not really following your overall line of reasoning, so here are a few responses to specific points:
Agreed that F(E) = “an ugly funny-looking cat” and G(E) = “a cute small fluffy dog” are both descriptions of E.
Not agreed that they are accurate descriptions. E is neither a cat nor a dog; E is a picture.
Agreed that to claim that F(E), or G(E), or any other “high-level interpretation” of E, fully describes E, is simply false. But I would say that F(E) and G(E) are (incomplete) descriptions of E. I understand that we disagree on this point.
I’m not at all sure what you mean by “arbitrarily splitting out E” in this example.
Agreed that if F2(E)=“a picture of a cute-by-virtue-of-being-small dog”, and G2(E)=“a picture of a cute-by-virtue-of-being-a-fluffy-animal dog”, then F2(E) != G2(E) -- that is, Frank and Glen don’t actually agree. It helps not to confuse their internal descriptions (F2 and G2), which are different, with their utterances (“E is a picture of a cute dog”), which are the same.
So, agreed that they “do not mean the same thing”—that is, their descriptions are not identical. But, again, I say that they are describing the same thing (E), although their descriptions (F2(E) and G2(E)) are different. Again, I understand that we disagree on this point.
I agree that the chain of inference that leads to formulating Q1 and the chain that leads to formulating “a squirrel eating a nut” are different. I don’t see how it follows that “they are [..] descriptions of different systems.”
Let S1 refer to the description “a squirrel eating a nut”. Let Z refer to the system that S1 describes.
OK, though I want to point out explicitly that S1 now refers to something different from what S1 previously referred to in this discussion.
I don’t think Z necessarily includes portions of the human mind and body, including intermediate mind generated meanings. But I agree that Z can include that and still be described by S1. And I agree that Z as you’ve defined it depends on a human mind.
But you seem to be asserting that (the old value of) S1 is the same system that Z is, and I disagree with that. (Old) S1 doesn’t include any photons or human eyes or human conclusions, and Z does.
I agree that Ze and Zh exist within R (although I don’t see how that expresses anything different than saying that Ze and Zh exist), and that Ze doesn’t depend on Zh. I also agree that the existence of Zh doesn’t depend on the specific processing performed by Zh, probably, though if we wanted to build on that statement it would likely be worthwhile to phrase it in a less confusing way.