I’m not really following your overall line of reasoning, so here are a few responses to specific points:
Agreed that F(E) = “an ugly funny-looking cat” and G(E) = “a cute small fluffy dog” are both descriptions of E.
Not agreed that they are accurate descriptions. E is neither a cat nor a dog; E is a picture.
Agreed that to claim that F(E), or G(E), or any other “high-level interpretation” of E, fully describes E, is simply false. But I would say that F(E) and G(E) are (incomplete) descriptions of E. I understand that we disagree on this point.
I’m not at all sure what you mean by “arbitrarily splitting out E” in this example.
Agreed that if F2(E) = “a picture of a cute-by-virtue-of-being-small dog” and G2(E) = “a picture of a cute-by-virtue-of-being-a-fluffy-animal dog,” then F2(E) != G2(E) -- that is, Frank and Glen don’t actually agree. It helps not to confuse their internal descriptions (F2 and G2), which are different, with their utterances (“E is a picture of a cute dog”), which are the same.
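The distinction can be made concrete with a small sketch (all names and structure here are my own illustration, not anything either of us has formally defined): model each internal description as structured data, and the utterance as a rendering that discards part of that structure. Two unequal descriptions can then render to the identical sentence.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Description:
    subject: str           # what the describer takes the picture to depict
    cuteness_reason: str   # the describer's internal grounds for "cute"

def utterance(d: Description) -> str:
    # The spoken sentence drops the internal reason entirely.
    return f"E is a picture of a cute {d.subject}"

F2 = Description(subject="dog", cuteness_reason="small")
G2 = Description(subject="dog", cuteness_reason="fluffy animal")

assert F2 != G2                        # internal descriptions differ
assert utterance(F2) == utterance(G2)  # utterances are identical
```

The point of the sketch is just that agreement at the level of utterances does not imply agreement at the level of descriptions, since the rendering is lossy.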
So, agreed that they “do not mean the same thing”—that is, their descriptions are not identical. But, again, I say that they are describing the same thing (E), although their descriptions (F2(E) and G2(E)) are different. Again, I understand that we disagree on this point.
I agree that the chain of inference that leads to formulating Q1 and the chain that leads to formulating “a squirrel eating a nut” are different. I don’t see how it follows that “they are [..] descriptions of different systems.”
Let S1 refer to the description “a squirrel eating a nut”. Let Z refer to the system that S1 describes.
OK, though I want to point out explicitly that S1 now refers to something different from what S1 previously referred to in this discussion.
I don’t think Z necessarily includes portions of the human mind and body, including intermediate mind-generated meanings. But I agree that Z can include those and still be described by S1. And I agree that Z as you’ve defined it depends on a human mind.
But you seem to be asserting that (the old value of) S1 is the same system that Z is, and I disagree with that. (Old) S1 doesn’t include any photons or human eyes or human conclusions, and Z does.
I agree that Ze and Zh exist within R (although I don’t see how that says anything more than that Ze and Zh exist), and that Ze doesn’t depend on Zh. I also agree, probably, that the existence of Zh doesn’t depend on the specific processing Zh performs, though if we wanted to build on that statement it would be worth phrasing it less confusingly.