The notion of truth and reality I wish to debunk is the denial that they’re human mental creations. My argument is a mild form of reductio ad absurdum. That is, I first make “factual” claims as if they’re independent of our mental creations. In particular, I take the current scientific worldview that entropy-exporting metabolism is the basis for life. That then leads to the conclusion that entropy-exporting (via predictive modeling) must therefore also be the basis of the syntax and semantics of human language. Thus our notions of truth and reality, and whatever narratives and semantics we attach to them, must also be mental creations.
Note that this conclusion is generally considered unproblematic for basically all other human words (e.g. “beauty”, “pizza”, etc). I can think of no reason why “truth” and “reality” should get a special carve-out clause.
I still don’t understand. Suppose our notion of a pizza is in some sense a “mental creation”. What is the significance of that, in your argument? I don’t think you’re denying that pizzas exist.
You’re correct, I don’t deny pizzas exist. I don’t even deny that truth and reality exist. But I am arguing for what I believe is a more robust semantic model for the word “exist”. My point is that semantic models aren’t set in stone, nor do they fall from the sky; they’re necessarily human creations. In fact every human carries a slightly different semantic model for all our words, but we rarely notice it because our use of them normally coincides so well. That’s how we can all play Wittgenstein’s language game and feel we understand each other. (Which LLMs do as well, but in their case we have no idea what models they use).
One might think that, if their use coincides so well, who cares what semantic model is behind it all? But even in everyday life there are many, many cases where our human semantic models diverge. Even for seemingly unproblematic words like “exist”. For example, does a rainbow exist? What about the greatest singer of all time? Do the past and future exist? Or finally, back to the pizza: let’s say I drop it and the slices scatter on the floor—does it still exist?
These examples and many more illustrate, to me at least, that the canonical semantic model for “exist” (that is, a model that insists it somehow transcends human modeling) has too many failure modes to be serviceable (quite apart from being incoherent in principle).
On the other hand, a semantic model that simply accepts that all words and concepts are products of human modeling seems to me robustly unproblematic. But I can see I need to do a better job of spelling that out in my follow-up essay.
On the other hand, a semantic model that simply accepts that all words and concepts are products of human modeling
If the words and concepts are, it doesn’t follow that their referents are. “Moon” is a word we invented for something we didn’t invent. You can’t claim that the moon as such is a human invention any more than you can claim it is four letters long.
And note that the contrary theory need not be anything on the lines of “human concepts are wholly caused and necessitated by the external non-mental world”. The contrary theory need only be that concepts can refer to non-concepts. Are you familiar with the use/reference distinction? The concept of a pointer in programming?
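The pointer analogy can be made concrete with a small illustrative snippet (the names and values here are hypothetical, chosen only to show the use/reference distinction):

```python
# Two human-invented names ("maps") can refer to one and the same object ("territory").
moon = {"diameter_km": 3474}  # the dict object is the referent
luna = moon                   # a second label for the same referent

# Properties of the label are not properties of the referent:
assert len("moon") == 4             # the *word* "moon" is four letters long
assert moon["diameter_km"] == 3474  # the *referent* has a diameter, not a letter count

# Adding a third label changes nothing about the referent:
selene = luna
assert selene is moon  # all three names point at the same object
```

The point of the sketch: inventing or multiplying names is an operation on the map side only; the object referred to is untouched by it.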
Thanks again for your comments, they’re a great help. Hopefully my response below also addresses your other comments.
So yes, I’m familiar with the use/reference aka map/territory distinction. The latter is a very good way to frame the issue, so I’ll go with that nomenclature here.
Normally map v. territory is an extremely useful distinction that science has made fantastic progress with. But there are also well-known historical cases where it has been a conceptual trap. For example, Newton and Kant assumed that our “maps” of space and time are distinct from the “territory” of space and time. Einstein argued instead that they’re only meaningful through their operational definitions—i.e. their maps. Only in dropping the distinction between map and territory here can one get to relativity theory. In quantum mechanics, at least in the Copenhagen interpretation, there is also no distinction between map and territory for physical observables. Even if one rejects the Copenhagen view, Heisenberg’s original insight came from treating physical observables as maps, not territory. To take a more mundane example: earlier cultures saw rainbows as territory; today we generally accept that rainbows, like faces in clouds, are only human maps.
If I may paraphrase what I take to be your view: your response to the above might be to say “yes, the face in the cloud is just a map, but the cloud itself is not. It is real territory regardless of my map.”
My response to that is: “Actually, what you refer to as ‘cloud’ is also just a map: namely, conceptual shorthand for a loose agglomerate of water vapor and other particles reflecting enough visible light to create a signal in your visual system. Your brain models that by mapping it all into ‘cloud’.”
Your response to that might be: “OK, even if ‘cloud’ is just a map, the water vapor and other particles are certainly real territory regardless of my map.”
My response to that is: “Actually, water vapor and particulate matter are also just maps. ‘Water’ is a chemist’s map of a bound state of two hydrogen and one oxygen atom. Same goes for the other particulate matter. And we can continue like this: ‘atoms’ are physics shorthand for bound states of elementary particles, which are in turn shorthand for certain energy states of quantum fields, vibrating superstrings, or whatever the physics theory du jour claims they are.”
My point is that every attempt to claim a thing as “real territory” always winds up being a human map. Even a broader claim like “It can’t all be maps. There must be some ‘real territory’ in a ‘world beyond’” is also a human mapping exercise. After all, we humans can only think and communicate with our human maps.
Whenever we distinguish between map and territory what we are doing is creating an internal model consisting of two parts: a “my maps” part and “the territory aka the world beyond” part. Again, that is usually a wonderfully helpful way to partition our maps, but, so I argue, not always.
In traditional philosophy, there’s a three-way distinction between nominalism, conceptualism and realism. Those are three different theories intended to explain three sets of issues: the existence of similarities, differences and kinds in the world, the territory; the way concept formation does and should work in humans; and issues to do with truth and meaning, relating the map and territory.
But conceptualism comes in two varieties.
On the one hand, there is the theory that correct concepts “carve nature at the joints” or “identify clusters in thingspace”: the theory of Aristotle and Ayn Rand. On the other hand is the “cookie cutter” theory, the idea that the categories are made by (and for) man: Kant’s “Copernican revolution”.
In the first approach, the world/territory is the determining factor, and the mind/map can do no better than reflect it accurately. In the second approach, the mind makes its own contribution.
Which is not to say that it’s all map, or that the mind is entirely in the driving seat. The idea that there is no territory implies solipsism (other people only exist in the territory, which doesn’t exist) and magic (changing the map changes the territory, or at least, future observations). Even if concepts are human constructions, the territory still has a role, which is determining the truth and validity of concepts. Even if the “horse” concept is a human construct, it is more real than the “unicorn” concept. In cookie cutter terms, the territory supplies the dough, the map supplies the outline.
So Kantianism isn’t a completely idealistic or all-in-the-map philosophy...in Kant’s own terminology it’s empirical realism as well as transcendental idealism. It’s not as idealistic as Hegel’s system, for instance. Similarly, Aristoteleanism isn’t as realistic as Platonism—Plato holds not just that there are mind-independent concepts, but that they’re in their own independent realm.
So, although the conceptualisms are different, they are both somewhere in the middle.
My point is that every attempt to claim a thing as “real territory” always winds up being a human map.
And I’m saying it’s not a binary. Even if we use human made concepts to talk about the territory, we are still talking about the territory.
The point I’m trying to express (and clearly failing at) isn’t conceptualism or solipsism, at least not in the way my own semantic modeling interprets them. As I interpret them, the idealism of, say, Berkeley, Buddhism et al amounts to a re-branding of reality from being “out there” to “in my mind” (or “God’s mind”). I mean it differently, but because I refer constantly to our mental models, I can see why my argument looks a lot like that. Ironically, my failure may be a sort of illustration of the point itself. Namely, the limits of using language to discuss the limitations of language.
In fact, the point I’m trying to get to is not so much about “the nature of reality” but about the profound limitations of language. And that our semantic models tend to fool us into assigning a power to language that it doesn’t have. Specifically, we can’t use the language game to transcend the language game. Our theories of ontology and epistemology can’t coherently claim to refer to things beyond human language when these theories are wholly expressed in human language. Whatever model of reality we have, it’s still a model.
The objection of realism is that our models are not created in isolation, but by “actual reality” interacting with our modeling apparatus. My response is: that is a very useful way to model our modeling, but like all models, it has limitations. That is, I can make a mental model called “realism” in which there are mental models on the one hand and “real reality” on the other. I can further imagine the two interact in such a way that my models “carve reality at the joints”, or “identify clusters in thingspace”. But all of that is itself manifestly a mental model. So if I then want to coherently claim a particular model is more than just a model, I have to create a larger model in which the first model is imagined to be so. That can be fine as far as it goes. But realism—the claim of a “reality” independent of ANY model—commits one to an infinite nesting of mental models, each trying to escape their nature as mental models.
This situation is a close analog to the notion of “truth” in mathematics. Here the language game is explicitly limited to theorem-proving within formal systems. But we know there are unprovable statements within any formal system. So if I want a particular unprovable statement to count as “true”, I need a larger meta-system that makes it so. That’s fine as far as that goes. But to use the language game of formal systems to claim an unprovable statement is true independent of ANY proof, I would need an infinite nesting of meta-systems. That’s clearly incoherent, so when mathematicians want to claim “truth” in this way they have to exit the language game of formal systems—i.e. appeal to informal language and the philosophy of Platonism.
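The nesting of meta-systems can be written out explicitly (a standard sketch from Gödel’s second incompleteness theorem, assuming a consistent, effectively axiomatized theory extending arithmetic):

```latex
T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n)
```

Each consistency statement $\mathrm{Con}(T_n)$ is unprovable in $T_n$ itself, but becomes provable one level up in $T_{n+1}$. The resulting tower $T_0 \subset T_1 \subset T_2 \subset \cdots$ never closes: no single system in the sequence certifies the truth of all the consistency statements below it.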
Personally I’m not a fan of Platonism, but it works as a philosophy of mathematics in so far as it passes the buck from formal to informal language. But that’s also where the buck stops. The sum of formal and informal language has no other system to appeal to, at least not one that can be expressed in language. To sum it all up with another metaphor: the semantic modeling behind the philosophy of realism overloads the word “reality” with more weight than the human language game can carry.
The point I’m trying to express (and clearly failing at) isn’t conceptualism or solipsism, at least not in the way my own semantic modeling interprets them. As I interpret them, the idealism of, say, Berkeley, Buddhism et al amounts to a re-branding of reality from being “out there” to “in my mind” (or “God’s mind”). I mean it differently, but because I refer constantly to our mental models, I can see why my argument looks a lot like that.
That’s your objection to solipsism. What’s your objection to conceptualism?
And that our semantic models tend to fool us into assigning a power to language that it doesn’t have.
Who’s “us”? Some philosophers? All philosophers? Some laypeople? All laypeople?
Our theories of ontology and epistemology can’t coherently claim to refer to things beyond human language
Except that you just did. Well, you did in general. There’s a problem in referring to specific things beyond our language. But who’s doing that? Kant isn’t. He keeps saying that the thing in itself is unknowable. So what’s the problem with Kantian conceptualism?
Whatever model of reality we have, it’s still a model.
Whatever reality is, it’s still reality. You still haven’t said how the two are related.
But all of that is itself manifestly a mental model.
A model of something real. “Is a model” doesn’t mean “is false”.
So if I then want to coherently claim a particular model is more than just a model, I have to create a larger model in which the first model is imagined to be so.
Does “more than a model” mean “true”?
But realism – the claim of a “reality” independent of ANY model—commits one to an infinite nesting of mental models, each trying to escape their nature as mental models.
I don’t see why. And if you reject realism, you have solipsism, which you also reject.
So if I want a particular unprovable statement to count as “true”, I need a larger meta-system that makes it so
You can do that with larger systems, adding the theorem as an axiom, but you can also do that with different systems.
But that’s all rather beside the point… minimally realism requires some things to be true, and truth to be something to do with the territory.
Personally I’m not a fan of Platonism, but it works as a philosophy of mathematics in so far as it passes the buck from formal to informal language
There’s no reason why meaning and truth in maths have to work like meaning and truth in not-maths, or vice versa.
To sum it all up with another metaphor: the semantic modeling behind the philosophy of realism overloads the word “reality” with more weight than the human language game can carry.
You need to notice the difference between truth and justification/proof. Truth, even realistic truth, is so easy to obtain that you can get a certain amount of it by random guessing. The tricky thing is knowing why it is true...justification.
This is a bit of a side note but may still be interesting: I suppose the history of scientific paradigm shifts can be framed as updates to our “map” v. “territory” partitions. A good scientific theory (in my account) is exactly what converts what was ostensibly “territory” into explicit mathematical models, i.e. “maps”.
must therefore also be the basis of the syntax and semantics of human language.
“Basis” is ambiguous. What makes language work causally, what makes it meaningful, and what makes it true, where it is, are different questions. If truth is a relationship between a living organism and a world beyond it, you can’t reduce it to just the metabolism of the organism, for instance.
Thanks for the comment. Honestly it took me a while to disambiguate (i.e. translate to myself what you’re getting at). So I take it as an interesting example of the point I was actually trying to make to Mitchell Porter previously. Namely, that our semantic models of normally unproblematic words can diverge quite a bit. E.g., my model for “truth” is not “a relationship between a living organism and a world beyond it”. Rather in my model, “the world beyond” is ultimately also part of our internal modeling. That’s because the very fact that we humans imagine and form narratives around “the world beyond” makes it per se a product of our internal models. Only magical thinking can escape this conclusion, but then we jettison the whole project of rationalism and science, imv.
BTW I do totally get how uncomfortable, frustrating and head-spinning this view is. But it wouldn’t be the first frustrating, head-spinning thing we’ve had to face about ourselves and “the world beyond”. Gödel’s Theorem, quantum mechanics and general relativity are all about head-spinning epistemic limitations. (That’s NOT to claim my little argument is on par with these illustrious examples!). But once we get used to them, they’re also a rich source of new scientific insights. In particular, I believe the view I argue for has quite serviceable benefits in that regard—at least it has for me. But I need to lay that out in another essay.
Rather in my model, “the world beyond” is ultimately also part of our internal modeling.
The word “beyond” *means* “not in our heads”. You’re just not respecting that.
That’s because the very fact that we humans imagine and form narratives around “the world beyond” makes it per se a product of our internal models.
It’s possible to put that in a non head spinning way: the world is the world and not in our heads; our thoughts about the world are in our heads.
Many words can be used in an “in the head”/“on the map” way, and also in an “in the world”/“in the territory” way...and it’s also possible to disambiguate by using special phrases like “per se” and “as such”, or “for me” and “in my view”. That way finger/moon confusions are avoided.
BTW I do totally get how uncomfortable, frustrating and head-spinning this view
It’s unnecessarily uncomfortable, etc. If you simply keep track of whether you are using a word to refer to a territory feature or a map feature, the confusion vanishes.
Only magical thinking can escape this conclusion,
Believing that you thought the world per se into existence is magical thinking!
but then we jettison the whole project of rationalism and science, imv.
Correct use of language can remove that conclusion.
The “world per se” should refer to the territory, not our models of it.
The phrase “per se” *means* “not in our heads”. You’re just not respecting that.