What about mathematical concepts which are made out of their relationships with one another, for example that of a vector?
Although humans almost certainly learn language by observing examples of the referents of words, this doesn’t explain how large language models, with access only to text, learnt the meanings of the same words. I would argue that the meaning of these words was contained in the way they were embedded in a semantic web of other words, allowing the LLM to learn the meanings of all of them simultaneously. Similarly, even if a human first learnt about vectors by seeing physical displacements between objects denoted by arrows, it would be possible to learn about vectors by defining them as entities that can be added together with other vectors in a particular way, etc.
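To make “in a particular way, etc.” concrete, here is a sketch of the standard vector space axioms: anything satisfying them counts as a vector, with no reference to arrows or displacements required.

```latex
% Standard vector space axioms over a field F.
% For all u, v, w in V and all scalars a, b in F:
\begin{align*}
u + (v + w) &= (u + v) + w  && \text{associativity of addition} \\
u + v       &= v + u        && \text{commutativity of addition} \\
v + 0       &= v            && \text{additive identity } 0 \in V \\
v + (-v)    &= 0            && \text{additive inverses} \\
a(bv)       &= (ab)v        && \text{compatibility of scalar multiplication} \\
1v          &= v            && \text{scalar identity} \\
a(u + v)    &= au + av      && \text{distributivity over vector addition} \\
(a + b)v    &= av + bv      && \text{distributivity over scalar addition}
\end{align*}
```

A vector is then simply anything that stands in these relations: arrows, polynomials, and sequences of numbers all qualify equally.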
As far as I know, how LLMs learn language is an open question. It’s clearly different from how humans do it, because LLMs lack the learning-from-sense-experience process we use to ground the meanings of words that are defined in terms of other words. It’s possible that LLMs don’t learn words the way we do, and that the sense in which an LLM gives a word meaning differs from how a human does it. That could be true even though humans and LLMs can communicate, with each experiencing the other as producing words they interpret as grounded in their own way of grounding meaning.
As for mathematical concepts made out of relationships to each other, the story from the human perspective is the same: they are grounded in experience with the words that back up those other words. Doing mathematics where everything floats free is possible, but the symbols seem to take on meaning only because they get grounded by how they are used. Arguably, that is what humans do, so maybe it’s what LLMs do too; only LLMs lack the point-at-a-thing-to-know-it operation, so their use of words has meaning grounded in use, but not in use that describes sensory experience.
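As a toy illustration of meaning grounded purely in use: the sketch below (with an assumed mini-corpus, and nothing like a real LLM) recovers a crude notion of word similarity from nothing but patterns of co-occurrence in text.

```python
# A toy sketch of the distributional idea: word "meaning" recovered
# purely from co-occurrence with other words, never from pointing at things.
# The corpus and the within-sentence window are illustrative assumptions.
import numpy as np

corpus = [
    "cats chase mice", "dogs chase cats", "cats eat fish",
    "dogs eat meat", "mice eat cheese", "fish swim", "mice run",
]
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Count how often each pair of words appears in the same sentence.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for w in sent:
        for v in sent:
            if w != v:
                cooc[index[w], index[v]] += 1

# Factor the co-occurrence matrix; each row becomes a low-dimensional
# "embedding" determined entirely by the word's relations to other words.
u, s, _ = np.linalg.svd(cooc)
emb = u[:, :2] * s[:2]

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two word embeddings."""
    x, y = emb[index[a]], emb[index[b]]
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-9))

# In this toy corpus, "cats" and "dogs" play similar roles, so they
# typically end up closer to each other than to "cheese".
print(similarity("cats", "dogs"), similarity("cats", "cheese"))
```

Nothing in this sketch ever points at a cat; whatever structure the embeddings capture comes entirely from how the words are used alongside one another.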
Regarding “the symbols seem to take on meaning only because they get grounded by how they are used”: I would argue that they don’t need to be applied to anything other than pure mathematics in order to take on meaning. They are therefore not empirically grounded, even if our understanding of them tends to be related to empirical experience.
If they are being applied to pure mathematics, then they are being used to do mathematics. Math, in an important sense, doesn’t exist when it’s not being done.