The chain of abstraction can, in humans, be continued indefinitely. On every level of abstraction we can build a new one. In this, we differ from other creatures.
This seems quite valuable, but I’m not convinced modern LLMs actually ground out any worse than your average human does here.
Hayakawa contrasts two different ways one might respond to the question, “What is red?” We could go, “Red is a colour.” “What is a colour?” “A perception.” “What is a perception?” “A sensation.” And so on, up the ladder of abstraction. Or we could go down the ladder of abstraction and point to examples of red things, saying, “these are red.” Philosophers (and Korzybski) call these two approaches “intensional” (with an “s”) and “extensional”, respectively.
I’m pretty sure any baseline LLM out there can handle all of this.
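If you'd rather check that claim than take my word for it, here's a minimal sketch of probing both directions of the ladder, assuming the OpenAI Python client; the model name is just a placeholder for whatever baseline you want to test. The text-only analogue of extensional “pointing” is naming concrete instances.

```python
# A minimal sketch of probing both directions of Hayakawa's ladder.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name below is illustrative, not an endorsement.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whatever baseline model you care about
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Intensional: climb the ladder of abstraction, definition by definition.
print(ask("What is red? What is a colour? What is a perception?"))

# Extensional: descend the ladder by pointing at instances.
print(ask("Name five everyday objects that are red."))
```

Strictly, Hayakawa's extensional move is pointing at actual red things, which for a model would mean a multimodal setup with images; naming red objects in text is only an approximation of that.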