I’m kind of baffled that people are so willing to say that LLMs understand X, for various X. LLMs do not behave with respect to X like a person who understands X, for many X.
Just think of anything you've wanted to use a gippity to understand where it didn't quickly work: you asked follow-up questions and it didn't understand what was happening, didn't propagate propositions, didn't ask for clarification, etc.
Do you have two or three representative examples?