I guess this comes down to the very complicated question of what ‘understanding a language’ amounts to. I take it we can agree that SHRDLU wasn’t thinking in any sense comparable to a human being or an AGI (since I take it we agree that SHRDLU wasn’t an AGI). But also notice that if your example is one of language-learning, you’ve picked a case where the thing doing the learning already knows (some substantial part of) a language.
And lastly, I wouldn’t consider this a counterexample to the claim that learning a language requires a familiarity with a shared world. The machine you describe is obviously making reference to a shared world in its conversation.