One way to put my point is that it can look for patterns and meaning, but only because it is capable of meaning things of its own. And it is not possible to program this into something; it has to be acquired through experience.
Again, why? We had a program capable of understanding simple sentences forty years ago, SHRDLU:
Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I don’t understand which pyramid you mean.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: By “it”, I assume you mean the block which is taller than the one I am holding.
Computer: OK.
Person: What does the box contain?
Computer: The blue pyramid and the blue block.
Person: What is the pyramid supported by?
Computer: The box.
Person: How many blocks are not in the box?
Computer: Four of them.
Person: Is at least one of them narrower than the one which I told you to pick up?
Computer: Yes, the red cube.
etc.
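The kind of behaviour in this transcript can be mimicked, in miniature, by perfectly ordinary symbolic code. This is a toy sketch of my own, not Winograd's actual program: the world model, the object properties, and the `resolve` helper are all invented for illustration. It shows how reference resolution and the "I don't understand which pyramid you mean" response can be programmed in directly, with no learning from experience involved.

```python
# A toy SHRDLU-style blocks world (hypothetical; not Winograd's code).
# The world is just a list of objects with properties; a noun phrase is
# resolved by filtering the world on the features the speaker mentioned.

WORLD = [
    {"id": "b1", "type": "block", "color": "red", "size": "big"},
    {"id": "b2", "type": "block", "color": "blue", "size": "small"},
    {"id": "p1", "type": "pyramid", "color": "green", "size": "small"},
    {"id": "p2", "type": "pyramid", "color": "blue", "size": "big"},
]

def resolve(noun, **features):
    """Return the unique object matching the description, or a clarification request."""
    matches = [o for o in WORLD
               if o["type"] == noun
               and all(o.get(k) == v for k, v in features.items())]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        return f"I don't see any such {noun}."
    # More than one candidate: ask for clarification, SHRDLU-style.
    return f"I don't understand which {noun} you mean."

# "Pick up a big red block" -> exactly one candidate, so the command succeeds.
print(resolve("block", color="red", size="big")["id"])
# "Grasp the pyramid" -> two pyramids in this world, so the program asks back.
print(resolve("pyramid"))
```

Obviously this is nowhere near SHRDLU's real coverage, but it makes the point concrete: ambiguity detection here is a counting rule over a hand-built world model, not something the program had to learn.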
I don’t see why it would be impossible to make something much better by “just programming it in”. Is there some kind of reading level that no algorithm can surpass if it doesn’t learn by experience?
I guess this comes down to the very complicated question of what ‘understanding a language’ amounts to. I take it we can agree that SHRDLU wasn’t thinking in any sense comparable to a human being or an AGI (since I take it we agree that SHRDLU wasn’t an AGI). But also notice that if your example is one of language-learning, you’ve picked a case where the thing doing the learning already knows (some substantial part of) a language.
And lastly, I wouldn’t consider this a counterexample to the claim that learning a language requires a familiarity with a shared world. The machine you describe is obviously making reference to a shared world in its conversation.