Generally, these things work by faking understanding, not by understanding.
The original ELIZA was just a set of pattern-matching substitution rules designed to mimic a non-directive Rogerian psychotherapist (fashionable at the time). If you mentioned something, it might respond “Tell me more about [insert thing].” If you went along with the charade, you could have what felt like deep and meaningful conversations with it, but you could just as easily expose its basic cluelessness by saying something like “My my!”, to which it might respond “Tell me more about your your.”
If GPT-2 is the state of the art in text generation, then, to judge by this, it’s no more lifelike than ELIZA.
Actually, ELIZA would probably respond with “Tell me more about your my.”
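To make the mechanism concrete, here is a minimal sketch of an ELIZA-style rule engine in Python. The rules, names, and reflection table are hypothetical stand-ins, not Weizenbaum’s originals; whether you get “your your” or “your my” comes down to whether the echoed fragment is run through the pronoun swaps.

```python
import re

# A minimal sketch of an ELIZA-style rule engine (hypothetical rules and
# names, not Weizenbaum's originals). Each rule pairs a regex with a
# response template that echoes part of the user's input back.

# First-person -> second-person swaps applied to the echoed fragment.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why are you {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap pronouns in the echoed text ('my dog' -> 'your dog')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str, swap_echo: bool = True) -> str:
    text = user_input.strip().rstrip("!.?")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            echo = match.group(1)
            return template.format(reflect(echo) if swap_echo else echo)
    return "Please go on."  # generic non-directive fallback

print(respond("My job is stressful"))      # Tell me more about your job is stressful.
print(respond("My my!"))                   # Tell me more about your your.
print(respond("My my!", swap_echo=False))  # Tell me more about your my.
```

Either way, the bot never models what was said; it just reshuffles the surface text, which is exactly the “faking understanding” at issue here.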