Here is the wiki page for those confused by point 3 of this summary: it appears to be a generalized version of the argument that speech synthesis is hard to do realistically when the algorithm producing the speech is effectively deaf, and that synthesis would be better handled by a bidirectional program that also recognizes human speech (an argument partly inspired by the analogous situation of deaf people).
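To make the "bidirectional" point concrete, here is a minimal toy sketch (not from the summary or the wiki page; the dimensions, linear models, and learning rate are all assumptions made for illustration). A synthesizer that never hears its own output has no signal telling it how wrong it sounds, whereas one coupled to a recognizer can use the gap between what it intended and what a listener would perceive as feedback:

```python
import numpy as np

rng = np.random.default_rng(0)

PHONEME_DIM, AUDIO_DIM = 4, 8

# The articulation-to-sound mapping a perfect synthesizer would have to learn
# (a toy stand-in; unknown to the synthesizer itself).
true_map = rng.normal(size=(PHONEME_DIM, AUDIO_DIM))

def synthesize(phonemes, weights):
    """Toy synthesizer: linear map from phoneme features to 'audio'."""
    return phonemes @ weights

def recognize(audio):
    """Toy recognizer: maps 'audio' back to phoneme features.
    Assumed already trained (it knows the pseudo-inverse of the true map)."""
    return audio @ np.linalg.pinv(true_map)

def perception_error(weights, n=256):
    """How far what a listener hears is from what the speaker intended."""
    phonemes = rng.normal(size=(n, PHONEME_DIM))
    heard = recognize(synthesize(phonemes, weights))
    return float(np.abs(heard - phonemes).mean())

# The "deaf" synthesizer: random weights, and with no feedback it stays wrong.
weights = rng.normal(size=(PHONEME_DIM, AUDIO_DIM))
print("error with no feedback loop:    ", round(perception_error(weights), 4))

# Closed loop: listen to your own output through the recognizer and adjust
# the synthesizer to shrink the gap between intention and perception.
listen = np.linalg.pinv(true_map)
for _ in range(2000):
    phonemes = rng.normal(size=(32, PHONEME_DIM))
    heard = recognize(synthesize(phonemes, weights))
    error = heard - phonemes                        # perception vs. intention
    grad = phonemes.T @ (error @ listen.T) / len(phonemes)
    weights -= 0.2 * grad                           # feedback-driven correction

print("error after closed-loop training:", round(perception_error(weights), 4))
```

The point of the toy is only the structure: the recognizer plays the role of hearing, and the synthesizer improves solely because its output is fed back through that recognizer, which is the situation a "deaf" one-way pipeline never gets to exploit.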