Incidentally, humans have been known to fail Turing tests once in a while, being mistaken for computers instead.
Also, there has been one ELIZA-style program that managed to come across as human even to people who knew they might be talking to a machine: it imitates a paranoid schizophrenic who believes the Mafia is out to get him, and if asked about anything else, it insists on steering the conversation back to the Mafia. Apparently, psychiatrists couldn't tell its transcripts apart from those of real patients… which probably says more about people with severe mental illness than it does about artificial intelligence.
Sure. And there are vast numbers of artificial systems out there that can successfully emulate catatonic humans over a text interface, or even perfectly healthy sleeping humans. As you say, none of that says much about AI.