Perhaps the Turing test would work better if instead of having to pass for a human, the bot’s insightfulness were rated and it had to be at the same level as a human’s. Insightfulness seems harder to fake than “sounds superficially like a human” and it’s what we care about anyway.
As a plus, it will make it easier for autistic people to pass the Turing test.
How do they “fail”?
If autistic people get classed as non-humans then that’s a failure on the part of the assessing human beings and merely forms part of the baseline to which you are comparing the machines.
The humans are your control so that you can’t set silly standards for the machines. The humans can’t fail any more than the control rats in a drug trial can fail.
If you think insightfulness is a good way to test AIs, then the person having the conversation with the AI can just say: “Hey, do you have an insight that you can share on X?”
Yes, but in the standard Turing test, the AI is then judged on how human it seems, not how insightful.