ZM, it’s clear where your biases are. What do you propose to do to overcome them?
The stakes of this “guess” are very high; the ethical implications of getting it wrong are enormous. There are strong arguments both ways, as I expect you know (the Chinese room argument, for instance).
The designers of the simulation or emulation fully intend it to pass the Turing test; fooling the interviewer is the software’s explicit purpose. That alone makes me doubt the reliability of my own judgment on the matter.
By the way, I have been talking about this stuff with non-AI-obsessed people over the last few days. Several of them independently pointed out that some humans would fail the Turing test. Does that mean it would be OK to turn them off?
Ultimately the Turing test is about the interviewer as much as the interviewee; it’s about what it takes to kick off the interviewer’s empathy circuits.
The idea of asking anyone besides an AI-obsessed person whether they “have qualia” is amusing, by the way. The best Turing-test-passing answer is most likely “huh?”