Yes, there are plenty of people who don’t pass the Turing test — e.g., those who don’t speak the right language. For this reason, the Turing test, with a human or machine judge, is not a good nonperson predicate, contrary to the OP. But it can be taken as a person predicate. That is, if something passes a strict enough Turing test, it’s reasonable to regard that thing as a person.
But then this defeats the whole purpose: if we don’t want the AI to be a person, it won’t be able to pass the Turing test itself, and then it is unclear whether it could use the test to tell people from non-people.
Indeed.