I think I’m confused: what I assumed you meant was a chatbot in the sense of ELIZA (a program which uses canned replies chosen and modified as per a cursory scan of the input text). Such a program is by definition not a person, and success in Turing tests does not grant it personhood.
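For concreteness, here is a minimal sketch of the kind of keyword-matching such a program does: scan the input for a pattern, then fill a canned template with a fragment of the user's own words. The rules here are hypothetical illustrations, not Weizenbaum's actual script.

```python
import re

# Hypothetical ELIZA-style rules: each pairs a keyword pattern with a
# canned reply template that reuses part of the user's input.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    # Return the first matching canned reply, or a stock fallback.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # default canned reply
```

Note that nothing here models meaning at all: the program never represents what was said, only which surface pattern it matched, which is why success at this game seems so thin a basis for personhood.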
As for my second sentence: Turing’s imitation game was proposed as a way to get past the common intuition that only a human being could be a person, by countering it with the intuition that someone you can talk to, someone you can hold an ordinary conversation with, is a person. It’s an archetypal intuition pump, a very sensible and well-reasoned intuition pump, a perfectly valid intuition pump—but not a rigorous mathematical test. ELIZA, which is barely clever, has passed the Turing test several times. We know that ELIZA is no person.
Sorry, by chatbot I meant an intelligent AI programmed only to do chat. An AI trapped in the proverbial box.
I agree that a rigorous mathematical definition of personhood is important, but I doubt that I will be able to make a meaningful contribution in that area anytime in the next few years. For now, I think we should be able to think of some philosophical or empirical test of chatbot personhood.
I still feel confused about this, and I think that’s because we still don’t have a good definition of what a person actually is; but we shouldn’t need a rigorous mathematical test in order to gain a better understanding of what defines a person.
The Turing test isn’t a horrible test of personhood, from that perspective, but without a better understanding of ‘personhood’ I don’t think it’s appropriate to spend time trying to come up with a better one.