More things are intelligent than pass the Turing test (unless it is merely offered as a definition)
Yes, I am only considering the Turing test as a potential definition of intelligence, and I think this is obvious from the OP and all of my comments. See Chapter 7 of David Deutsch’s new book, The Beginning of Infinity. Something arbitrarily slow can’t pass a Turing test that depends on real-time interaction, so complexity theory lets us treat a Turing test as a zero-knowledge proof that the agent who passes it possesses something computationally more tractable than a lookup table. I do dismiss lookup tables, but the reason is that an iterated conversation in a Turing test is Bayesian evidence that the agent interacting with me can’t be using an exponentially large, and hence exponentially slow, lookup table.
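To make the "exponentially large" point concrete, here is a rough sketch (my own illustration, with made-up numbers) of how fast a conversational lookup table blows up: a table that maps every possible conversation history to a reply needs an entry per history, and the number of histories is exponential in conversation length.

```python
# Hypothetical parameters for illustration only.
VOCAB = 1000        # distinct words the judge might use
TURN_LENGTH = 10    # words per judge utterance

def table_entries(turns):
    """Number of lookup-table entries needed to cover every possible
    sequence of `turns` judge utterances (one reply per history)."""
    utterances_per_turn = VOCAB ** TURN_LENGTH  # 10^30 possible utterances
    return utterances_per_turn ** turns         # exponential in turn count

# One turn already needs 10^30 entries; three turns need 10^90 --
# more entries than there are atoms in the observable universe.
for t in range(1, 4):
    print(t, table_entries(t))
```

This is why iterating the conversation is evidence against the lookup-table hypothesis: each extra turn multiplies the required table size by another astronomical factor.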
I agree with you that a major component of intelligence is how the knowledge is embedded in the program. If the knowledge is embedded solely by some external creator, then we don’t want to label the result intelligent. But how do we detect whether creator-embedded knowledge is a likely explanation? That has to do with the hardware it is implemented on. Because Watson runs on such massive resources, the explanation that it produces answers by searching a store of data is more plausible. If Watson achieved the same results on much less capable hardware, the hypothesis that its responses are “merely pre-sorted embedded knowledge” would become less likely (assuming I knew no details of the software Watson used, which is one of the conditions of a Turing test).
If you tell me something can converse with me, but that it takes 340 years to formulate a response to any sentence I utter, then I strongly suspect the implementation is arranged such that it is not intelligent. Similarly, if you tell me something can converse with me, and it only takes 1 second to respond reasonably, but it requires the resources of 10,000 humans and can’t produce responses of any demonstrably better quality than humans, then I also suspect it is just a souped-up version of a stupid algorithm, and thus not intelligent.
The behavior alone is not enough. I need details of how the behavior happens, and if I’m lacking detailed explanations of the software program, then details about the hardware resources it requires also tell me something.
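The Bayesian reading of the last three paragraphs can be sketched in a few lines. This is my own toy model, not anything from the thread: two hypotheses ("lookup-table-style program" vs. "intelligent program"), and the likelihood of the observed conversational behavior depends on the hardware available. All the numbers are invented for illustration.

```python
def posterior_lookup_table(prior, p_behavior_if_lookup, p_behavior_if_intelligent):
    """Posterior probability of the lookup-table hypothesis after observing
    the behavior, via Bayes' theorem over the two hypotheses."""
    numerator = prior * p_behavior_if_lookup
    denominator = numerator + (1 - prior) * p_behavior_if_intelligent
    return numerator / denominator

# Massive hardware: brute-force search could plausibly produce the behavior,
# so the behavior barely moves the prior.
print(posterior_lookup_table(0.5, 0.9, 0.9))   # stays at 0.5

# Modest hardware: a brute-force store almost certainly couldn't keep up in
# real time, so the same behavior is strong evidence against the lookup table.
print(posterior_lookup_table(0.5, 0.01, 0.9))  # drops to about 0.011
```

The point of the sketch is just that identical behavior yields different posteriors depending on the hardware, which is what makes hardware details informative when the software is a black box.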
If the laws of physics are very different from what I think they are, one could fit a lookup table inside a human-sized body. That would not make it intelligent, any more than expanding the size of a human brain would make it cease to be intelligent.
But it would mean that having a conversation with a person was not conclusive evidence that he or she wasn’t a lookup table implemented in a human substrate.
Would you say that neurosurgery is “teaching”, if one manipulates the brain’s bits such that the patient knows a new fact?
Yes, absolutely. “Regular” teaching is exactly that, just achieved more slowly by communication over a noisy channel.