Having now read the second linked Harnad paper, my evaluation is similar to yours. Some more specific comments follow.
Harnad talks a lot about whether a body “has a mind”: whether a Turing Test could show if a body “has a mind”, how we know a body “has a mind”, etc.
What on earth does he mean by “mind”? Not… the same thing that most of us here at LessWrong mean by it, I should think.
He also refers to artificial intelligence as “computer models”. Either he is using “model” quite strangely as well… or he has some… very confused ideas about AI. (Actually, very confused ideas about computers in general are, in my experience, endemic among the philosopher population. It’s really rather distressing.)
Searle has shown that a mindless symbol-manipulator could pass the [Turing Test] undetected.
This has surely got to be one of the most ludicrous pronouncements I’ve ever seen a philosopher make.
people can do a lot more than just communicating verbally by teletype. They can recognize and identify and manipulate and describe real objects, events and states of affairs in the world. [italics added]
One of these things is not like the others...
Similar arguments can be made against behavioral “modularity”: It is unlikely that our chess-playing capacity constitutes an autonomous functional module, independent of our capacity to see, move, manipulate, reason, and perhaps even to speak.
Well, maybe our chess-playing module is not autonomous, but as we have seen, we can certainly build a chess-playing module that has absolutely no capacity to see, move, manipulate, or speak.
Most of the rest of the paper is nonsensical, groundless handwaving, in the vein of Searle but worse. I am unimpressed.
Yeah, I think that’s the main problem with pretty much the entire Searle camp. As far as I can tell, if they do mean anything by the word “mind”, it’s “you know, that thing that makes us different from machines”. So, we are different from AIs because we are different from AIs. It’s obvious when you put it that way!