That’s true, and unfortunately you could link it back to a phenomenal argument relatively straightforwardly by saying something like “AI will never be developed because anything technology does is just a computation, not thinking.”
In fact, laying out the argument explicitly just shows how weak it is, since it’s essentially just asserting AI is impossible by definition. Yet there are still people who would agree with the argument anyway. For instance, I was looking up an example of a debate about the possibility of AI (linked here http://www.debate.org/debates/Artificial-Intelligence-is-impossible/1/ ) and one side said:
“Those are mere programs, not AI.”
Now, later, the person said
“Yes but in your case, Gamecube or Debate.org is simply programming, not AI. There is a difference between simple programming and human-like AI.”
and then:
“This is not learning. These devices are limited by their programming, they cannot learn.”
But I suppose my point is that this first gets summed up with an extremely weak lead-in argument, which is essentially “You are wrong by definition!”, and which then has to be peeled back to get to a content argument like “Learning,” “Gödel,” or “Free Will.”
And it happens so often that it has its own name, rather than just being an example of a no true Scotsman.
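(As an aside, the “these devices are limited by their programming, they cannot learn” claim collapses under even a toy example. Here is a minimal Python sketch, just a made-up illustration, of a program whose predictions are shaped by the data it sees, even though its source code never changes:)

```python
# A minimal counterexample to "devices are limited by their programming,
# they cannot learn": a tiny frequency learner whose predictions change
# as it observes data, even though its code stays fixed.
from collections import Counter

class FrequencyLearner:
    def __init__(self):
        self.counts = Counter()

    def observe(self, symbol):
        # Update internal state from experience.
        self.counts[symbol] += 1

    def predict(self):
        # Predict the most frequently observed symbol so far.
        return self.counts.most_common(1)[0][0]

learner = FrequencyLearner()
for s in ["a", "b", "b", "b", "a"]:
    learner.observe(s)
print(learner.predict())  # prints "b"
```

The program is of course “limited by its programming” in the trivial sense that everything it does follows from its code, but its behavior is a function of its history, which is all “learning” means here.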
The most famous proponent of this “those are mere programs” view may be John Searle and his Chinese Room. I wouldn’t call that the weakest argument against AI, although I think his argument is flawed.
Many years ago when I first became interested in strong AI, my boss encouraged me to read Searle’s Chinese Room paper, saying that it was a critically important criticism and that any attempt at AI needed to address it.
To this day, I’m still shocked that anyone considers Searle’s argument meaningful. It was pretty clear, even back then with my lesser understanding of debate tactics, that he had simply ‘defined away’ the problem. That I had been told this was a ‘critically important criticism’ was even more shocking.
I’ve since read critical papers with what I would consider a much stronger foundation, such as those claiming that without whole-body and experience simulation, you won’t be able to get something sufficiently human. But the Searle category of argument still seems to be the most common, in spite of its lack of content.
He didn’t define away the problem; his flaw was different from a tautology. The fatal flaw he introduced was setting up a computational process and then substituting himself for that process when it came time to evaluate whether the process “understood” Chinese. Since he’s only a component of the process, it doesn’t matter whether -he- understands Chinese, only whether the -process- understands Chinese.
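To put that component/process distinction in concrete terms, here’s a deliberately silly sketch (the two-entry rulebook is made up; a real one would be astronomically larger). The point is that the operator is the same constant rule-follower in every conversation; whatever competence exists lives in the rulebook-plus-operator system as a whole.

```python
# Toy Chinese Room: the operator blindly matches symbol shapes against a
# rulebook, with no dependence on what the symbols mean. Any question of
# "understanding" can only be asked of the whole system, not the operator.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",    # "Do you speak Chinese?" -> "Of course."
}

def operator(symbols, rulebook):
    # The operator only pattern-matches; nothing here inspects meaning.
    return rulebook.get(symbols, "请再说一遍.")  # fallback: "Please repeat."

def chinese_room(question):
    # The "room" is the operator plus the rulebook, taken together.
    return operator(question, RULEBOOK)

print(chinese_room("你好吗?"))  # prints 我很好, 谢谢.
```

Asking whether `operator` understands Chinese is like asking whether a CPU understands chess: the wrong level of description.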
One has to commend Searle, though, for coming up with such a clear example of what he thinks is wrong with the then-current model of AI. I wish all people could formulate their philosophical ideas, right or wrong, in such a fashion. Even when they are wrong, they can be quite fruitful, as can be seen in the many papers responding to Searle and his Chinese Room, or even more famously to the EPR paradox paper.
That argument is primarily about what the word AI means, rather than an argument against AI as a phenomenon.
Every time I read something by Searle, my blood pressure rises a couple of standard deviations.