As far as I can tell, most academic work calls this question the “Church-Turing Thesis,” or, more specifically, the “Strong Church-Turing Thesis.” (As the SEP points out here, the strong version is actually a distinct thesis, not the one Turing himself advanced.) It is regarded as an empirical, open question in academia, though almost everyone agrees it is true. (AFAICT, this applies to both the regular and the strong thesis.) Many papers mention the strong version of the thesis, but most thinking about it deals with specific critiques, since the best positive arguments are simply demonstrations of AIs doing things people do. (There’s also a lot of material that’s flat-out irrelevant; see, e.g.:
Dershowitz, Nachum & Gurevich, Yuri (2008), “A Natural Axiomatization of Computability and Proof of Church’s Thesis”, The Bulletin of Symbolic Logic, doi:10.2178/bsl/1231081370, http://research.microsoft.com/en-us/um/people/gurevich/Opera/188.pdf)
An additional specific critique: some (e.g. Searle and Block) have argued that machines cannot have minds. Searle, for example, has argued via his Chinese Room argument that no machine can be conscious:
Searle, John (1980), “Minds, Brains and Programs”, Behavioral and Brain Sciences 3 (3): 417–457, doi:10.1017/S0140525X00005756, retrieved May 13, 2009
Similarly, Dreyfus has argued (in effect) that most human thinking is unconscious, and that it will never be possible to program a computer to carry out these sorts of unconscious thought processes. He wrote many articles and books conveying this and other critiques, but his general claim was that early AGI researchers were making unwarranted philosophical assumptions; see here.
I don’t think the Church-Turing Thesis is quite equivalent, because someone might think (even without good reasons for thinking it) that some human behavior (say, “having mathematical insights”) is not algorithmically computable.
As I understand Searle, his views aren’t relevant here, because even if we got to the point where AIs could replace all human workers, Searle would still insist they aren’t really thinking.
Dreyfus sounds interesting, though; I’ll have to look into it.
Turing machines aren’t even necessary; all that’s needed is that computing systems be understandable, and then buildable.
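To make the point concrete, here is a toy illustration of how little machinery "algorithmically computable" actually requires: a Turing machine is nothing but a finite rule table acting on an unbounded tape. This is a minimal sketch of my own (the `run_tm` helper and rule format are purely illustrative, not from any source); the example machine increments a binary number.

```python
# A minimal Turing machine simulator: a finite rule table plus an
# unbounded (sparse) tape. Rules map (state, symbol) to
# (symbol_to_write, head_move, next_state).

def run_tm(tape, rules, state="carry", blank="_"):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = max(cells)                  # start at the rightmost symbol
    while state != "halt":
        sym = cells.get(head, blank)
        write, move, state = rules[(state, sym)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment: moving left, flip trailing 1s to 0s until we can
# write a 1 over a 0 (or over the blank past the leftmost digit).
INC = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "N", "halt"),
    ("carry", "_"): ("1", "N", "halt"),
}

print(run_tm("1011", INC))  # 1011 (11) + 1 -> 1100 (12)
print(run_tm("111", INC))   # 111 (7) + 1 -> 1000 (8)
```

The whole model fits in a dozen lines, which is part of why the thesis is so plausible: any system that can implement a rule table like this, with enough memory, can in principle compute anything any Turing machine can.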