His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. “We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right”.
Is this actually right, or is it just based on your piece praising Searle’s pessimism? I don’t recall any breakdown favoring philosophers in the original analysis of the dataset.
I extracted the best I could from Searle’s “non-predictive” argument—I didn’t praise his pessimism ;-)
I’d have phrased it as “there are some pretty good philosophical arguments about AI (e.g. Omohundro), while timeline predictions seem to be uniformly ungrounded”. I certainly wouldn’t have said that a generic philosophical argument on AI was good (see all the permutations of “Gödel’s theorem, hence no AI”).
The way he quoted you certainly makes it sound like you think something along those lines.
Quotes are not always entirely accurate. I’m sure this fact is surprising to people here :-P
Actually it’s not that bad, in terms of presenting a complex idea; not what I would have written, but acceptable to get people thinking on the issues.