I don’t put much stock in the specifics of Kurzweil’s schedule, but I’m skeptical of the relevance of some of Allen’s objections. I agree with the first comment on the article, that he’s too focused on replication of human intelligence as a prerequisite for a singularity.
Existing AI may be pretty brittle, but I suspect that by the time we can create AI that can, to use his example, infer that a tightrope walker would have excellent balance, Strong AI won't be far behind.