I think I understand your point, and have preemptively written a response at http://lesswrong.com/lw/bob/reframing_the_problem_of_ai_progress/. (In short, if Watson becomes smarter-than-human in many domains, it seems inevitable that the technological progress involved will be useful for building FOOMable AIs, even if Watson isn’t itself FOOMable.) If this doesn’t address your point, then I’ve probably misunderstood it, in which case maybe you can restate it in more detail?