Yes, it is indeed a common pattern.
People are likely to get agitated about the things they are actually working with, especially when those things are entangled with their state of knowledge, personal interests, and employment. The belief that we are the ones who will save the world really helps people find motivation to continue their pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values on others ("Communism will save the world from our greed").
On the other hand, I don’t think it is a bad thing. That way, we have many small groups, each working on its own subset of the problem space while also trying to save the world from whatever disaster it perceives to be the greatest danger. As long as the response is proportional to the actual risk, of course.
But I still agree with you that it is only prudent to treat any such claims with caution, so that we don’t fall into the trap of taking data from a small group of people working at the Asteroid Defense Foundation as our sole and true estimate of the likelihood and effect of an asteroid impact, without verifying their claims against an unbiased source. It is certainly good to have someone looking at the sky from time to time, just in case their claims prove true, though.
One possible explanation for why we as humans might be incapable of creating Strong AI without outside help:
Constructing Human Level AI requires sufficiently advanced tools.
Constructing sufficiently advanced tools requires sufficiently advanced understanding.
The human brain has “hardware limitations” that prevent it from achieving sufficiently advanced understanding.
Computers are free of such limitations, but if we want to program them to serve as sufficiently advanced tools, we still need that understanding in the first place.