What most people mean when they doubt the credibility of those who claim that risks from AI need to be taken seriously, or who say that AGI might be far off, is that risks from AI are too vague to be taken into account at this point: nobody knows enough to make predictions about the topic right now.
So suppose you have some weak evidence that something might be a problem. One sensible course of action is to invest resources in studying and measuring it. Even the vague idea that AGI might be created at some point, and that it could go wrong, suggests a specific action: setting up a research institute.