[Question] Best arguments against worrying about AI risk?

Since so many people here (myself included) are either working to reduce AI risk or would love to enter the field, it seems worthwhile to ask what the best arguments against doing so are. This question is intended to focus on existential/catastrophic risks, not on issues like technological unemployment or bias in machine learning algorithms.