[Question] Best arguments against worrying about AI risk?

Since so many people here (myself included) are either working to reduce AI risk or would love to enter the field, it seems worthwhile to ask: what are the best arguments against doing so? This question is intended to focus on existential/catastrophic risks, not on issues like technological unemployment or bias in machine learning algorithms.
