[Question] What is the best critique of AI existential risk arguments?

If you could link to an article or other piece of media, that would be ideal; writing one up here is fine as well. A roughly equivalent question would be “what is the best argument for the claim that there is a <1% probability of AI existential risk?”