If the AI were superintelligent and would otherwise kill everyone else on Earth—yes. Otherwise, no. The hard cases are those where the uncertainties are high and difficult to quantify.