The most direct reason the AI would kill us is that being nice is costly: an AI whose goals are completely orthogonal to human goals, but which still wants to grab resources, pays a far higher price for sparing us than people intuitively expect. So even if an unaligned AI retains a shard of alignment, billions of humans are still likely to be killed, and an existential catastrophe remains surprisingly likely:
https://www.lesswrong.com/posts/xvBZPEccSfM8Fsobt/what-are-the-best-arguments-for-against-ais-being-slightly#wy9cSASwJCu7bjM6H