What’s your #1 reason to care about AI risk?

It’s way late in my time zone, and I suspect this question isn’t technically coherent: the right answer to “why care about AI risk?” is probably complicated, with a bunch of parts that can’t be separated from each other. But I’m going to share a thought I had anyway.

It seems to me that the answer to the question of how to make AIs benevolent probably isn’t vastly more complicated than the answer to the question of how to make them smart. What’s worrisome about our current situation, however, is that we’re putting way more effort into making AIs smart than into making them benevolent.

Agree? Disagree? Have an orthogonal answer to the title question?