[Question] Do any of the AI Risk evaluations focus on humans as the risk?

I am not up on much of the AI risk discussion, but as an outsider it seems to me that most of the focus is on the AI itself taking actions.

I recall someone (here, I think) posting a comment about a bio research AI initiative that was searching for beneficial compounds. When asked whether their tools could instead be used to find harmful things, they inverted the search and apparently found a number of really dangerous compounds very quickly.

Does anyone look at, or have concerns or risk estimates for, this area? Is it possible that the risk from the emergence of a very powerful AI is less likely than it seems, because before that point some human armed with a less powerful AI ends the world first, or at least destroys modern civilization and sends us back to a stone-age hunter-gatherer world, before any AI gets powerful enough to do that for/​to us?
