[Question] FAI failure scenario

Imagine that research into creating a provably Friendly AI fails. At some point in the 2020s or 2030s, the creation of an unfriendly AI (UFAI) appears imminent. What measures could the AI Safety community then take?
