Although I generally disagree with catastrophic framings of AI risk (framings that AI CEOs have exploited to drum up interest in their products, as I recently wrote in my newsletter), the AI safety debate is an important one, and it concerns all of us.
Opinions differ on the current path of AI development and its possible futures, and there are many gray zones and unanswered questions about how to mitigate risk and avoid harm.
Yudkowsky has been researching AI alignment for over 20 years, and together with Soares he has built a strong argument for why AI safety concerns are urgent and why action is needed now. Whether or not you agree with their tone, their book is worth reading.
That’s not very consistent with my understanding of the words “endorsed IABIED” from the OP.
This is what she says: