[Question] What are some of the best introductions/breakdowns of AI existential risk for those unfamiliar?

Discussion of AI risk has gone mainstream in the past few months, and as usual when that happens, many people new to the field think they're experts and proceed to rehash the same arguments over and over. I think it would be convenient to have the following available:

  • An introduction to the core risk, explaining why it’s possible.

  • An index of common bad arguments about AI risk and in-depth responses to them.

  • An index of common good arguments about AI risk, with links to further reading on them.

All of these should require no background knowledge and be accessible to a general audience with no philosophical or mathematical training.

I was thinking of writing up such a guide, but I don’t want to duplicate effort. Does anything like this already exist?
