Bostrom’s book is a bit out of date, and perhaps isn’t the best reference on the AI safety community’s current concerns. Here are some more recent articles:
Disentangling arguments for the importance of AI safety
A shift in arguments for AI risk
The Main Sources of AI Risk?
Thanks. I’ll further add Paul’s post What Failure Looks Like, and say that the Alignment Forum sequences raise a lot more specific technical concerns.