[Question] Good taxonomies of all risks (small or large) from AI?

Are there any good taxonomies or categorizations of risks from AI-enabled systems (broadly defined) that aren't focused solely on risks to society as a whole / global catastrophic risks? Ideally the taxonomy would cover things like accident risks from individual factory robots, algorithmic bias against individuals or groups, privacy and cybersecurity issues, misuse by hackers or terrorists, incidental job loss due to AI, etc. It should ideally also cover the big society-wide and global catastrophic risks; it just shouldn't be limited to those.