AI Incident Sharing—Best practices from other fields and a comprehensive list of existing platforms

Purpose of this post: This post has three purposes: 1) to highlight the importance of incident sharing and share best practices from fields adjacent to AI safety, 2) to collect tentative and existing ideas for implementing a widely used AI incident database, and 3) to serve as a comprehensive list of existing AI incident databases as of June 2023.

Epistemic status: I have spent roughly 25 hours researching this topic, and this list is by no means meant to be exhaustive. It should give the reader an idea of relevant adjacent fields where incident databases are common practice, and it should highlight some of the more widely used AI incident databases that exist to date. Please feel encouraged to comment with any relevant ideas or databases I have missed; I will periodically update the list if I find anything new.

Motivation for AI Incident Databases

Sharing incidents, near misses and best practices in AI development decreases the likelihood of future malfunctions and large-scale risks. To mitigate risks from AI systems, it is vital to understand the causes and effects of their failures. Many AI governance organizations, including FLI and CSET, recommend creating a detailed database of AI incidents to enable information-sharing between developers, government and the public. Generally, information-sharing between different stakeholders 1) enables quicker identification of security issues and 2) boosts risk mitigation by helping companies take appropriate action against vulnerabilities.

Best practices from other fields

The National Transportation Safety Board (NTSB) publishes and maintains a database of aviation accidents, including detailed reports evaluating technological and environmental factors as well as potential human errors causing the incident. The reports include descriptions of the aircraft, how it was operated by the flight crew, environmental conditions, consequences of the event, the probable cause of the accident, etc. This meticulous record-keeping and the accompanying best-practice recommendations are among the key factors behind the steady decline in yearly aviation accidents, making air travel one of the safest forms of travel.

The National Highway Traffic Safety Administration (NHTSA) maintains a comprehensive database recording the number of crashes and fatal injuries caused by automobile and motor vehicle traffic, detailing information about the incidents such as specific driver behavior, atmospheric conditions, light conditions and road type. NHTSA also enforces safety standards for manufacturing and deploying vehicle parts and equipment.

Common Vulnerabilities and Exposures (CVE) is a cross-sector public database recording specific vulnerabilities and exposures in information-security systems, maintained by the MITRE Corporation. When a vulnerability is reported, it is examined by a CVE Numbering Authority (CNA) and entered into the database with a description and an identification of the affected information-security system and all of its versions to which the vulnerability applies.
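
To make the record structure concrete, below is a minimal sketch of what a CVE-style entry boils down to. The field names are illustrative simplifications of my own, not the actual CVE schema:

```python
from dataclasses import dataclass, field

@dataclass
class VulnerabilityRecord:
    """Simplified sketch of a CVE-style entry; field names are illustrative,
    not the real CVE schema."""
    record_id: str     # identifier assigned by a CNA, e.g. "CVE-YYYY-NNNNN"
    description: str   # what the vulnerability is and how it manifests
    affected_system: str  # the information-security system the entry applies to
    affected_versions: list[str] = field(default_factory=list)  # all affected versions

# Hypothetical entry:
entry = VulnerabilityRecord(
    record_id="CVE-2023-00000",
    description="Buffer overflow in the authentication handler allows remote code execution.",
    affected_system="ExampleServer",
    affected_versions=["1.0", "1.1", "1.2"],
)
```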

Information Sharing and Analysis Centers (ISACs) are entities established by important stakeholders in critical infrastructure sectors, responsible for 1) collecting and sharing actionable information about physical and cyber threats and 2) sharing best threat-mitigation practices. ISACs provide 24/7 threat-warning and incident-reporting services, delivering relevant and prompt information to actors in various sectors including automotive, chemical, gas utility and healthcare.

The National Council of Information Sharing and Analysis Centers (NCI) is a cross-sector forum for sharing and integrating information among the sector-based ISACs. It does so by facilitating communication between individual ISACs, organizing drills and exercises to improve security practices, and working with federal-level organizations.

The U.S. Geological Survey’s “Did You Feel It?” (DYFI) is a public database and questionnaire used to gather detailed reports about the effects of earthquakes in affected areas. It also gives users an opportunity to report earthquakes that are not yet in the Geological Survey’s records.

Tentative ideas and precedents for AI

Important stakeholders in the field of AI could set up a dedicated AI ISAC, which would handle information about AI accidents and best practices and could potentially even offer threat-mitigation services. Alternatively, an AI branch could be created under the existing Information Technology ISAC (IT-ISAC), which specializes in strengthening information-infrastructure resilience and operates a Threat Intelligence Platform that enables automated sharing and threat analysis among its member corporations.

The AI Incident Database (AIID) is a public repository of AI incidents, broadly construed as “situations in which AI systems caused, or very nearly caused, real-world harm”. Incidents can be submitted by any user and are monitored by experts from both the private sector (e.g. bnh.ai, a DC-based law firm) and academia (CSET). Reports include descriptions of events, identification of the models used, the actors involved, and links to similar incidents.
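
As a rough illustration, a report along these lines might reduce to a record like the following sketch. The schema is my own hypothetical, not AIID’s actual data model or API:

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    """Hypothetical AI incident record; a sketch, not AIID's actual schema."""
    incident_id: int
    description: str  # what happened and what harm occurred or was nearly caused
    models: list[str] = field(default_factory=list)    # AI systems identified as involved
    actors: list[str] = field(default_factory=list)    # developers, deployers, affected parties
    similar_incidents: list[int] = field(default_factory=list)  # links to related entries

# Hypothetical report:
report = IncidentReport(
    incident_id=42,
    description="A recommender system amplified harmful content to minors.",
    models=["example-recommender-v2"],
    actors=["ExampleCorp (deployer)", "platform users (affected)"],
    similar_incidents=[17, 23],
)
```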

AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) is a public repository of AI incidents run by a team of volunteers. As with AIID, submissions to the repository are made by the public and maintained by the volunteer editorial team. Reports are categorized by technology used or incident type and include the description and location of incidents, key actors and developers, as well as supporting evidence for the incident.

The EU AI Act mandates that developers of high-risk AI systems log information about serious risks and incidents into a public database maintained by the European Commission (EC). The EC will also provide technical and administrative support to the developers.

Awful AI is a database on the GitHub platform where users can report issues with AI systems in several categories, such as discrimination, surveillance or misinformation.

The AI Vulnerability Database (AVID), organized by the non-profit AI Risk and Vulnerability Alliance (ARVA), is a database as well as a comprehensive taxonomy of AI accidents. Its primary purpose is to help developers and product engineers avoid mistakes, share information and better understand the nature of AI-related risks.

The Observatory of Algorithms with Social Impact (OASI) Register, by the Eticas Foundation, is a database of algorithms used by governments and companies, highlighting their potential negative social impact and recording whether the systems were audited prior to deployment.

The Tracking Automated Government (TAG) Register is a database which tracks identified cases of automated decision-making (ADM) by the U.S. government. It assesses the transparency of the systems being used, whether citizens’ data is being protected, as well as potential unequal impacts of ADM usage.

The Bias in Artificial Intelligence: Example Tracker is maintained by the Berkeley Haas Center for Equity, Gender & Leadership, and its main purpose is to document bias in deployed AI systems. The database includes a detailed description of the particular bias displayed and of the AI system in question, the industry in which the incident happened, and information about whether the incident was followed up on by the relevant authorities.

Badness.ai is a community-operated catalog of harms involving generative AI. Harms can be browsed by category (e.g. inaccuracies, deepfakes, aggression or misinformation) or by the specific companies and models involved.

AI Observatory is a small database, run by Divij Joshi and funded by the Mozilla Foundation, which reports harmful uses of Automated Decision-Making Systems (ADMS) in India.

List of Algorithm Audits is a GitHub repository managed by the academic Jack Bandy (Northwestern University, USA). It is an extension of Bandy’s 2021 paper ‘Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits’ and documents empirical studies which demonstrated problematic behavior of an algorithm in one of the following categories: discrimination, distortion, exploitation and misjudgement.

The Wired Artificial Intelligence Database is a repository of Wired articles about AI systems, many of which are detailed essays and case studies of real-world harm caused by AI.

Cross-posted from the EA Forum: https://forum.effectivealtruism.org/posts/dikcpP32Q3cg6tvdA/ai-incident-sharing-best-practices-from-other-fields-and-a
