The AI Incident Database (incidentdatabase.ai) is a public repository that catalogs real-world cases of AI systems causing harm or behaving unexpectedly. Maintained by the Responsible AI Collaborative, it contains thousands of documented incidents — from self-driving car crashes to hiring algorithm bias to chatbot harassment — each tagged by harm type, AI system involved, and affected population. The database is a practical resource for governance teams: browsing incidents in your sector reveals the failure modes most likely to occur, informing your risk assessment and pre-deployment testing priorities. The EU AI Act's incident reporting obligations will generate additional structured incident data over time.
Why this matters for your team
Before deploying AI in your industry or use case, search the AI Incident Database for relevant incidents. Thirty minutes browsing real-world failures in your sector is more valuable than any theoretical risk framework — it shows you the specific failure modes to test for.
Before deploying an AI hiring tool, a team searches the AI Incident Database for 'recruitment' incidents, finds 15 documented cases of algorithmic bias in hiring, and uses the failure patterns to design targeted bias tests for their own system.
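The search-then-tally workflow in that example can be sketched in a few lines of Python. The incident records and field names below are hypothetical stand-ins, not the database's actual schema or API — in practice you would work from whatever export or search results you pull from incidentdatabase.ai:

```python
from collections import Counter

# Hypothetical incident records standing in for a real export from
# the AI Incident Database; titles, tags, and harm labels are
# illustrative only.
incidents = [
    {"title": "Resume screener downranks women",
     "tags": ["recruitment", "bias"], "harm": "discrimination"},
    {"title": "Chatbot harasses users",
     "tags": ["chatbot"], "harm": "harassment"},
    {"title": "Hiring video tool penalizes non-native accents",
     "tags": ["recruitment", "bias"], "harm": "discrimination"},
    {"title": "Self-driving car fails to detect pedestrian",
     "tags": ["autonomous-vehicles"], "harm": "physical"},
]

def incidents_for_sector(records, keyword):
    """Return incidents whose tags include the given sector keyword."""
    return [r for r in records if keyword in r["tags"]]

def harm_profile(records):
    """Tally harm types to surface the dominant failure modes."""
    return Counter(r["harm"] for r in records)

hiring = incidents_for_sector(incidents, "recruitment")
print(len(hiring))           # 2 matching incidents
print(harm_profile(hiring))  # Counter({'discrimination': 2})
```

The harm-type tally is the output a team would act on: if discrimination dominates the incidents in your sector, that is where your pre-deployment test budget should go first.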