Algorithmic Impact Assessment (AIA)
A structured process for evaluating the potential harms, benefits, and societal effects of an AI or algorithmic system before and during deployment. AIAs are required or recommended by a growing number of AI regulations and frameworks, including Canada's Directive on Automated Decision-Making and the Colorado AI Act (for consequential decisions). An AIA typically covers: system description, intended use and affected populations, potential risks and mitigation measures, governance structures, and ongoing monitoring plans. For small teams, a lightweight AIA for each high-stakes AI use case is a practical starting point.
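The components listed above can be sketched as a lightweight, machine-checkable template. This is a minimal illustration, not a standard schema; the class and field names here are assumptions chosen to mirror the sections named in the definition:

```python
from dataclasses import dataclass, fields

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative AIA template; field names are assumptions, not a regulatory schema."""
    system_description: str
    intended_use: str
    affected_populations: list[str]
    risks_and_mitigations: dict[str, str]  # maps each identified risk to its mitigation measure
    governance: str
    monitoring_plan: str

    def missing_sections(self) -> list[str]:
        """Return the names of sections left empty, as a simple completeness check."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Hypothetical example for a resume-screening system:
aia = AlgorithmicImpactAssessment(
    system_description="Resume-screening model that ranks job applications",
    intended_use="Shortlist candidates for human review",
    affected_populations=["job applicants"],
    risks_and_mitigations={
        "bias against protected groups": "quarterly disparity audit",
    },
    governance="",  # left blank to show the completeness check
    monitoring_plan="Monthly accuracy and appeal-rate review",
)
print(aia.missing_sections())  # ['governance']
```

A simple check like `missing_sections()` lets a small team treat the one-page AIA as a gate: the assessment is not "done" until every section has content.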
Why this matters for your team
An AIA is a structured "what could go wrong" document for high-stakes AI use cases. For any AI system that makes or influences consequential decisions about people, a one-page AIA written before deployment is your first line of legal and ethical defense.
Before deploying an AI system to screen job applications, a company conducts an algorithmic impact assessment covering potential bias against protected groups, the accuracy of the screening criteria, and the appeals process for rejected candidates.