A structured evaluation required by the EU AI Act (Article 27) for certain deployers of high-risk AI systems — notably bodies governed by public law and private entities providing public services — assessing the potential impact of the AI system on fundamental rights, including privacy, non-discrimination, freedom of expression, equal treatment, access to justice, and other rights protected under the EU Charter of Fundamental Rights. The assessment must be conducted before the system is first used, and must be documented, kept up to date, and made available to market surveillance authorities on request. It overlaps with, but extends beyond, a GDPR Data Protection Impact Assessment (DPIA), covering rights that are not purely data-related.
Why this matters for your team
If you deploy a high-risk AI system in the EU, this assessment is mandatory before going live. Start with your DPIA (which overlaps) and extend it to cover non-data rights like equal treatment and access to services. Document it formally — 'we thought about it' is not sufficient for regulatory inspection.
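The documentation step above can be sketched as a structured record that links back to the DPIA and tracks a mitigation per identified risk. This is a hypothetical illustration only: the class and field names here are not prescribed by Article 27 or any template, and a real assessment would follow your regulator's expected format.

```python
from dataclasses import dataclass, field

@dataclass
class RightsImpactEntry:
    """One assessed fundamental right and its mitigation (hypothetical schema)."""
    right: str                  # e.g. "equal treatment"
    affected_groups: list       # who could be disadvantaged
    risk_description: str
    mitigation: str             # empty string = not yet addressed

@dataclass
class FundamentalRightsAssessment:
    """Minimal documented assessment record; structure is illustrative only."""
    system_name: str
    deployer: str
    dpia_reference: str         # link to the overlapping GDPR DPIA
    entries: list = field(default_factory=list)

    def is_ready_for_deployment(self) -> bool:
        # At least one right assessed, and every identified risk has a
        # documented mitigation — "we thought about it" does not pass.
        return bool(self.entries) and all(e.mitigation for e in self.entries)

# Mirrors the local-government example: a benefit-prioritization tool
# that risks disadvantaging non-native language speakers.
fria = FundamentalRightsAssessment(
    system_name="benefit-prioritization-tool",
    deployer="Example Municipality",
    dpia_reference="DPIA-2024-017",
)
fria.entries.append(RightsImpactEntry(
    right="equal treatment",
    affected_groups=["non-native language speakers"],
    risk_description="Input pipeline under-weights applications written in a second language",
    mitigation="Correct the input data pipeline before launch",
))
print(fria.is_ready_for_deployment())  # True once every risk has a mitigation
```

Keeping the record as structured data (rather than a free-form memo) makes it straightforward to produce on request and to update when the system or its deployment context changes.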
A local government deploying an AI tool to prioritize social benefit payments conducts a fundamental rights impact assessment, identifying that the system may disadvantage non-native language speakers — and modifying the input data pipeline to correct this before launch.