A person's legal right to receive a meaningful, human-understandable explanation of an automated decision that significantly affects them — covering the logic involved, its significance, and its likely consequences. Established under GDPR Articles 13–15 and 22, and reinforced by the EU AI Act's transparency requirements for high-risk AI systems. The right applies to any automated decision that produces legal effects or similarly significantly affects the data subject, such as credit refusals, job application rejections, insurance denials, and content moderation actions.
Why this matters for your team
If your AI system influences consequential decisions about individuals, design explainability in from the start — not as a compliance afterthought. A meaningful explanation requires knowing which features drove the decision, and most black-box models don't surface that by default. This is a legal requirement in the EU and increasingly under US state laws as well.
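What "surfacing which features drove the decision" can look like in practice: a minimal sketch, assuming a hypothetical linear scoring model with made-up feature names and weights (not any real screening system). The idea is that the model returns not just a verdict but ranked per-feature contributions, which can then be turned into a human-readable explanation.

```python
# Hypothetical linear applicant-scoring model. Feature names, weights,
# and threshold are illustrative assumptions, not a real system.
WEIGHTS = {"years_experience": 0.8, "skills_match": 1.2, "gaps_in_employment": -0.5}
THRESHOLD = 2.0

def score_with_explanation(applicant: dict) -> tuple[bool, list[tuple[str, float]]]:
    """Return the decision plus per-feature contributions, ranked by impact."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    # Sort by absolute contribution so the biggest drivers come first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return total >= THRESHOLD, ranked

accepted, reasons = score_with_explanation(
    {"years_experience": 1, "skills_match": 0.5, "gaps_in_employment": 2}
)
# `reasons` is the raw material for a meaningful explanation, e.g.
# "employment gaps lowered your score the most" — rather than the
# bare (and legally insufficient) "the algorithm scored you low".
```

For non-linear black-box models the same output shape is typically produced with post-hoc attribution methods (e.g. SHAP-style contribution scores) rather than reading weights directly; the point is that the decision pipeline must expose per-feature drivers, whatever technique computes them.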
Example: a job candidate whose application was rejected by an AI screening tool submits a GDPR access request asking for an explanation. The company must provide a meaningful description of the AI's decision logic — "the algorithm scored you low" is not sufficient.