The ability to describe how an AI system arrived at a particular output or decision in terms that humans can understand. Explainability is legally significant in some contexts (GDPR Article 22 restricts solely automated decisions that significantly affect individuals, and, read with Recital 71, is widely interpreted as granting a right to an explanation of them) and operationally important for debugging, auditing, and building user trust. There is a spectrum from full transparency (the model's reasoning is completely legible) to post-hoc explanation (a separate method generates an explanation after the fact). Most large language models are not inherently explainable: they produce outputs without exposing their reasoning process.
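To make "post-hoc explanation" concrete, here is a minimal sketch of one common technique, permutation importance: shuffle one feature's values across inputs and measure how much the model's output moves. The scoring model, its weights, and the applicant data below are all hypothetical, invented purely for illustration; real systems would apply the same idea to an opaque trained model.

```python
import random

# Hypothetical "black box": a linear credit-scoring function.
# In practice this would be an opaque trained model.
WEIGHTS = {"income": 0.6, "debt": -0.3, "age": 0.1}

def score(row):
    return sum(WEIGHTS[f] * row[f] for f in WEIGHTS)

# Small synthetic dataset of hypothetical applicants.
data = [
    {"income": 0.9, "debt": 0.2, "age": 0.5},
    {"income": 0.4, "debt": 0.8, "age": 0.3},
    {"income": 0.7, "debt": 0.5, "age": 0.9},
    {"income": 0.2, "debt": 0.1, "age": 0.6},
]

def permutation_importance(data, n_repeats=100, seed=0):
    """Importance of a feature = average absolute change in the model's
    output when that feature's values are shuffled across rows."""
    rng = random.Random(seed)
    baseline = [score(row) for row in data]
    importances = {}
    for feat in WEIGHTS:
        total = 0.0
        for _ in range(n_repeats):
            shuffled = [row[feat] for row in data]
            rng.shuffle(shuffled)
            for i, row in enumerate(data):
                permuted = dict(row, **{feat: shuffled[i]})
                total += abs(score(permuted) - baseline[i])
        importances[feat] = total / (n_repeats * len(data))
    return importances

imp = permutation_importance(data)
# Income has the largest weight, so shuffling it moves scores the most.
print(sorted(imp, key=imp.get, reverse=True))  # → ['income', 'debt', 'age']
```

Note that this explains the model's behavior from the outside, without ever inspecting its internals, which is exactly what distinguishes post-hoc explanation from full transparency.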
Why this matters for your team
If you use AI to make decisions that significantly affect individuals (credit, hiring, pricing, content moderation), you may have a legal obligation to explain those decisions: GDPR Article 22 and several US state and local laws impose explainability requirements on automated decision-making. Verify that your model can actually produce meaningful explanations before you deploy it.