The tendency of users or operators to defer to AI outputs without sufficient critical evaluation, even when the system is operating outside its area of competence, hallucinating, or producing subtly incorrect guidance. Over-reliance is exacerbated by confident presentation, authoritative language, and speed — when an AI responds instantly with apparent certainty, users naturally discount the need to verify. It is a governance failure mode distinct from the AI producing errors: the AI may be behaving exactly as designed while the human fails to apply appropriate skepticism.
Why this matters for your team
The biggest governance failure in many AI deployments isn't a bad model — it's capable users who stopped checking the model's work. Design AI tools to surface uncertainty, require confirmation for high-stakes actions, and build periodic human spot-checks into operating procedures. Train your team on when to trust versus verify.
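Those three safeguards — surfacing uncertainty, confirming high-stakes actions, and periodic spot-checks — can be sketched as a simple review gate. This is a minimal illustration, not a production design; the `AISuggestion` fields, thresholds, and function names are all hypothetical, and a real deployment would tune them to its own risk tolerance.

```python
import random
from dataclasses import dataclass

@dataclass
class AISuggestion:
    action: str          # hypothetical action the AI proposes
    confidence: float    # model-reported confidence in [0, 1]
    high_stakes: bool    # flagged by policy, e.g. refunds, account changes

def needs_human_review(s: AISuggestion,
                       confidence_floor: float = 0.9,
                       spot_check_rate: float = 0.05,
                       rng=random.random) -> bool:
    """Return True when any guardrail routes the suggestion to a human."""
    if s.high_stakes:                    # always confirm high-stakes actions
        return True
    if s.confidence < confidence_floor:  # surface uncertainty to the operator
        return True
    # Periodic spot-check: randomly sample even "trusted" outputs so the
    # team never fully stops reviewing.
    return rng() < spot_check_rate

# Usage: a high-stakes refund always goes to a human; a routine FAQ reply
# passes unless it loses the spot-check lottery.
refund = AISuggestion("issue_refund", confidence=0.97, high_stakes=True)
faq = AISuggestion("send_faq_link", confidence=0.95, high_stakes=False)
```

The random spot-check is the piece teams most often omit: without it, a streak of good outputs quietly ratchets human review down to zero, which is exactly the failure described above.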
A customer support team using AI-generated response drafts stops checking citations after a few weeks of good performance. When the model's underlying knowledge becomes stale, support agents continue trusting and sending incorrect information — not because the AI suddenly changed, but because the team stopped reviewing.