Hallucination
A hallucination occurs when an AI language model generates plausible-sounding but factually incorrect or entirely fabricated information, with no reliable signal to the user that it is wrong. The model does not 'know' it is hallucinating; it simply generates the most statistically likely next tokens given its training. Hallucinations are a core risk in any AI deployment involving factual claims, legal analysis, medical information, financial guidance, or customer-facing communications. Governance controls for hallucination typically include human review requirements, RAG-based (retrieval-augmented generation) grounding, and output verification steps.
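One common pattern combines RAG-based grounding with an automated verification pass: the draft answer is checked against the passages the retriever actually returned, and anything citing a source the retriever never saw is held back for review. Below is a minimal sketch in Python; the citation format, helper names, and document IDs are illustrative assumptions, not a specific product's API.

```python
import re

# Minimal sketch of an output-verification step for RAG-based grounding.
# The [source: ...] citation convention and the document IDs are
# hypothetical placeholders used only for illustration.

def verify_citations(draft_answer: str, retrieved_passages: list) -> dict:
    """Flag any cited source that does not appear in the retrieved context.

    A citation the retriever never returned is a strong hallucination
    signal and should go to human review instead of straight to the user.
    """
    cited = re.findall(r"\[source:\s*([^\]]+)\]", draft_answer)
    known_ids = {p.split("|", 1)[0].strip() for p in retrieved_passages}
    unverified = [c.strip() for c in cited if c.strip() not in known_ids]
    return {"grounded": not unverified, "unverified_citations": unverified}


# Example: the model cites "Smith v. Jones 2019", which was never retrieved.
passages = ["doc-0042 | Contract law summary ...", "doc-0117 | Liability clauses ..."]
answer = "The clause is unenforceable [source: Smith v. Jones 2019] [source: doc-0042]."

result = verify_citations(answer, passages)
if not result["grounded"]:
    print("Hold for human review:", result["unverified_citations"])
```

A check like this does not prove the answer is correct; it only catches one failure mode (fabricated sources), which is why it is paired with human review rather than replacing it.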
Why this matters for your team
If you use AI for anything customer-facing, legal, medical, or financial, hallucination is your top liability risk. Build in human review before AI outputs are acted on, and tell users clearly when content is AI-generated.
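That review requirement can be enforced in code rather than by policy alone: a gate that labels every output as AI-generated and holds high-risk outputs until a person approves them. The sketch below is a minimal Python illustration; the topic categories, queue shape, and label wording are assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of a human-review gate in front of AI output.
# Topic names and label text are illustrative assumptions.
HIGH_RISK_TOPICS = {"legal", "medical", "financial", "customer_facing"}


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, text: str, topic: str) -> Optional[str]:
        """Label the output as AI-generated, and release it immediately
        only if the topic is low risk; otherwise hold it for a reviewer."""
        labeled = f"{text}\n\n[This content was AI-generated.]"
        if topic in HIGH_RISK_TOPICS:
            self.pending.append(labeled)  # held until a human approves
            return None
        return labeled

    def approve_next(self) -> Optional[str]:
        """Called by a human reviewer after checking the output for hallucinations."""
        return self.pending.pop(0) if self.pending else None


queue = ReviewQueue()
held = queue.submit("Our warranty covers accidental damage for 12 months.", "legal")
print(held)                  # None: high-risk output is not released automatically
print(queue.approve_next())  # released only after a person has reviewed it
```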
A chatbot confidently cites a legal case that does not exist. This is a hallucination — the model generated a plausible-sounding citation with no factual basis.