An AI hallucination occurs when a generative AI system produces output that sounds plausible and authoritative but is factually wrong. The model isn't lying; it's pattern-matching against its training data and emitting the statistically most likely continuation, with no internal check on whether that continuation is true.
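To make that mechanism concrete, here is a minimal Python sketch (the corpus, prompt, and figures are invented for illustration): a toy "model" that answers with whatever continuation was most frequent in its training data. It has no notion of whether the answer is true for the question being asked, only of what usually comes next.

```python
from collections import Counter

# A toy "model": it has seen three training sentences and, asked to continue
# the prompt "Q3 revenue grew", simply emits the most frequent continuation.
corpus = [
    "Q3 revenue grew 12% year over year",
    "Q3 revenue grew 12% year over year",
    "Q3 revenue grew 8% year over year",
]
prompt = "Q3 revenue grew "

# Count every continuation that followed the prompt in the training data.
continuations = Counter(
    line[len(prompt):] for line in corpus if line.startswith(prompt)
)

# Greedy decoding: pick the most likely-looking continuation. The answer is
# fluent and confident, and wrong for any company whose growth wasn't 12%.
answer, count = continuations.most_common(1)[0]
print(prompt + answer)  # -> "Q3 revenue grew 12% year over year"
```

Real models do the same thing at vastly larger scale; the fluency of the output is a property of the pattern-matching, not evidence of accuracy.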
Common types in mid-market AI workflows include:
- Invented citations and references
- Fabricated facts (specific dollar figures, dates, quotes)
- False attribution of statements or sources
- Wrong-but-plausible legal, medical, or regulatory text
- Arithmetic errors in long calculation chains
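What these failure modes share is that they inject concrete, checkable specifics. A cheap first line of defense in a review workflow is to flag those specifics for verification before anything ships. Below is a minimal Python sketch; the regex patterns and category names are illustrative assumptions, not a standard taxonomy.

```python
import re

# Patterns for hallucination-prone specifics. These regexes are assumptions
# for illustration; tune them to the documents your workflow actually handles.
RISK_PATTERNS = {
    "dollar_figure": re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?(?:\s?(?:million|billion))?"),
    "year":          re.compile(r"\b(?:19|20)\d{2}\b"),
    "direct_quote":  re.compile(r"\"[^\"]+\"|\u201c[^\u201d]+\u201d"),
    "citation":      re.compile(r"\bet al\.|\bv\.\s+[A-Z]\w+"),
}

def flag_specifics(text: str) -> list[tuple[str, str]]:
    """Return (category, snippet) pairs a human or retrieval step should verify."""
    hits = []
    for category, pattern in RISK_PATTERNS.items():
        hits.extend((category, m.group(0)) for m in pattern.finditer(text))
    return hits

draft = 'Per Smith v. Acme (2019), the settlement was $4.2 million, "a record sum".'
for category, snippet in flag_specifics(draft):
    print(f"[verify] {category}: {snippet}")
```

This doesn't detect fabrication on its own; it routes every fluent specific to a verification step, which is the point: treat confident specifics as unverified until checked.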
Not every wrong AI output is a hallucination. The term applies specifically to confident-sounding fabrications presented as fact.