An AI hallucination occurs when a generative AI system produces output that sounds plausible and authoritative but is factually wrong. The model isn't lying; it's pattern-matching from training data and producing the most likely-looking continuation, which sometimes doesn't correspond to reality.
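To make "the most likely-looking continuation" concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the case name, the candidate sentences, and the probabilities do not come from any real model or dataset. The point is that the decoding step only asks which continuation scores highest, so a fluent, citation-shaped fabrication can outscore an honest "I don't know."

```python
# Toy sketch, not a real model: the candidate continuations and their
# probabilities below are made up for illustration. A language model scores
# continuations by how likely they look given its training data, then emits
# a high-scoring one; nothing in this step checks the claim against reality.
toy_continuations = {
    "The appellate court held in Smith v. Acme Corp. (2019) that the clause was void.": 0.46,
    "I could not find a relevant case on this question.": 0.09,
    "Case law on this point is mixed; please verify with counsel.": 0.12,
    "The statute was amended in 2021 to narrow the exemption.": 0.33,
}

def pick_continuation(scores: dict[str, float]) -> str:
    """Greedy decoding: return whichever continuation scored highest."""
    return max(scores, key=scores.get)

print(pick_continuation(toy_continuations))
# Prints the confident, citation-shaped sentence, even though nothing in the
# scoring step verifies that "Smith v. Acme Corp. (2019)" actually exists.
```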
Common hallucination types in mid-market AI workflows:

- Invented citations and references: the AI cites a court case, study, or document that doesn't exist.
- Fabricated facts presented confidently: specific dollar figures, dates, or quotes that aren't in the source.
- False attribution: claiming a quote came from a person who didn't say it.
- Wrong but plausible legal, medical, or regulatory text: citation-shaped strings that look right but don't match the actual rules.
- Arithmetic errors, especially in long calculation chains.
What hallucinations are NOT: every wrong AI output. An AI can be wrong because the input was wrong, because the prompt was ambiguous, because the model didn't have the context, or because it explicitly flagged its answer as a guess. Hallucinations specifically are confident-sounding fabrications presented as fact.