When an AI model generates confident-sounding but factually incorrect or completely fabricated information. Language models don't 'know' things; they predict likely next tokens, so they can smoothly produce plausible-sounding nonsense. This makes hallucination one of the biggest challenges in deploying AI in production.
Hallucination is when an AI model generates confident, plausible-sounding information that's factually wrong or completely made up. A language model might cite a study that doesn't exist, invent a historical event, or confidently provide incorrect technical details. The output reads perfectly — the only problem is it's fiction.
This happens because language models are fundamentally pattern-completion machines. They predict what tokens are most likely to come next based on training data. They don't have a fact database they check against — they're producing probable sequences of words. If the pattern of "Author X wrote Book Y" is plausible given the context, the model might generate it even if it's wrong. This is a deep architectural limitation, not a bug that can be easily patched.
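The pattern-completion idea can be sketched with a toy next-token model. Everything here is invented for illustration (the "author" and "book titles" are made up, and real language models are vastly larger), but it shows the key point: generation samples from learned probabilities with no fact database to check against.

```python
import random

# Toy "language model": next-token probabilities over word pairs, learned
# purely from co-occurrence patterns. There is no store of facts anywhere,
# so fluent output can still describe a book that does not exist.
next_token_probs = {
    ("Author", "X"): {"wrote": 0.9, "said": 0.1},
    ("X", "wrote"): {"'The": 1.0},
    ("wrote", "'The"): {"Quiet": 0.6, "Long": 0.4},
    ("'The", "Quiet"): {"River'": 1.0},
    ("'The", "Long"): {"Road'": 1.0},
}

def complete(prompt_tokens, max_new=4):
    """Extend the prompt by sampling likely next tokens, one at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:
            break  # no learned pattern to continue from
        words, weights = zip(*dist.items())
        # Sample a plausible continuation -- fluent, never fact-checked.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(complete(["Author", "X"]))
```

The completion is grammatical and confident either way; whether the resulting title corresponds to a real book never enters the computation.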
Reducing hallucinations is one of the most active areas in AI research. Techniques include RAG (grounding responses in retrieved documents), training models to say "I don't know," and verification systems that cross-check claims. Some applications add citation requirements. But no approach eliminates hallucinations entirely. For any application where accuracy matters — medical, legal, financial — you need human verification or automated fact-checking systems in the loop.
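One of those verification ideas can be sketched as a simple cross-check against a trusted index. This is a minimal illustration, not a production system: the index, the helper name, and the second "paper" title below are all hypothetical stand-ins.

```python
# Hypothetical trusted index of known paper titles; a real system would
# query a bibliographic database instead of a hard-coded set.
trusted_index = {
    "Attention Is All You Need",
    "Deep Residual Learning for Image Recognition",
}

def verify_citations(citations):
    """Split model-generated citations into verified and suspect lists."""
    verified = [c for c in citations if c in trusted_index]
    suspect = [c for c in citations if c not in trusted_index]
    return verified, suspect

model_output = [
    "Attention Is All You Need",          # real paper
    "Quantum Gradient Descent at Scale",  # invented title for illustration
]
ok, flagged = verify_citations(model_output)
print("verified:", ok)
print("needs human review:", flagged)
```

The design point is that the check happens outside the model: anything the index cannot confirm is routed to a human rather than shown as fact.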
"Always verify AI-generated citations — hallucination means the model might confidently reference a paper that literally doesn't exist."
Connecting an AI model's outputs to verified, factual information sources.
Retrieval-Augmented Generation.
A mathematical function applied to a neuron's output that introduces non-linearity into the network.
An optimization algorithm that combines the best parts of two other methods — AdaGrad and RMSProp.
Artificial General Intelligence.
An autonomous AI system that can perceive its environment, make decisions, and take actions to achieve goals.