Hallucination is the term for when models like ChatGPT generate false information and present it as if it were true. Even though the AI may sound confident, the answers it gives are sometimes simply incorrect.
Why does this happen?
AI tools like ChatGPT are trained to predict the next word in a conversation based on the input they receive. They are very good at constructing sentences that sound plausible and realistic, but they don't understand the meaning behind the words, and they lack the reasoning needed to tell whether what they're saying is factually accurate or even makes sense. These models were never designed to be search engines. Instead, they can be thought of as “wordsmiths”: tools for summarizing, outlining, brainstorming, and similar tasks.
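To make “predicting the next word” concrete, here is a deliberately simplified sketch in Python. It is not how ChatGPT or any real model is implemented, and the example phrase and probabilities are invented for illustration; the point is only that the program picks whichever continuation is statistically likely, with no step that checks whether the result is true.

import random

# Toy illustration, not a real language model: the "model" only knows
# which words tended to follow a phrase in its training text.
# (The phrase and probabilities below are made up for this example.)
next_word_probs = {
    "The capital of Australia is": {"Sydney": 0.6, "Canberra": 0.3, "Melbourne": 0.1},
}

def predict_next_word(prompt):
    # Choose a continuation weighted by how likely it is, not by whether it is true.
    options = next_word_probs[prompt]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

print(predict_next_word("The capital of Australia is"))
# Usually prints "Sydney", which sounds plausible but is wrong (the capital is Canberra),
# because the program optimizes for likely wording, not factual accuracy.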
Therefore, we can't blindly trust everything they say, even if it sounds convincing. It’s always wise to double-check important information with other reliable sources.
Here’s a tip:
Models grounded in external sources of information (like web search results) hallucinate less often, because their answers are based on text the model has actually retrieved rather than on word prediction alone. A grounded model searches for relevant web pages, summarizes the results, and provides links to the pages that each part of the answer came from, which also makes the output easier to fact-check.
Examples of grounded models include Microsoft Copilot, Perplexity, and ChatGPT Plus (paid version).
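Here is an equally simplified Python sketch of the grounding idea (sometimes called retrieval-augmented generation). It is not how Copilot, Perplexity, or ChatGPT actually work; the documents, URLs, keyword scoring, and answer assembly below are all invented for illustration. The point is that the answer is tied to a specific retrieved source and cites it, so a reader can click through and verify the claim.

# A minimal sketch of grounding: look up sources first, then answer from them.
sources = [
    {"url": "https://example.org/canberra", "text": "Canberra is the capital city of Australia."},
    {"url": "https://example.org/sydney", "text": "Sydney is the largest city in Australia."},
]

def retrieve(question, documents):
    # Naive keyword overlap stands in for a real web search engine.
    question_words = set(question.lower().split())
    scored = [(len(question_words & set(doc["text"].lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score > 0]

def answer_with_citation(question, documents):
    retrieved = retrieve(question, documents)
    if not retrieved:
        return "No supporting source found."
    # A real grounded model would hand the retrieved text to the language model
    # and ask it to answer using only that text; here we simply quote the top source.
    top = retrieved[0]
    return f"{top['text']} [source: {top['url']}]"

print(answer_with_citation("What is the capital of Australia?", sources))
# Prints the Canberra sentence with its source link, so the claim can be checked.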
Adapted from "FAQs about generative AI" by Nicole Hennig, University of Arizona Libraries. Licensed under CC BY 4.0.