Q. What is hallucination (in models like ChatGPT)?

  Jan 07, 2025

Hallucination is the term for when models like ChatGPT generate false information and present it as if it were true. Even though the AI may sound confident, the answers it gives are sometimes simply incorrect.

Why does this happen?
AI tools like ChatGPT are trained to predict the next word in a conversation based on the input they receive. They are really good at constructing sentences that sound plausible and realistic. However, these AI models don't understand the meaning behind the words. They lack the logical reasoning to tell whether what they’re saying is factually accurate or makes sense. These models were never designed to be search engines. Instead, they can be thought of as “wordsmiths”—tools for summarizing, outlining, brainstorming, and similar tasks.
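To make the "predict the next word" idea concrete, here is a toy sketch in Python. It is not a real language model: the word table, counts, and the deliberately wrong continuation are invented for illustration. The point is that the program chooses words because they commonly follow one another, and nothing in it checks whether the resulting sentence is true.

```python
import random

# Toy table of how often one word has followed another in some imaginary
# training text. Real models learn billions of such patterns, but the idea
# is the same: continuations are chosen because they are common, not
# because they are true.
NEXT_WORD_COUNTS = {
    "the":       {"capital": 4},
    "capital":   {"of": 9},
    "of":        {"australia": 6, "france": 3},
    "australia": {"is": 7},
    "france":    {"is": 7},
    # "sydney" sounds plausible after "is" but is the wrong capital.
    "is":        {"canberra": 3, "sydney": 4, "paris": 3},
}

def predict_next(word: str) -> str | None:
    """Sample a next word in proportion to how often it followed `word`."""
    options = NEXT_WORD_COUNTS.get(word)
    if not options:
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(start: str, max_words: int = 6) -> str:
    """Keep predicting the next word until the toy model runs out of options."""
    words = [start]
    while len(words) < max_words:
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
# Might print "the capital of australia is canberra" (true) or
# "the capital of australia is sydney" (false but fluent). The toy model,
# like a real one, has no built-in way to tell the difference.
```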

Therefore, we can't blindly trust everything they say, even if it sounds convincing. It’s always wise to double-check important information with other reliable sources.

Here’s a tip:
Models grounded in external sources of information (like web search results) hallucinate less often. This is because the model searches for relevant web pages, summarizes the results, and provides links to the pages from which each part of the answer came. This makes it easier to fact-check the output. (A rough sketch of this search-then-cite workflow follows the examples below.)

Examples of grounded models include Microsoft Copilot, Perplexity, and ChatGPT Plus (paid version).
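As a rough illustration of the grounded workflow described in the tip above, here is a minimal Python sketch. The search step and the pages it returns are hypothetical stand-ins (no real web search happens here); what matters is the shape of the pipeline: retrieve pages, draft an answer from their snippets, and attach numbered source links so each claim can be checked.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    snippet: str

def search_web(query: str) -> list[Page]:
    """Hypothetical search step: a real grounded model would query a
    search engine here and get back pages relevant to the question."""
    return [
        Page("https://example.org/source-a", "Claim about the topic, according to source A."),
        Page("https://example.org/source-b", "Related detail, according to source B."),
    ]

def answer_with_citations(query: str) -> str:
    """Draft an answer from the retrieved snippets and number each source,
    so every part of the answer links back to the page it came from."""
    pages = search_web(query)
    body = " ".join(f"{p.snippet} [{i}]" for i, p in enumerate(pages, start=1))
    sources = "\n".join(f"[{i}] {p.url}" for i, p in enumerate(pages, start=1))
    return f"{body}\n\nSources:\n{sources}"

print(answer_with_citations("example question"))
```

Because every sentence in the drafted answer carries a numbered source, a reader can open the links and verify the claims, which is what makes grounded answers easier to fact-check.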


Adapted from "FAQs about generative AI" by Nicole Hennig, University of Arizona Libraries. Licensed under CC BY 4.0.

