Glossary

Hallucination

Definition

When an AI model generates information that sounds plausible but is factually incorrect, fabricated, or unsupported by its training data. Hallucinations can range from minor inaccuracies, such as attributing the wrong feature to your product, to entirely fabricated claims, such as invented awards or customer testimonials.

Why it matters

AI hallucinations about your brand can cause real damage: incorrect pricing, fabricated negative reviews, wrong product capabilities, or false claims about your company. These errors spread when users trust AI outputs without verification. Strong authority signals, consistent entity data, and structured markup reduce hallucination risk by giving AI models reliable facts to draw from instead of guessing.
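
For instance, consistent entity data can be published as schema.org markup in JSON-LD. The sketch below shows the general shape only; the organization name, URLs, and description are hypothetical placeholders, not a prescribed format.

<!-- Hypothetical example values; substitute your real entity data. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Widgets Inc.",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example-widgets",
    "https://en.wikipedia.org/wiki/Example_Widgets"
  ],
  "description": "Example Widgets Inc. manufactures industrial widgets."
}
</script>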

Used in context

A hotel chain discovered that ChatGPT was fabricating amenities and policies for several of its properties. After publishing comprehensive structured data for each location with schema markup, the hallucination rate dropped by 80% within six weeks of the next training-data refresh.
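
As a rough sketch of what such per-property markup might look like, schema.org's Hotel type can state check-in times, pet policies, and amenities explicitly. The hotel name, address, and amenities below are invented for illustration.

<!-- Hypothetical example values; substitute each property's real details. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Hotel",
  "name": "Harborview Inn",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Harbor Road",
    "addressLocality": "Portside",
    "addressRegion": "ME",
    "postalCode": "04101",
    "addressCountry": "US"
  },
  "checkinTime": "15:00",
  "checkoutTime": "11:00",
  "petsAllowed": false,
  "amenityFeature": [
    { "@type": "LocationFeatureSpecification", "name": "Free WiFi", "value": true },
    { "@type": "LocationFeatureSpecification", "name": "Outdoor pool", "value": true }
  ]
}
</script>

Stating these facts explicitly on each location's page gives a model reliable, citable data to draw from rather than leaving it to infer policies from scattered mentions.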