Definition
What this term means
When an AI model generates information that sounds plausible but is factually incorrect, fabricated, or unsupported by its training data. Hallucinations can range from minor inaccuracies, such as attributing the wrong feature to your product, to entirely fabricated claims, such as inventing awards or customer testimonials that do not exist.
Why it matters
The business impact
AI hallucinations about your brand can cause real damage: incorrect pricing, fabricated negative reviews, wrong product capabilities, or false claims about your company. These errors spread when users trust AI outputs without verification. Strong authority signals, consistent entity data, and structured markup reduce hallucination risk by giving AI models reliable facts to draw from instead of guessing.
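To make the structured-markup point concrete, here is a minimal sketch of the kind of machine-readable entity data involved: a schema.org Organization record emitted as JSON-LD. The brand name, URLs, and contact details are hypothetical placeholders, and a real deployment would extend the record with product- or location-specific types.

```python
import json

# A minimal sketch of schema.org structured markup for a hypothetical brand.
# Publishing JSON-LD like this in a page's <head> gives crawlers and AI models
# explicit, machine-readable facts to draw from instead of inferring (or
# inventing) details from surrounding prose.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Hotels Ltd",  # hypothetical brand name
    "url": "https://www.example.com",
    "sameAs": [  # consistent entity references across authoritative platforms
        "https://en.wikipedia.org/wiki/Example",
        "https://www.linkedin.com/company/example",
    ],
    "contactPoint": {
        "@type": "ContactPoint",
        "telephone": "+1-555-0100",  # placeholder number
        "contactType": "customer service",
    },
}

# Wrap the record as a <script> tag ready to embed in an HTML page.
json_ld = (
    '<script type="application/ld+json">'
    f"{json.dumps(organization, indent=2)}"
    "</script>"
)
print(json_ld)
```

The `sameAs` links are doing much of the anti-hallucination work here: they tie the entity on your site to the same entity on platforms that models already treat as authoritative, which reinforces the consistent entity data described above.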
Used in context
How you might use this term
“A hotel chain discovered that ChatGPT was fabricating amenities and policies for several of its properties. After the chain published comprehensive, structured data for each location with schema markup, the hallucination rate dropped by 80% within six weeks of the next training data refresh.”
Related terms
Explore connected concepts
Authority Signals
The collective evidence that demonstrates a brand's credibility, expertise, and trustworthiness to AI systems and search engines. Authority signals include expert authorship with verifiable credentials, citations from reputable sources, industry awards, professional certifications, domain longevity, a strong backlink profile, and consistent representation across authoritative platforms such as Wikipedia, industry publications, and government databases.
E-E-A-T
A quality framework standing for Experience, Expertise, Authoritativeness, and Trustworthiness, originally defined by Google for search quality evaluation, now increasingly relevant to AI-generated content curation. E-E-A-T is not a single metric but a collection of signals: first-hand experience with a topic, demonstrated professional expertise, recognised authority within a field, and overall trustworthiness of both the content and the publisher.
Grounding
The process of anchoring AI outputs to verified, factual source material rather than allowing the model to generate responses purely from its parametric knowledge. Grounded AI responses include verifiable claims backed by cited sources, reducing the risk of hallucination and improving accuracy. Google's Gemini and Perplexity AI both use grounding extensively.
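As a rough sketch of how grounding works in practice, the following example retrieves passages from a tiny in-memory corpus and builds a prompt that constrains a model to answer only from those sources, with citations. The corpus, the keyword-overlap retriever, and the prompt wording are all simplified assumptions for illustration, not any particular vendor's pipeline.

```python
# Grounding sketch: answer from retrieved, citable sources rather than
# parametric knowledge alone. Everything below is a toy stand-in.

SOURCES = {
    "pricing-page": "The Standard plan costs $29/month and includes 5 seats.",
    "faq": "Refunds are available within 30 days of purchase.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank sources by naive keyword overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        SOURCES.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved evidence."""
    evidence = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(question))
    return (
        "Answer using ONLY the sources below. Cite the source id for every claim. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{evidence}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("How much does the Standard plan cost?"))
```

The key design choice is the final instruction: by requiring a citation for every claim and an explicit "I don't know" fallback, the prompt makes ungrounded answers detectable, which is what separates grounded responses from plausible-sounding fabrication.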