Definition
What this term means
A parameter that controls the randomness and creativity of AI model outputs by rescaling the probability distribution the model samples its next word from. Low temperature (near 0) produces highly deterministic, consistent responses, with the model picking the most probable next word almost every time. High temperature (near 1 or above) flattens the distribution and introduces more variation and creativity, allowing less probable words to be selected, which can lead to more diverse but potentially less accurate outputs.
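Mechanically, temperature divides the model's raw scores (logits) before they are turned into probabilities and sampled. The sketch below is a minimal illustration in plain Python, using made-up logits for three hypothetical next words rather than output from any real model, to show how low temperature concentrates choices on the top option while high temperature spreads them out.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample one index from raw logits after temperature scaling.

    Temperature near 0 approaches greedy decoding (always the top logit);
    temperature above 1 flattens the distribution, so unlikely choices
    are picked more often.
    """
    # Scale the logits, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index in proportion to its probability.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Illustrative logits for three hypothetical next words.
logits = [4.0, 2.5, 0.5]
for t in (0.1, 0.7, 1.5):
    picks = [sample_with_temperature(logits, t) for _ in range(1000)]
    share_top = picks.count(0) / len(picks)
    print(f"temperature={t}: top word picked {share_top:.0%} of the time")
```

Run repeatedly, the low-temperature setting picks the top word nearly every time, while the high-temperature setting lets the second and third words through far more often, which is the variability the rest of this entry describes.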
Why it matters
The business impact
Temperature settings directly affect how consistently your brand is described. At low temperature, an AI model will reliably repeat the same well-established facts about your brand. At high temperature, it may improvise, potentially introducing inaccuracies or mentioning a competitor instead. Brands with strong, unambiguous authority signals are more likely to be represented consistently even at higher temperature settings.
Used in context
How you might use this term
“Testing revealed that at default temperature settings, an AI model consistently recommended a client's brand for their core category. At higher temperatures, the recommendations became unpredictable, sometimes omitting the client entirely. Strengthening entity signals and authority content reduced this variability significantly.”
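A variability test like the one quoted above can be scripted by running the same prompt several times at each temperature setting and counting brand mentions. The sketch below is one possible setup using the OpenAI Python client; the model name, prompt, placeholder brand name, and temperature values are illustrative assumptions, not part of this glossary's methodology.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical category prompt; replace with your own core category.
PROMPT = "What are the best project management tools for small agencies?"

def responses_at(temperature, runs=5, model="gpt-4o-mini"):
    """Collect several completions for the same prompt at one temperature."""
    outputs = []
    for _ in range(runs):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=temperature,
        )
        outputs.append(resp.choices[0].message.content)
    return outputs

for t in (0.2, 0.8, 1.2):
    answers = responses_at(t)
    mentions = sum("YourBrand" in a for a in answers)  # placeholder brand name
    print(f"temperature={t}: brand mentioned in {mentions}/{len(answers)} responses")
```

Comparing mention rates across temperatures gives a rough picture of how stable a brand's representation is once the model is allowed to vary its answers.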
Related terms
Explore connected concepts
LLM
A type of artificial intelligence model trained on vast datasets of text to understand, generate, and reason about human language. LLMs power the AI assistants and generative search tools, including ChatGPT, Google Gemini, Claude, and Perplexity, that are rapidly becoming the primary way people discover products, services, and information online.
Inference
The process by which a trained AI model generates an output, such as a text response, recommendation, or summary, based on input it receives. Every time you ask ChatGPT a question or Perplexity runs a search, the model is performing inference: processing your input through billions of parameters to produce a relevant response.
Hallucination
When an AI model generates information that sounds plausible but is factually incorrect, fabricated, or unsupported by its training data. Hallucinations can range from minor inaccuracies, such as attributing the wrong feature to your product, to entirely fabricated claims, such as inventing awards or customer testimonials that do not exist.