Tom Mason, a Leeds-based founder and Lancaster University student, has published new independent research exploring how generative artificial intelligence systems appear to decide what information to present as trustworthy.
The research examined how widely used AI tools respond to the same questions over time, analysing sixty identical prompts across three major language models. Rather than checking factual accuracy, the study focused on how AI systems explain confidence, credibility, and reliability in their answers.
The findings suggest that what users experience as “trust” in AI-generated responses is rarely based on direct assessment of authority or expertise. Instead, confidence is inferred indirectly through patterns such as how closely an answer matches the question, how often similar information appears across sources, perceived agreement between results, and how content is ranked or retrieved.
References to formal expertise, credentials, or authoritative sources appeared far less frequently than these indirect signals, and usually only when they aligned with the broader patterns rather than driving the responses themselves.
While the underlying behaviour was broadly consistent across systems, the research found clear differences in how individual AI models describe and justify their answers. Some presented trust as a structured, engineered process, while others highlighted uncertainty, limitations, and the absence of genuine judgement. These differences matter for organisations and decision makers who increasingly rely on AI-generated summaries and recommendations without visibility into how confidence is formed.
The work has been published as an observational research note in PDF format and is intended to support wider understanding of AI-led information discovery, organisational representation, and the limits of AI-generated confidence in commercial and institutional settings.
The publication builds on more than a year of independent research into generative search behaviour and AI recommendation patterns, and informs the work of AwarenessAI, a consultancy focused on improving accuracy, consistency, and trust in how organisations are represented by AI systems.
Media enquiries: tom@awarenessai.co.uk
Website: awarenessai.co.uk