Press & Announcements

Leeds-Based Founder Publishes Independent Research on How Generative AI Systems Approximate Trust in Informational Responses

Tom Mason, a Leeds-based founder and Lancaster University student, has published an independent research note examining how generative artificial intelligence systems approximate concepts of trust, credibility, and reliability when responding to informational questions.

15th January 2026 · 2 min read

Tom Mason, a Leeds-based founder and Lancaster University student, has published new independent research exploring how generative artificial intelligence systems appear to decide what information to present as trustworthy.

The research examined how widely used AI tools respond to the same questions over time, analysing sixty identical prompts submitted repeatedly to three major language models. Rather than checking factual accuracy, the study focused on how AI systems explain confidence, credibility, and reliability in their answers.

The findings suggest that what users experience as “trust” in AI-generated responses is rarely based on direct assessment of authority or expertise. Instead, confidence is inferred indirectly through patterns such as how closely an answer matches the question, how often similar information appears across sources, perceived agreement between results, and how content is ranked or retrieved.

References to formal expertise, credentials, or authoritative sources appeared far less frequently, and usually only when they aligned with these broader patterns rather than driving the response themselves.

While the underlying behaviour was broadly consistent across systems, the research found clear differences in how individual AI models describe and justify their answers. Some presented trust as a structured, engineered process, while others highlighted uncertainty, limitations, and the absence of genuine judgement. These differences matter for organisations and decision makers who increasingly rely on AI-generated summaries and recommendations without visibility into how confidence is formed.

The work has been published as an observational research note in PDF format and is intended to support wider understanding of AI-led information discovery, organisational representation, and the limits of AI-generated confidence in commercial and institutional settings.

The publication builds on more than a year of independent research into generative search behaviour and AI recommendation patterns, and informs the work of AwarenessAI, a consultancy focused on improving accuracy, consistency, and trust in how organisations are represented by AI systems.

Media enquiries: tom@awarenessai.co.uk · awarenessai.co.uk


Published by AwarenessAI

