Research
How Generative AI Infers Trust When Answering Informational Questions
In an observational study that posed 60 identical prompts to three leading AI models, over 90% of responses indicated that trust is not explicitly evaluated. Instead, the systems infer trust through signals of repetition, relevance, and consensus. Authority and expertise were explicitly referenced in fewer than 15% of responses.
This research focuses on behavioural outcomes rather than on the systems' internal claims. It examines how generative AI systems describe and explain their own approach to trust when answering informational questions, and what this reveals about how information is synthesised and presented to users.
DOI: https://doi.org/10.13140/RG.2.2.14512.83200