The shift no one is optimising for
When a procurement manager asks ChatGPT "what's the best CRM for a small UK accountancy firm?", something happens in the half second before the answer arrives that almost no marketing team is currently optimising for. The model does not pass that question to a search engine and pick the top result. It quietly breaks the query into five, ten, sometimes more than twenty separate sub-searches, runs each one independently, weighs the results, and then synthesises a single recommendation.
This is called query fan-out, and it is now the single most important mechanic in Generative Engine Optimisation. Research published by Peec AI between Q3 2025 and Q1 2026 found that the average fan-out length of ChatGPT queries more than doubled in just four months. Brands that cannot answer the sub-questions consistently across the web are being silently filtered out before the user sees any result at all.
If your GEO strategy is still focused on "appearing in ChatGPT", you are optimising for the wrong layer. You need to be optimising for the dozen invisible questions the model asks on the user's behalf.
Three converging changes that broke the old playbook
For the last two years, most GEO advice has revolved around a fairly simple idea. Write clear content. Add FAQ schema. Get cited by reputable third parties. Earn brand mentions. All of that is still correct, and it still works. But the underlying mechanics have moved on, and three things have happened in quick succession that change what a winning GEO strategy actually looks like in 2026.
The first is the maturation of retrieval-augmented generation. Models no longer rely solely on what they learned during training. They reach out to live indexes, retrieve content in real time, and assemble answers from whatever they find. Recency now matters in ways it did not eighteen months ago. A guide published in 2024 with no updates is losing ground to a refreshed 2026 article on the same topic, even if the older piece has more backlinks.
The second is query fan-out itself. The model is no longer answering one question. It is answering ten. If your content can clearly answer one of those sub-questions, you get cited. If it can answer three of them, you get recommended. If it cannot answer any of them in a structurally clean way, you are excluded from the result entirely, and there is no impression to count.
The third is the rise of agentic commerce. ChatGPT activated shopping features in early 2026. Google has launched its Universal Commerce Protocol with Walmart, Target, Shopify and more than twenty other partners backing it. McKinsey projects that AI-mediated purchases could drive between three and five trillion dollars in global commerce by 2030. The question is no longer "will AI recommend my brand". The question is "will AI buy on the customer's behalf without ever sending them to my website".
Together, these shifts mean GEO is no longer a content marketing tactic bolted onto a traditional SEO strategy. It is becoming the primary discovery layer for an increasing share of B2B and B2C purchase decisions in the UK and beyond.
How fan-out actually decides whether your brand appears
Imagine a procurement manager asks Gemini "what's the best external attack surface management platform for a mid-sized UK financial services firm?". Behind the scenes, the model is likely to run something close to a dozen sub-searches. It will look for the category itself, hunting for definitions and overviews. It will search for vendor lists. It will search for comparison content. It will pull from review sites and aggregator pages. It will look for case studies in financial services specifically. It will check for compliance considerations relevant to the UK, including FCA expectations and GDPR. It will search for pricing benchmarks, fit by company size, integration with adjacent tools, and known limitations of the leading vendors.
For your brand to appear in the final answer, you need to be present, consistent, and clear across most of those sub-searches. Not all of them. But enough of them that when the model assembles the response, your brand keeps surfacing in different contexts. The model interprets that consistency as a credibility signal. If you only appear in your own marketing copy and one third-party review, you are an ambiguous source. The model defaults to vendors it can verify across multiple independent angles.
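The consistency signal described above can be sketched in a few lines. This is a minimal illustration, not how any engine is actually implemented: the sub-queries and vendor names are invented, and the "credibility" measure is simply the number of independent sub-searches in which a vendor appears.

```python
from collections import Counter

# Hypothetical example: each sub-search returns the vendors it surfaced.
# All sub-queries and vendor names here are invented for illustration.
sub_search_results = {
    "category overview":         ["VendorA", "VendorB", "VendorC"],
    "vendor lists":              ["VendorA", "VendorB"],
    "comparison content":        ["VendorA", "VendorC"],
    "review aggregators":        ["VendorA", "VendorB"],
    "financial services cases":  ["VendorB"],
    "UK compliance (FCA/GDPR)":  ["VendorA"],
    "pricing benchmarks":        ["VendorB", "VendorC"],
}

# Count how many independent sub-searches each vendor appears in.
coverage = Counter(v for vendors in sub_search_results.values() for v in vendors)

# Rank vendors by breadth of coverage: consistency across angles, not
# depth on any single page, is treated as the credibility signal.
ranked = coverage.most_common()
print(ranked)  # breadth counts: VendorA 5, VendorB 5, VendorC 3
```

VendorA and VendorB surface across five of the seven angles and would plausibly both be named; VendorC, visible from only three, is the kind of brand that gets silently dropped.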
This is why a single well-optimised landing page is no longer enough. The brands winning in AI search are the ones that have built a deliberate, distributed footprint across the sub-question landscape that defines their category.
The five layers AI agents actually evaluate
Recent research from ARGEO and others has begun to formalise how agentic systems evaluate brands, and the picture that is emerging looks something like five distinct layers.
Identity is the foundational layer. Can the AI confidently say what your company is, what it does, and who it serves, in a single sentence, without ambiguity? If your About page, your LinkedIn, your Crunchbase entry, your Wikipedia mention, and your press coverage all describe you slightly differently, you are creating noise the model has to resolve. It will often resolve it by picking a competitor instead.
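The identity check lends itself to a simple self-audit. The sketch below, using invented profile text, compares your one-line description across public sources with a plain string-similarity ratio; a real audit would use more sources and a semantic comparison, but the principle is the same.

```python
from difflib import SequenceMatcher

# Hypothetical example: one-line company descriptions pulled from
# public profiles. Sources and wording are invented for illustration.
profiles = {
    "website":    "Cloud accounting software for small UK accountancy firms.",
    "linkedin":   "Cloud accounting software for small UK accountancy firms.",
    "crunchbase": "Fintech platform for finance teams.",
}

# Treat the website description as the baseline and flag divergence.
baseline = profiles["website"].lower()
for source, desc in profiles.items():
    ratio = SequenceMatcher(None, baseline, desc.lower()).ratio()
    status = "consistent" if ratio > 0.8 else "DIVERGENT"
    print(f"{source:10s} {ratio:.2f} {status}")
```

The divergent Crunchbase entry is exactly the kind of noise the model has to resolve, and often resolves against you.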
Authority is the second layer, and this is the GEO version of E-E-A-T. The model is looking for signals that you have genuine expertise in the area being asked about. Original research, proprietary data, named experts who appear in third-party publications, conference talks, and academic citations all feed into this. Self-published whitepapers do not count for nearly as much as a quote in a respected trade publication.
Specificity is the third layer. AI engines extract claims at the fact level, not at the page level. A 3,000-word thought leadership piece without specific, citable, standalone facts is essentially invisible to a generative engine. The same content broken into clearly stated claims with numbers, dates, and sources gets cited repeatedly. This is why the sentence, not the page, is becoming the new unit of GEO optimisation.

Recency is the fourth layer. Models now weigh freshness heavily for time-sensitive queries. If your category content has not been updated in eighteen months, you are competing at a disadvantage against any vendor publishing fresh material with a clear "last updated" timestamp.
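A crude version of fact-level extraction can be demonstrated in a few lines. This sketch, over invented marketing copy, keeps only sentences containing a digit, which is a rough proxy for the standalone, citable claims an engine can lift; real extraction is far more sophisticated, but the contrast it shows is the point.

```python
import re

# Hypothetical example: the text is invented marketing copy.
text = (
    "Our platform is trusted by leading firms. "
    "We monitor 4,200 external assets per customer on average. "
    "Scanning quality matters a great deal. "
    "Mean time to detect a new exposed service fell to 11 minutes in 2025."
)

# Split into sentences, then keep those containing a concrete figure.
sentences = re.split(r"(?<=[.!?])\s+", text)
citable = [s for s in sentences if re.search(r"\d", s)]

for claim in citable:
    print(claim)
# Only the two sentences with concrete figures survive the filter;
# the vague "trusted by leading firms" lines are invisible.
```

The two vague sentences are the thought-leadership filler; the two that survive are the ones a generative engine can cite.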
Verifiability is the fifth layer, and increasingly the most important. Can the agent independently verify your claims? This is where Anthropic's Model Context Protocol becomes important. MCP-enabled agents can connect directly to brand-published servers and pull real-time data straight from the source. Brands publishing MCP servers in 2026 are gaining a structural advantage that is invisible in any traditional SEO audit.
What this means for your content strategy in practice
If query fan-out is the new mechanic, then content strategy has to evolve from "ranking pages" to "covering a question landscape". This is a different exercise to traditional SEO.
It starts with mapping the fan-out itself. For each priority topic, list the ten to twenty sub-questions an AI model is likely to generate. Test this directly by opening ChatGPT, Perplexity, and Gemini and asking the same parent question across all three. Pay attention to what each engine returns, what sources it cites, and what angles it covers. The Otterly AI report from late 2025 found that Google AI Mode draws from a substantially different source pool than Google's own AI Overviews. Your traditional ranking does not guarantee visibility in AI-generated recommendations. You need to test each surface independently.
Once you have the fan-out mapped, audit your existing content against it. For each sub-question, do you have a piece of content that answers it clearly, with specific facts, in a structurally clean way? In our experience auditing UK businesses, most have strong coverage on three or four sub-questions and complete blind spots on the rest. Closing those gaps is usually the highest-leverage GEO work you can do in a quarter.
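The audit step above amounts to a simple gap report. In this sketch the sub-questions, URLs, and coverage are all invented, but the exercise of mapping each sub-question to a piece of content that answers it, then listing the blind spots, is exactly what is described.

```python
# Hypothetical example: a mapped fan-out for one priority topic.
# Sub-questions and URLs are invented for illustration.
fan_out = [
    "what is external attack surface management",
    "EASM vendor comparison",
    "EASM pricing benchmarks",
    "EASM for UK financial services",
    "EASM FCA and GDPR considerations",
]

# Which sub-questions existing content already answers cleanly.
content_map = {
    "what is external attack surface management": "/guides/what-is-easm",
    "EASM vendor comparison": "/compare/easm-vendors",
}

# Everything unmapped is a blind spot to close.
gaps = [q for q in fan_out if q not in content_map]
print(f"covered {len(content_map)}/{len(fan_out)}; gaps:")
for q in gaps:
    print(" -", q)
```

A report like this, one per priority topic, turns "improve our GEO" into a concrete backlog for the quarter.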
Then comes the distribution layer. Your own content is necessary but rarely sufficient. AI engines tend to favour and cite third-party, independent sources over brand-owned content, often by a significant margin. This is why digital PR has quietly become one of the most valuable inputs into a GEO programme. A single substantive mention in a respected trade publication can outweigh a year of self-published blog posts in terms of citation impact.
Finally, build a measurement layer that does not rely on traditional analytics. Your Google Analytics dashboard cannot see a recommendation that happens inside ChatGPT. You need AI search performance tracking, brand citation monitoring across the major engines, and share of voice tracking against your direct competitors. Platforms like Otterly.ai, Profound, and similar tools are now essential infrastructure, not nice-to-haves.
The window is closing faster than most teams realise
Gartner is forecasting that traditional search engine volume will drop by twenty-five percent by the end of 2026, with longer-term declines projected at fifty percent or more. McKinsey research finds that forty-four percent of consumers now use AI as the main source of information for purchasing decisions. For B2B, Walker Sands data shows that ninety percent of buyers integrate generative AI somewhere in their buying journey. AI-referred web sessions jumped 527 percent year over year in the first five months of 2025 according to Previsible's AI Traffic Report.
These are not gradual numbers. They describe a behavioural shift that has already happened, and that is now compounding month over month. The brands that built early visibility in AI search are seeing sales-qualified lead volumes from generative engines that did not exist twelve months ago. Some are reporting that AI-sourced leads convert at materially higher rates than equivalent leads from traditional search, because the model has effectively pre-qualified the prospect during the conversation. The brands that have not started are now competing on a surface they cannot see, against competitors they cannot benchmark, for buying decisions they will never know happened.
How AwarenessAI helps you close the gap
This is the gap AwarenessAI exists to close. Our AIRO methodology, AI Recommendation Optimisation, is built specifically around the mechanics described in this article. We map the fan-out for your category, audit how the major engines currently represent your brand across ChatGPT, Gemini, Perplexity, and Meta AI, identify the structural and content gaps that are excluding you from recommendations, and build a programme to close them across owned content, third-party citations, and structured data signals.
If you have not yet seen how your brand is represented in AI search, that is the place to start. The £99 AI Starter Audit is designed for exactly this. It shows you what the engines are currently saying about you, where your competitors are appearing in answers you should be winning, and the highest-leverage moves you can make in the next ninety days to start being recommended rather than ignored.
The search bar still exists. But for an increasing share of your customers, the answer is already somewhere else. The question is whether your brand is in it.