A search ranking is a directional signal: users click, traffic flows. An LLM citation is a different kind of signal: the AI has decided your content is authoritative enough to quote inside the synthesised answer it hands to the user. That second kind of trust is harder to earn and is measured differently. This post unpacks what we have consistently seen work for brands trying to appear in LLM answers, and how it intersects with the SEO discipline you already have.
Context and clear definitions
LLMs build answers by synthesising sources they can parse unambiguously. Content that hedges, that uses branded jargon without definition, or that assumes reader context does not translate well into a training signal.
What works:
- State the concept in one clear sentence before elaborating
- Define domain-specific terms the first time you use them
- Structure content so each paragraph answers one question
- Avoid inside-baseball references unless you are explaining them
The goal is not dumbed-down writing. It is precise writing — the kind that a model can lift a definition from without losing the nuance.
Semantic relevance and NLP optimisation
The old SEO discipline of "keyword density" is a poor fit for how LLMs understand content. What matters now is semantic relevance — using natural language that embeds the contextual entities the LLM also understands as relevant.
What works:
- Write in natural, fully formed sentences, not keyword-stuffed fragments
- Use the contextual entities around your topic — related concepts, technical terms, adjacent brands — that the model associates with your domain
- Structure content around topical depth, not keyword breadth. One page that genuinely covers a topic outperforms five shallow pages that repeat the same phrase.
When the LLM is deciding what to cite on a query, it is asking a semantic question, not a keyword one. Match the register of the question.
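The gap between semantic and keyword matching can be sketched with a toy example: two phrasings of the same question that share no content words at all. The questions and the tiny stopword list below are purely illustrative, not drawn from any real query log.

```python
# Illustrative only: two phrasings of the same question share no
# keywords, which is why lexical matching alone misses semantically
# relevant content. An embedding-based model sees them as near-neighbours.

def keyword_overlap(a: str, b: str) -> set:
    """Return the content words the two texts share (toy stopword list)."""
    stopwords = {"a", "an", "the", "do", "does", "what", "how", "why", "is", "to"}
    tokens_a = {w.strip("?.,").lower() for w in a.split()} - stopwords
    tokens_b = {w.strip("?.,").lower() for w in b.split()} - stopwords
    return tokens_a & tokens_b

q1 = "How do LLMs decide which sources to cite?"
q2 = "What makes a model quote one page over another?"

print(keyword_overlap(q1, q2))  # empty set: no shared keywords, same intent
```

A keyword-era optimiser would treat these two queries as unrelated; an LLM treats them as the same question, which is why topical depth beats phrase repetition.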
FAQs, direct answers, and AI Overviews
AI Overviews (Google's synthesised answer layer at the top of search results) and LLM responses favour content that is already shaped as an answer. FAQ-style formats, structured how-to content, and informational or exploratory intent material tend to get surfaced.
What works:
- FAQ blocks with concrete, specific questions — not generic marketing ones
- Informational or exploratory intent content — "what is", "how does", "why does" — rather than transactional "buy now"
- Snippet-ready formatting: short paragraphs, clear subheadings, bulleted lists where appropriate
- Structured data (Schema.org FAQPage and HowTo) that explicitly marks up the answer surfaces
The LLM is not just reading prose. It is looking for content already pre-formatted as a quotable answer.
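As a minimal sketch, a Schema.org FAQPage block marks each question-answer pair explicitly in JSON-LD, embedded in the page inside a `<script type="application/ld+json">` tag. The question and answer text here are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does onboarding take for a mid-size team?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most mid-size teams complete onboarding in two to three weeks, depending on integration complexity."
      }
    }
  ]
}
```

Each `Question`/`acceptedAnswer` pair is exactly the kind of pre-formatted, quotable unit the section above describes: concrete question, self-contained answer, no surrounding context required.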