LLM SEO: how to rank your business in ChatGPT, Claude, and other LLMs
How LLMs decide which brands to cite, and the four signals that matter most when ChatGPT, Claude, Perplexity, and Google AI Mode answer commercial queries.
Originally published April 7, 2026
Buyers no longer choose vendors only from a Google SERP. They paste the question into ChatGPT, Claude, Perplexity, or Google AI Mode and read the model's summary of who is worth considering. If your brand is not in that summary, you are not in the consideration set — regardless of how your SEO looks.
Soar is a community marketing agency that has run 4,200+ community campaigns across 280+ brands since 2017. We treat AI visibility as a downstream effect of the public signals models actually retrieve from — not as an SEO subdiscipline.
Why ranking in LLMs matters now
The marketing question changed in 2025. It is no longer "do we rank for our category keyword on Google" but "what do the AI surfaces say when a buyer types the question into the box." 48% of Google searches trigger AI Overviews as of March 2026, and 93% of Google AI Mode searches end without a single click to an external site. That is buyer behavior moving upstream of your website by an entire interaction.
The category-level consequence: where Google shows ten links, LLMs shortlist two or three brands. ChatGPT retrieves roughly six times more pages than it cites — 85% of retrieved pages are discarded before the answer is written. The shortlist effect compounds on itself, because brands that are surfaced early get more mentions, which raises their entity weight, which keeps them surfaced. Being absent from this layer is not a soft loss of awareness; it is a hard exclusion from the consideration set your buyers see before they ever consider opening a tab.
How LLMs actually decide who to cite
There is no public ranking formula. OpenAI, Anthropic, and Google have not published the weights, and the systems shift with every model update. What we can read is product behavior, retrieval logs, and large-scale citation studies — and the inputs that consistently correlate with being cited are entity strength, third-party corroboration, freshness, and on-page structure.
The pattern that holds across platforms: models pull from the open web through citation-style retrieval, score candidate sources for relevance and trust, and then choose a handful of sources to ground the answer. Backlinks barely matter at this layer. Unlinked brand mentions correlate at 0.664 with AI citations versus 0.218 for backlinks — almost the inverse of classic SEO. The strongest single overall predictor of being cited is brand search volume itself (0.334 correlation), because the model has more data points to weight you with when more people are searching for and writing about you. That is why "do more SEO" alone has not been moving AI visibility for the brands we work with.
The four signals that consistently correlate with citations
Brand mentions across the open web. Models extract entity facts from repeated, consistent descriptions across editorial, community, and review surfaces. Volume matters less than consistency; press releases, by contrast, are cited in AI answers only 0.04% of the time.
Third-party review and directory profiles. Brands with active G2, Capterra, or Trustpilot profiles are roughly 3x more likely to be cited in AI answers. ChatGPT in particular leans on review aggregators when answering "best X" queries.
Community presence on the platforms each model retrieves from. Perplexity pulls 47% of its top-10 citations from Reddit. Google AI Mode pulls Quora as the #4 most-cited domain (7.25% of responses). Brands with no community footprint are systematically absent from those answers.
Consistent entity descriptions. When your site, profiles, directory listings, and community descriptions frame the brand differently, the model gets a weaker signal and cites a competitor whose entity is more legible.
Platform-by-platform: where each model looks
The single most expensive AI visibility mistake is treating it as one channel. The Ahrefs 78.6M URL study found that only 11% of domains are cited by both ChatGPT and Perplexity, and only 7 websites appear in the top 50 across all three major platforms. A brand visible in ChatGPT can be invisible in Perplexity for entirely structural reasons.
| Platform | Primary retrieval mix | Practical implication |
|---|---|---|
| ChatGPT | Wikipedia, Forbes, G2, Reddit, editorial press | Build a clean Wikipedia entity, earn G2/Capterra coverage, and target editorial mentions in trade press. |
| Perplexity | Reddit (47% of top-10 cites), industry blogs, official docs | If your buyers use Perplexity, you need an active Reddit footprint in the subreddits where your category is discussed. |
| Google AI Mode | Reddit, Quora (#4 most-cited domain), publisher sites, Google Business Profile | Quora answers and Reddit threads are now first-class citation channels for AIO; treat them as distribution, not Q&A. |
| Claude | Anthropic web search (Brave-based), official docs, editorial sources, GitHub for technical queries | Documentation quality and editorial coverage outweigh community signals for technical and policy queries. |
The audit move is straightforward: run your top 30-50 buyer prompts across all four systems, log which sources each one cites, and look for the gap. If competitors are showing up in Perplexity via Reddit threads you are not in, the fix is not on-page — it is community distribution. If they are in ChatGPT via G2 you do not occupy, the fix is review-platform presence. The diagnosis tells you which lever to pull. We covered the full mechanics of running this in how to audit your brand's AI search visibility.
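The logging-and-gap step of that audit can be sketched as a small script. This is an illustrative sketch, not a real integration: the `audit_rows` data, the `example.com` domain, and the platform names are all placeholder assumptions standing in for whatever your own prompt runs produce.

```python
from collections import defaultdict

# Hypothetical audit log: one row per (prompt, platform) run, listing the
# domains the model cited. Rows and domains here are illustrative only.
OUR_DOMAIN = "example.com"

audit_rows = [
    {"prompt": "best community marketing agency", "platform": "chatgpt",
     "cited": ["g2.com", "forbes.com", "example.com"]},
    {"prompt": "best community marketing agency", "platform": "perplexity",
     "cited": ["reddit.com", "competitor.io"]},
    {"prompt": "top reddit marketing services", "platform": "perplexity",
     "cited": ["reddit.com", "competitor.io"]},
]

def gap_report(rows, our_domain):
    """Per platform: share of prompts where we were cited, plus the
    source domains that carried the answers we were absent from."""
    present = defaultdict(int)
    total = defaultdict(int)
    missing_sources = defaultdict(set)
    for row in rows:
        platform = row["platform"]
        total[platform] += 1
        if our_domain in row["cited"]:
            present[platform] += 1
        else:
            missing_sources[platform].update(row["cited"])
    return {
        p: {"presence_rate": present[p] / total[p],
            "gap_sources": sorted(missing_sources[p])}
        for p in total
    }

report = gap_report(audit_rows, OUR_DOMAIN)
```

The `gap_sources` list is the diagnosis: if it is dominated by reddit.com on Perplexity, the lever is community distribution; if it is G2 or Capterra on ChatGPT, it is review-platform presence.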
What to fix on your own site
Your site is one of several inputs, but it is the one you fully control — so make it the easiest source for a model to retrieve from. Three changes consistently move citation rate in our client work:
Front-load the claim. The first 30% of page content accounts for 44.2% of all ChatGPT citations. Put the most citable sentence — the one that contains the named entity, the metric, and the source — within the first two paragraphs of every commercial page.
Write 130-160 word sections under question-style H2s. Each section needs to be intelligible without the rest of the page, because retrieval extracts blocks, not whole pages. Add an answer capsule under the heading where it fits naturally; 72% of cited posts in the Search Engine Land study included one.
Keep it fresh. 89.7% of cited pages had been updated within the year and 60.5% were published within the last two. If your category page was last touched in 2023, it is structurally disadvantaged for citation regardless of how authoritative it is.
Schema matters here only when it is attribute-rich. Generic FAQPage markup with two thin Q&As actually underperforms having no schema at all (41.6% citation rate vs 59.8%). Add schema when the page has real Q&A density, not as a checkbox.
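The section-length rule is easy to audit mechanically before publishing. A minimal sketch, assuming a markdown page with `##` headings — the sample page, the 130-160 word thresholds, and the function name are illustrative, not a standard tool:

```python
import re

# Sample page: one H2 section in the 130-160 word band, one far under it.
# Both the content and the thresholds below are assumptions for the sketch.
PAGE = ("## What is community marketing?\n" + "word " * 140 +
        "\n## How much does it cost?\n" + "word " * 40)

def check_sections(markdown, lo=130, hi=160):
    """Return (heading, word_count, within_band) for each H2 section."""
    parts = re.split(r"^## (.+)$", markdown, flags=re.MULTILINE)
    # parts = [preamble, heading_1, body_1, heading_2, body_2, ...]
    results = []
    for heading, body in zip(parts[1::2], parts[2::2]):
        n = len(body.split())
        results.append((heading.strip(), n, lo <= n <= hi))
    return results

for heading, n, ok in check_sections(PAGE):
    print(f"{'OK ' if ok else 'FIX'} {n:>4} words  {heading}")
```

A check like this in the publishing workflow catches the most common failure mode: thin sections that cannot stand alone when retrieval extracts them as blocks.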
What to fix off your site
This is where most of the work lives, and it is the bottleneck for brands that have already done the on-page basics. The off-site signals that move AI citation rate, in roughly the order they pay back:
Community presence on the platforms each model retrieves from. For most B2B and DTC brands, that means earning trust in 10-20 active subreddits and seeding answers on a focused set of Quora questions. This is the slowest input to start and the highest-leverage one to compound — see how community marketing drives AI visibility for the mechanics.
Review platform profiles. If you sell SaaS, your G2 page is a citation channel. If you sell consumer, Trustpilot is. Active profiles with current screenshots, recent reviews, and accurate category placement do work that no amount of on-page schema can substitute for.
Editorial coverage in trade press. Earned mentions in Forbes, TechCrunch, Search Engine Land, or vertical equivalents feed entity strength in a way owned content cannot. 82% of AI citations are earned media, not owned content.
Wikipedia entity (where eligible). Not every brand qualifies, but those that do see disproportionate ChatGPT visibility because Wikipedia is in the retrieval mix on almost every category query.
How long this takes
Be honest with the budget conversation. Site-level fixes — restructuring pages, adding answer capsules, refreshing stale category content — show up in citation behavior within 30 to 60 days. Source-mix fixes — building Reddit presence, earning Quora citations, repairing G2 listings — take four to six months because models retrain on new conversational data on that horizon, and individual posts need time to accumulate the engagement signals that make them retrieval-grade.
The brands that get the cleanest results are the ones that commit to a 6-month minimum and resist measuring week-over-week. AI citation rate is not a paid-ads dashboard; it is closer to content marketing's compounding model. A monthly audit (not weekly) of a fixed prompt set is enough cadence to read direction without amplifying noise. The teams that miss this almost always conclude "AI visibility doesn't work" right before the inflection point.
Is LLM SEO different from regular SEO?
Yes, structurally. Only ~12% of pages cited by AI assistants overlap with Google's top 10. The signals are different — unlinked brand mentions and third-party corroboration matter far more than backlinks, and retrieval rewards entity strength over page-level optimization. A brand can rank #1 in Google for its category and be invisible in ChatGPT for the same query. The two should be measured and resourced as separate channels.
Which LLM should we optimize for first?
The one your buyers use. For most B2B SaaS, that is ChatGPT first, then Perplexity for technical buyers and Claude for policy-sensitive ones. For DTC and consumer, Google AI Mode now dominates because it sits inside the Google search experience. Run a prompt-set audit across all four for a quarter before deciding — assumed buyer behavior is wrong often enough that the audit pays for itself.
Do backlinks still matter for AI visibility?
Backlinks correlate 0.218 with AI citations. Unlinked brand mentions correlate 0.664. Links are not worthless, but they are roughly a third as predictive of being cited as the volume and consistency of mentions across the open web. For most brands, reallocating budget from link-building to community distribution and review-platform presence produces a better AI visibility outcome at the same spend. The full analysis is in backlinks vs brand mentions.
How do we know if it's working?
Run a fixed prompt list of 30-50 buyer queries across all four major platforms on a monthly cadence. Track four things per prompt: presence, citation (with link) vs mention (in prose), description accuracy, and source mix. Lift shows up first as description quality improving, then as mention frequency, then as cited mentions with links. Anything more frequent than monthly is noise; anything less frequent than monthly cannot distinguish a real shift from a model update.
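The four tracked signals roll up into one monthly snapshot, and direction is read from the month-over-month delta rather than any single reading. A minimal sketch — the field names and the two sample months are illustrative assumptions, not a fixed schema:

```python
# Each row is one (prompt, platform) check from the monthly audit pass.
def month_summary(rows):
    """Aggregate the four tracked signals into one monthly snapshot."""
    n = len(rows)
    return {
        "presence_rate": sum(r["present"] for r in rows) / n,
        "cited_rate": sum(r["cited_with_link"] for r in rows) / n,
        "mention_rate": sum(r["mentioned_in_prose"] for r in rows) / n,
        "accurate_description_rate":
            sum(r["description_accurate"] for r in rows) / n,
    }

# Two illustrative months of a (tiny) fixed prompt set.
march = [
    {"present": 1, "cited_with_link": 0, "mentioned_in_prose": 1,
     "description_accurate": 1},
    {"present": 0, "cited_with_link": 0, "mentioned_in_prose": 0,
     "description_accurate": 0},
]
april = [
    {"present": 1, "cited_with_link": 1, "mentioned_in_prose": 1,
     "description_accurate": 1},
    {"present": 1, "cited_with_link": 0, "mentioned_in_prose": 1,
     "description_accurate": 1},
]

delta = {k: month_summary(april)[k] - month_summary(march)[k]
         for k in month_summary(march)}
```

The expected lift sequence maps directly onto these fields: `accurate_description_rate` moves first, then `mention_rate`, then `cited_rate`.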
What does an AI visibility engagement actually cost?
For agency-led programs, expect $5K-$15K per month depending on platform coverage, prompt-set size, and community execution scope. In-house equivalents typically run two to four FTEs across content, community, and analytics. The work is operationally heavy in months 1-3 (audit, content fixes, account infrastructure) and compounds in months 4-12. See AI visibility agency pricing 2026 for the tier breakdown.