How to get cited by Claude for brand queries

April 13, 2026 · ai-visibility · 7 min read

Claude is the engine most brands ignore. That is a mistake. The Claude app reached roughly 7.38 million monthly active users by the end of 2025, and Anthropic's revenue went from a $1 billion run-rate in early 2025 to $14 billion annualized by February 2026. The growth curve is steeper than any other frontier lab's, and the audience skews toward developers, researchers, and technical buyers. For B2B brands with a technical ICP, a Claude citation is worth more per impression than a ChatGPT citation. This is the playbook we use to get Soar clients cited inside Claude's answers.

Why Claude matters more than the MAU number suggests

Ranked by raw users, Claude sits at the bottom of the frontier tier. ChatGPT has hundreds of millions weekly. Claude has roughly 7 million on the app side. So why prioritize it?

Two reasons:

  1. Audience profile. Claude's heaviest users are developers writing code, researchers working through papers, and enterprise buyers evaluating technical products. That audience converts at a higher rate per visit than the mixed consumer base on ChatGPT.
  2. Revenue trajectory. Anthropic went from a $1 billion run-rate in early 2025 to $5 billion by August, $9 billion by end of 2025, and $14 billion annualized by February 2026. Brands cited in Claude today are buying into a compounding channel, not a plateaued one.

For a technical B2B SaaS client, we will often weight Claude ahead of Perplexity and Gemini in the first 90 days. For a mass-market DTC brand, we put Claude further down the list. An engine matters only to the extent that its audience overlaps with your buyers.

How Claude's citation pipeline actually works

Claude does not cite the way ChatGPT does. Anthropic runs three distinct crawlers, each feeding a different part of the answer pipeline. The full breakdown is in Anthropic's support docs.

ClaudeBot is the training-data crawler. Content it indexes becomes part of Claude's base knowledge and can surface in answers even when web search is off.

Claude-User is the live-fetch agent. When a user asks Claude to read a URL or Claude needs fresh content, Claude-User fetches in real time. This is the crawler that matters for time-sensitive queries.

Claude-SearchBot feeds Anthropic's in-product search index, which Claude queries when a user enables the web search tool. That tool is available on Claude Opus 4.6 and Sonnet 4.6 per the Anthropic docs.

Anthropic has deprecated two older crawlers: Claude-Web and anthropic-ai. If you still have rules for those, leave them in place but do not expect them to do anything useful.
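Before optimizing for any of these crawlers, it helps to confirm which ones are actually reaching your site. Here is a minimal sketch that tallies Anthropic user agents in a standard web server access log. The log lines and counting approach are illustrative assumptions: it does plain substring matching against the user-agent names above, not full log parsing.

```python
from collections import Counter

# The three active Anthropic crawlers, plus the two deprecated
# ones so any lingering traffic from them still gets labeled.
ANTHROPIC_AGENTS = [
    "ClaudeBot",         # training-data crawler
    "Claude-SearchBot",  # in-product search index crawler
    "Claude-User",       # live-fetch agent
    "Claude-Web",        # deprecated
    "anthropic-ai",      # deprecated
]

def classify_anthropic_hits(log_lines):
    """Count hits per Anthropic crawler via substring match on each log line."""
    counts = Counter()
    for line in log_lines:
        for agent in ANTHROPIC_AGENTS:
            if agent in line:
                counts[agent] += 1
                break  # one crawler per request line
    return counts
```

Run it weekly against your access logs: if ClaudeBot shows up but Claude-SearchBot never does, your content may be in Claude's base knowledge yet absent from the in-product search index, which changes which tactics below matter most.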

The mental model: ChatGPT does live search through Bing's index, while Claude does live search through Anthropic's own index. The retrieval layer is different, the source pool is different, and the optimization tactics are different. For the cross-engine comparison, read ChatGPT vs Claude vs Perplexity vs Gemini.

What content Claude actually cites

After running hundreds of brand prompts through Claude, we see the same source types appear repeatedly. The pattern is consistent enough to plan around.

Technical documentation. If you sell a developer product, your docs site is your strongest citation surface. Clear H1s, plain English above code, and an obvious primary use case stated upfront.

Long-form explainers. Claude prefers substance over listicles. A 2,000-word article outperforms five 400-word posts on the same keywords. Depth is a citation signal.

Reddit threads. Claude cites Reddit heavily. Threads where your brand is discussed by named users are strong candidates. The Reddit-Anthropic relationship gets its own section below.

Published research and arXiv papers. arXiv papers routinely show up in Claude's answers for technical topics. Medium posts do not.

Wikipedia. If your brand has a page, it feeds Claude's base knowledge directly. If it does not and you meet the notability bar, commission one properly.

GitHub repositories. For developer tools, a well-maintained repo is a citation asset. Clear READMEs, honest changelogs, and visible commit activity all signal trust to Claude's retrieval layer.

Why blocking ClaudeBot is usually a mistake

Anthropic publicly commits to respecting robots.txt. You have a real choice: allow the three crawlers, block them, or mix. The correct default for most brands is to allow all three.

Here is the minimal block rule if you want ClaudeBot out of your training-data pipeline:

User-agent: ClaudeBot
Disallow: /

Blocking ClaudeBot means Claude never learns about your brand from your pages. Blocking Claude-SearchBot means you never appear in the in-product search index. Blocking Claude-User means Claude cannot fetch your pages when a user explicitly asks.
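If you go the other way and want to allow all three active crawlers explicitly, the rules look like this. Grouping several User-agent lines over a single directive is standard robots.txt syntax, and an empty Disallow permits everything:

User-agent: ClaudeBot
User-agent: Claude-User
User-agent: Claude-SearchBot
Disallow:

Note that if no rule group matches a crawler at all, it is allowed by default anyway; the explicit group simply makes your policy unambiguous and protects it from a future blanket Disallow.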

We have seen clients arrive with all three blocked from a blanket anti-AI rule written in early 2024, and watched their Claude mention rate go from zero to meaningful within 30 days of removing the blocks. The argument for blocking is IP protection. The argument for allowing is visibility. For brands selling a product, visibility wins. The full playbook is in the AI bots robots.txt guide.

The Reddit-Anthropic lawsuit and what it means for your strategy

On June 4, 2025, Reddit filed suit against Anthropic, alleging unlicensed scraping of Reddit data between December 2021 and October 2024. Reddit had licensing deals with Google and OpenAI but not with Anthropic. The case is ongoing.

Two practical implications:

  1. Reddit content published before the alleged cutoff is probably already baked into Claude's training data regardless of the legal outcome.
  2. The lawsuit has not changed the fact that Claude cites Reddit threads regularly. Claude-SearchBot and Claude-User both continue to pull from Reddit in real time.

The strategic takeaway: Reddit remains a strong lever for Claude visibility, but do not assume volume alone will compensate for retrieval shifts that may come out of the lawsuit. Aim for high-quality, high-trust presence, not high-volume posting. The broader picture is in how Reddit became the biggest single source of LLM citations.

How to measure Claude visibility specifically

Most off-the-shelf AI visibility tools measure ChatGPT first, Gemini second, and Claude as an afterthought. For a serious Claude program, you need a tool that queries Claude directly and records citations at the source level. Parse is the tool we use. HubSpot AEO Grader covers Claude at a basic level. Profound and Otterly.AI also track Claude.

Build a prompt set of 20 to 50 brand queries. Run them weekly. Record mention rate, citation rate, and source mix (which URLs Claude pulls when it answers). Source mix is the most important metric, because it tells you which sources retrieval is actually reaching. If Claude cites you through a single Wikipedia paragraph, your source pool is fragile. If it pulls from your docs, blog, Reddit, and GitHub, the brand is durable. For the prompt-set methodology, read how to find the prompts that matter for ChatGPT and Claude visibility.
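The three metrics above are simple to compute once you record each weekly run. A minimal sketch, assuming one record per prompt with a "mentioned" flag and the list of cited URLs (this record shape is our assumption, not the export format of Parse, Profound, or any other tool):

```python
from collections import Counter
from urllib.parse import urlparse

def visibility_metrics(runs):
    """Compute mention rate, citation rate, and source mix for one week.

    Each run is a dict:
      "mentioned":  bool - did Claude's answer name the brand?
      "cited_urls": list - URLs Claude cited for the brand (may be empty)
    """
    total = len(runs)
    mentions = sum(1 for r in runs if r["mentioned"])
    citations = sum(1 for r in runs if r["cited_urls"])
    # Source mix: how often each domain appears across all citations.
    source_mix = Counter(
        urlparse(url).netloc for r in runs for url in r["cited_urls"]
    )
    return {
        "mention_rate": mentions / total,
        "citation_rate": citations / total,
        "source_mix": source_mix,
    }
```

Plot the three numbers week over week: a rising mention rate with a source mix concentrated in one domain is the fragile pattern described above, and it tells you which surface to build out next.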

Conclusion

Claude is the highest-leverage engine for technical B2B visibility in 2026, and it is also the one almost nobody works on deliberately. Brands that treat Claude as a first-class target now are buying into a compounding channel with an audience that converts faster than any other AI engine. The mechanics are clear: allow the three crawlers, build citable long-form content, show up on Reddit, GitHub, and in well-maintained docs, and measure weekly. Discipline, applied to an engine most teams forgot to look at.

How Soar saves you time and money

Running a Claude-first visibility program internally takes most brands two to three months of research before anyone writes the first intervention. The work is not hard, but it is fragmented across robots.txt rules, content audits, Reddit seeding, docs cleanup, and tool selection. Most teams pick the wrong anchor tool, spend six weeks measuring the wrong thing, and then rebuild the program from scratch. We have already absorbed those costs across dozens of client engagements. The baseline report lands in week one, the first intervention ships by day 30, and Claude mention rate shows up as a standalone metric from day one.

The bigger saving is audience-fit. We only prioritize Claude for clients whose ICP actually lives in Claude's user base. For a DTC brand selling to consumers, we will recommend against spending Claude budget and put that effort into ChatGPT or Google AI Overviews instead. That single qualification conversation has saved clients tens of thousands of dollars in wasted content work that would have moved a metric nobody in their buying cycle was paying attention to. Request a proposal and we will tell you whether Claude is the right engine to prioritize for your brand, and if it is, how the first 90 days should run.
