How to write a press release that gets cited by AI
The uncomfortable reality for most PR teams: press releases are cited in AI search only 0.04 percent of the time. That is the headline finding from ALM's 2025 research into how LLMs treat wire content. The old model (corporate announcement, push through PR Newswire, hope for pickup) is broken at a rate of 2,500 to 1. Below is what we tell clients to do instead if they want their PR work to actually move visibility inside ChatGPT, Claude, and Perplexity.
Why PR wires fail at AI citations
Three reasons, all structural:
- Wire content is boilerplate-heavy. LLMs have been trained on millions of press releases and have learned that the standard format (headline, boilerplate, CEO quote, partner quote, about section) carries little signal, so that copy ranks at the bottom of retrieval.
- Wire distribution is duplicative. The same release appears on 100 sites with identical copy, which is a weak corroboration signal.
- The publications LLMs actually cite are editorial outlets with original reporting, not wire services.
The 0.04 percent number is not a fluke. It is what happens when you push content through a channel LLMs have trained themselves to ignore.
The publications LLMs actually cite
OpenAI has publicly signed publisher partnerships with Associated Press, Axel Springer, Financial Times, Dotdash Meredith, Condé Nast, Hearst, Vox Media, Time, Le Monde, The Guardian, and News Corp. The News Corp deal alone is reported at more than $250 million over five years. Profound's running list has the full set. These partnerships feed content directly into OpenAI's training and retrieval pipelines. For B2B SaaS and tech, the outlets we see cited most often in LLM answers are TechCrunch, Reuters, Bloomberg, Axios, and The Information. For consumer and DTC, it skews toward The Verge, Fast Company, Forbes, and major lifestyle publications. The pattern is consistent: editorial outlets with original reporting get cited, wire services do not.
What to do instead: the news-story structure
Write the release like a news story, not a corporate announcement. That means a clear lede (what happened and why it matters), a second paragraph with quantitative context, and a body that includes at least one external data point or third-party reference. Quotes should sound human. Cut "we are thrilled to announce" entirely. A journalist scanning the release should be able to extract three facts in 30 seconds. The test we run on every client release: strip the boilerplate and company name. Does what remains look like a news story, or like nothing? If it looks like nothing, the release is not ready to leave the building.
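The boilerplate test above can be roughed out mechanically. A toy sketch in Python; the phrase list is illustrative, not an actual editorial checklist:

```python
# Stock PR phrases that signal boilerplate rather than news.
# Illustrative list only; a real pass would be far longer.
BOILERPLATE = [
    "thrilled to announce",
    "pleased to announce",
    "industry-leading",
    "best-in-class",
    "world-class",
]

def flag_boilerplate(text: str) -> list[str]:
    """Return the stock phrases found in a draft release."""
    lowered = text.lower()
    return [phrase for phrase in BOILERPLATE if phrase in lowered]

draft = "We are thrilled to announce our industry-leading platform."
print(flag_boilerplate(draft))  # ['thrilled to announce', 'industry-leading']
```

A script like this catches the obvious offenders; the harder question (does what remains read like a news story?) still needs a human editor.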
Publish on your own site first
Before syndicating anywhere, publish the full release on your own site with Article and Organization schema and a clear publication date. Set the canonical URL. Make sure it is crawlable (see: how to audit whether your site is crawlable by AI bots). This becomes the source of record every subsequent placement points back to. Without a source of record, the citation surface fragments across dozens of low-authority wire mirrors. With one, the LLM retrieval layer has a canonical URL to attach to the claim.
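A minimal sketch of the structured data this step calls for, built here with Python's json module. The headline, dates, and URLs are hypothetical placeholders; the output is what you would embed in a `<script type="application/ld+json">` tag on the release page:

```python
import json

# Hypothetical release details; replace with your own.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Acme Raises $20M Series B to Expand AI Tooling",
    "datePublished": "2025-06-12",
    # The canonical URL every syndicated copy should point back to.
    "mainEntityOfPage": "https://www.example.com/news/series-b",
    "publisher": {
        "@type": "Organization",
        "name": "Acme, Inc.",
        "url": "https://www.example.com",
    },
}

json_ld = json.dumps(article, indent=2)
print(json_ld)
```

The point of the markup is not decoration: it ties the claim, the date, and the organization to one canonical URL, which is exactly the attachment point the retrieval layer needs.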
Land coverage in AI-cited publications
This is the step that actually moves the metric. Instead of paying PR Newswire to distribute to hundreds of outlets LLMs ignore, pitch two or three outlets LLMs cite. One TechCrunch or Reuters piece produces more AI visibility than 50 wire drops. Editors at TechCrunch, Reuters, Bloomberg, and Axios do not want press releases. They want a specific angle, a specific data point, and an executive who will get on a call within 24 hours.
Seed the corroboration layer
Semrush's analysis of 150,000 LLM citations found Reddit at 40.1 percent, Wikipedia at 26.3 percent, YouTube at 23.5 percent. After the editorial placement goes live, seed the corroboration layer: a Reddit thread that references the news story with a different angle, a LinkedIn long-form post from a named executive, and, if the announcement is notable enough, a Wikipedia edit that cites the news story. This is not about faking coverage. One TechCrunch piece alone is a signal. TechCrunch plus Reddit plus LinkedIn plus Wikipedia is the signal LLMs actually cite. The full sequence is in how to turn one blog post into citations across ChatGPT, Claude, and Perplexity.
When a traditional wire still makes sense
Two cases where wire distribution is still worth it:
- Regulatory and financial disclosures. If the SEC, FTC, or equivalent regulator requires a wire drop, you run it. AI visibility is not the goal; compliance is.
- Local and trade press, where editorial pitching has a poor hit rate and a wire drop is often the only practical route to coverage.
For everyone else, the 0.04 percent number says the wire is a waste of budget. The honest framing: wires are for compliance, not visibility.
Conclusion
The traditional press release model is optimized for a distribution channel LLMs have been trained to ignore. The 0.04 percent citation rate is a structural outcome of how the wires operate. Brands that want AI visibility from their PR budget need to stop measuring success by wire pickups and start measuring it by editorial placements in AI-cited publications, backed by Reddit, LinkedIn, and Wikipedia corroboration. The budget stays the same. The allocation shifts.
How Soar saves you time and money
Most of our clients arrive with a PR budget running on autopilot. They pay a retainer to a PR firm, the firm drops releases on the wires, and the quarterly report shows "pickups" that translate into zero AI citations. Wasted spend is often $30,000 to $80,000 per year for mid-market brands. Our PR advisory strips out the wire spend, redirects budget toward editorial pitching in the publications LLMs actually cite, and adds the Reddit, LinkedIn, and Wikipedia seeding that creates corroboration.
One strategic placement in Reuters, TechCrunch, or Bloomberg does more for AI visibility than 50 wire drops. We have the editor relationships, we know the pitch patterns, and we run the content-seeding workflow on the same retainer. Request a proposal and we will audit your existing PR spend, show you what percent of your current coverage is showing up in AI answers, and build the reallocation plan for the next 90 days.
Related reading
- The 2026 guide to Generative Engine Optimization
- How Reddit became the biggest single source of LLM citations
- How to turn one blog post into citations across ChatGPT, Claude, and Perplexity
- How LLMs decide what to cite: training data, retrieval, and real-time search
- How to get your brand cited by AI search engines
- The 90-day GEO program: from audit to first citations