
Generative AI services like ChatGPT, Microsoft Copilot and Google Gemini are changing how we search for and access information. While traditional search engines focus more on keyword rankings, these AI systems prioritize information based on credibility, relevance and authoritativeness (you may have seen the acronym E-E-A-T: Experience, Expertise, Authoritativeness and Trustworthiness). This has led to the rise of Generative Engine Optimization (GEO), which aims to improve how brands show up in AI-driven search queries.
What does this mean for public relations? PR is quickly becoming one of the most important tools for building brand visibility, trust and influence in Large Language Models (LLMs), especially compared to traditional marketing strategies like search engine optimization. And all the data I’m seeing today points in the same direction. Let me explain.
Muck Rack’s Generative Pulse report, “What is AI Reading?” analyzed over one million links from more than 500,000 AI prompts. The study results were eye-opening (and validating): Muck Rack’s analysis found that more than 95% of links cited by AI are non-paid coverage. What’s more, journalistic sources, corporate blogs and other earned and owned content make up the vast majority of sources in gen AI searches. When a search involves recency (e.g., “latest advancements in…”), nearly half (49%) of links cited by AI are from journalism.
The Role of PR in Gen AI Search
Because gen AI prioritizes credibility, recency and authority, it aligns perfectly with PR strategies designed to deliver consistent media coverage in trusted outlets. While mainstream news outlets dominate these sources, niche, industry-specific sources like trade publications are increasingly cited as well. Not surprisingly, platforms like Substack, Medium, Reddit, Wikipedia and Quora are also key sources for the AI models.
But it’s important to consider that Muck Rack’s report found big differences in the citations between the gen AI models. While ChatGPT pulls heavily from mainstream media outlets like Axios, Reuters and the Associated Press, Claude tends to rely more on technical, academic and government sources. Gemini, on the other hand, will capture even broader sources, such as YouTube transcriptions, Wikipedia entries, and niche industry trade publications and reports.
These differences point to a need to diversify PR efforts to ensure client coverage includes a mix of respected news outlets, influential Substack newsletters and even podcasts that publish to YouTube, which can then be picked up for discussion by Reddit or cited by Wikipedia. Gen AI loves fresh, timely content, which is why it’s so important to maintain a steady stream of media coverage. Static SEO-optimized pages and older news, by contrast, are at risk of being ignored. With ChatGPT, for example, 56% of articles sourced have been published in the past 12 months.
How the Top Six AI Systems Prioritize Results
For a sample exercise, we asked six of the top gen AI platforms by market share about their methodology for prioritizing results in their responses. Note that their models are constantly changing and updating sources, and the market is very much in the learn and flex stage. Here is the sample prompt we used:
I would like insight into your methodology for evaluating and prioritizing information in response to user queries. When you’re asked to provide information on a subject (for example, “what trends in adoption of usage-based insurance have emerged in the past 12 months?”), through what variables do you verify credibility and determine what’s included in your response? Things like audience reach, domain authority, citations, etc.?
Let’s take a quick look at how each AI system said it would provide information in response to the query:
ChatGPT relies on a structured evaluation process, which favors credible sources like McKinsey, Gartner, Forrester, Harvard Business Review and trade publications (such as Insurance Journal, in the example prompt’s case). By prioritizing recent, data-driven content and balanced perspectives, ChatGPT pulls insights from industry reports, regulatory filings and other trusted outlets while also noting any gaps or inconsistencies if necessary.
Copilot emphasizes trusted, well-researched sources with established reputations and content published within the last 12 months, especially for trend-based or fast-evolving topics. It uniquely considers high web traffic as a sign of “what’s gaining traction in public discourse,” which is a notably different measurement of domain authority than other models that analyze frequent linking to or citation of a source. Using a “multilayered evaluation framework,” Copilot ensures accuracy and trustworthiness by prioritizing sources like McKinsey, Deloitte, Reuters and .gov or .edu domains. It synthesizes information from research briefs, whitepapers and press releases while balancing technical accuracy with accessibility.
Gemini applies Google’s E-E-A-T approach, favoring sources that demonstrate expertise in a given subject, are supported by recognized leaders or bodies in the field, and are verifiably accurate, honest and safe. It emphasizes recency for timely topics and cross-checks claims to ensure accuracy. While Gemini says it does not calculate domain authority directly, its description of the underlying Google Search algorithms’ link analysis suggests nearly identical reasoning in that a frequently cited and linked source is generally considered more authoritative and trustworthy.
DeepSeek uses a set of foundational principles: Source Hierarchy & Authority, Recency & Temporal Relevance, and Corroboration & Consensus. The model categorizes sources into three tiers: top-tier (e.g., academic journals, government bodies, research firms and reputable industry publications), middle-tier (specialized trade publications, white papers and press releases), and lower-tier (news aggregators, mainstream news sites and company blogs). DeepSeek prioritizes timeliness with consideration to the user’s prompt, and searches for consensus across multiple sources. If it cannot reach consensus, it presents different viewpoints or prioritizes the source with the strongest authority and most direct access to data.
Perplexity focuses on credible references, transparent citations, and the recency or relevance of updates. It evaluates trending topics based on how fast they’re evolving and emphasizes quality, comprehensive arguments, favoring well-structured and logically sound narratives that directly address a query over those that offer “fragmented or poorly supported claims.” Perplexity’s preferred sources include peer-reviewed journals, government agencies and major industry reports.
Claude prioritizes transparency, preferring data-backed sources with “concrete numbers, dates, and examples vs. vague generalizations.” It deprioritizes “SEO-optimized content with thin information, undated articles or those with misleading freshness claims, and sources with obvious commercial bias.” This could be problematic for marketing and comms teams that have optimized content for traditional search engine results. Claude recognizes its own limitations in verifying credentials as a human researcher might, and admits to relying on pattern recognition and cross-verification rather than any single authority metric.
Next Steps for B2B Companies
B2B brands already juggle SEO optimization, media placements and the need for constant web refreshes. Adapting to gen AI doesn’t have to be “one more thing”; it’s an opportunity.
Here’s how:
- Conduct a website content audit through an AI lens: Web content must evolve from being stuffed with keywords to being a trusted source of well-structured information (more on that in a bit) that is consistent with your PR messaging.
- Make your content more AI-friendly: While AI searches tend to prioritize earned media, owned content (e.g., websites, blogs, FAQs) still matters. AI prefers content that is clear, jargon-free and easy for the crawler bots that train large language models to access. This means simplifying your web content with plain language, clear headings and accessible topic organization.
- Focus on message consistency: Ensure that your spokesperson soundbites, product benefits and ROI claims are identical across media. This is important for both press placements and corporate content because gen AI looks for patterns of visibility, credibility and message reinforcement across multiple sources vs. the one-off “viral hit.”
- Keep an eye on your AI presence: If you’ve made it this far and it’s still not clear, let me put a finer point on it: Brands need to regularly track how they appear in AI-generated summaries and query responses. If you’re not tracking this yet, then you’re not able to identify and correct potential inaccuracies in results or even address gaps where competitors have the upper hand in controlling the narrative.
- Partner with a tech-forward PR agency: PR doesn’t just amplify GEO; earned media is the foundation of credibility, trust and authority. A lack of timely earned placements can cause your brand to be ignored by gen AI tools. And because they are frequently updating training data, PR needs constant auditing.
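To make the “AI-friendly content” advice above a little more concrete: a content audit can start with simple, automatable heuristics. The sketch below (Python, standard library only) checks a page for a clear heading structure and flags long, unbroken paragraphs. The specific checks and thresholds are my own illustrative assumptions, not a published GEO standard or anything from the Muck Rack report.

```python
# Illustrative content-audit heuristics for "AI-friendly" web pages.
# Assumption: clear headings and short, plain paragraphs are easier for
# crawlers (and readers) to parse. Thresholds are arbitrary examples.
from html.parser import HTMLParser


class ContentAudit(HTMLParser):
    """Collects headings and paragraph text from an HTML page."""

    def __init__(self):
        super().__init__()
        self.headings = []    # (level, text) pairs, e.g. (2, "2025 Trends")
        self.paragraphs = []  # plain paragraph text
        self._current = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "p"):
            self._current = tag
            self._buffer = []

    def handle_data(self, data):
        if self._current:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == self._current:
            text = "".join(self._buffer).strip()
            if tag == "p":
                self.paragraphs.append(text)
            else:
                self.headings.append((int(tag[1]), text))
            self._current = None


def audit(html: str) -> dict:
    """Return a few heuristic 'AI-friendliness' signals for a page."""
    parser = ContentAudit()
    parser.feed(html)
    words_per_para = [len(p.split()) for p in parser.paragraphs]
    return {
        "has_h1": any(level == 1 for level, _ in parser.headings),
        "heading_count": len(parser.headings),
        # Walls of text (>120 words here, an arbitrary cutoff) get flagged.
        "long_paragraphs": sum(1 for w in words_per_para if w > 120),
    }


page = "<h1>Usage-Based Insurance</h1><h2>2025 Trends</h2><p>Short, plain summary.</p>"
print(audit(page))  # → {'has_h1': True, 'heading_count': 2, 'long_paragraphs': 0}
```

A real audit would add more signals (visible publication dates, jargon density, schema markup), but even this level of scripting makes an audit repeatable across a large site.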
An agency can work with you to run common user queries. It can also assess where your brand is under-credited or find outdated information in AI responses, and then fine-tune media placements, messaging and corporate content accordingly. An agency partner can also help you track key industry coverage trends, competitor citations, and changes in AI responses over time in order to refine these strategies.
If it isn’t already, GEO will be a top priority for marketing and comms teams as we roll into 2026. Understanding how your brand is performing today and putting a plan in place to address gaps and build momentum are key. The landscape will continue to shift and evolve, requiring constant care and feeding to meet your gen AI visibility goals.
Steve Smith is Associate Partner at Voxus PR.