When Google filed its lawsuit against SerpApi last week, accusing the company of bypassing security controls to scrape and resell search results at scale, it seemed like a straightforward intellectual property dispute. But buried in the details of this case is something far more revealing: a clear confirmation that large language models like ChatGPT and Claude still depend on traditional search infrastructure to stay current.
This matters because it cuts through a lot of the hype around “optimizing for AI” that has flooded the digital marketing space over the past year. The truth is simpler and more grounded than most of the advice you’ll read online.
Large Language Models Haven’t Replaced Search—They’ve Built On Top of It
ChatGPT, Claude, and other LLMs have knowledge cutoff dates. They cannot know about events, prices, or information that emerged after their training data was frozen.
When you ask an AI system for current information, it performs a search query (often through intermediaries like SerpApi) to access real-time data from Google’s index.
Google’s recent lawsuit against SerpApi confirms this dependency: AI companies still rely on search infrastructure to stay accurate and current.
The implication is clear: **AI search is a complementary layer on top of traditional search, not a replacement for it.**
AI Visibility Isn’t About Rankings—It’s About Being Trustworthy and Extractable
Traditional SEO: Optimize to rank higher than competitors in search results (position 3 vs. position 8).
AI Visibility: Optimize to be recognized as a trustworthy, authoritative source that AI systems can safely cite and extract information from.
A website can rank highly in Google but be invisible to ChatGPT if its content isn’t trustworthy or extractable enough.
Conversely, content from a trusted domain with clear structure may be cited by AI systems even if it doesn’t rank in the top three for a keyword.
The real advantage: Brands that are both highly ranked AND trustworthy/extractable will dominate both traditional and AI search.
Good SEO Isn’t Dead—It’s the Foundation for Visibility Across All Search Channels
First-party authoritative content: Your official channels matter more than ever. AI systems trust official websites, published research, and documented expertise.
Clean entity definitions: Define your brand, products, and expertise consistently across the web. Use structured data (schema markup) to make your identity unambiguous to AI systems.
Domain authority and trust signals: Backlinks from trusted sources tell both Google and AI systems that you’re a reliable source worth citing.
Structured information: Clear headings, direct answers, lists, tables, and question-and-answer sections make content easier for both humans and LLMs to understand and use.
Topical depth: Building comprehensive coverage of a topic signals authority. AI systems synthesize answers by pulling from multiple sources—being a known authority on your topic increases your chances of being cited.
The Real Dependency Chain
Let’s start with what we know: OpenAI has been credibly reported to use SerpApi pipelines to access Google Search results for real-time answers inside ChatGPT. This isn’t a rumor or speculation—it’s a documented practice that appears in multiple industry reports and has been discussed openly by researchers.
Why would OpenAI need this? Because LLMs have a fundamental limitation: they have a knowledge cutoff. ChatGPT’s training data has a cutoff date. Claude’s training data has a cutoff date. No matter how sophisticated the model, it cannot know about events, prices, or information that emerged after its training concluded.
This is where search comes in. When you ask ChatGPT “What are the latest developments in AI regulation?” or “What’s the current price of Bitcoin?”, the model doesn’t magically know the answer. Instead, it performs a search query (or receives search results from an intermediary like SerpApi), processes those results, and synthesizes an answer based on what it finds.
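As a rough illustration of this retrieve-then-synthesize flow (this is a simplified sketch, not OpenAI's actual pipeline; the `search` stub, function names, and example.com URLs below are all hypothetical stand-ins):

```python
from datetime import date

# Hypothetical stand-in for a search intermediary such as SerpApi.
# A real integration would call an HTTP API and receive ranked results.
def search(query: str) -> list[dict]:
    return [
        {"title": "EU AI Act enters into force", "url": "https://example.com/ai-act"},
        {"title": "US agencies issue AI guidance", "url": "https://example.com/us-guidance"},
    ]

def build_prompt(question: str, results: list[dict]) -> str:
    """Pack retrieved results into a prompt so the model answers
    from current sources instead of its frozen training data."""
    sources = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']})" for i, r in enumerate(results)
    )
    return (
        f"Today is {date.today().isoformat()}.\n"
        f"Answer using ONLY the sources below and cite them by number.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What are the latest developments in AI regulation?",
    search("latest developments in AI regulation"),
)
print(prompt)
```

The key point the sketch makes: the model's answer is only as current and as trustworthy as the search results fed into it, which is exactly why control over that result stream is worth litigating.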
Google’s lawsuit against SerpApi is essentially about controlling who gets to be that intermediary. It’s about maintaining Google’s position as the authoritative source of ranked, validated information on the internet.
Why This Matters More Than You Think
The implications of this lawsuit extend far beyond the courtroom. If Google successfully restricts access to its search results through intermediaries like SerpApi, it forces AI companies into one of two positions:
First, they negotiate directly with Google. This is already happening to some extent, with Google providing its own AI search features and presumably having agreements in place with other AI companies. But direct negotiation means less flexibility, more control by Google, and potentially higher costs for AI companies.
Second, they rely more heavily on first-party, authoritative content. If scraped intermediaries become legally risky or unreliable, AI systems will increasingly default to sources they can trust implicitly: official company websites, established publications, academic institutions, and other known entities.
This second point is crucial for anyone thinking about visibility in AI search systems. It means the old playbook of “optimize for Google’s algorithm” isn’t becoming obsolete—it’s becoming more important, not less. But with a twist.

The Shift From Ranking to Trustworthiness
For years, SEO has been about ranking. You want to be on page one. You want to be in position three instead of position eight. You want to beat your competitors in the search results.
AI search introduces a different dynamic. When ChatGPT or Claude needs information, it doesn’t think in terms of rankings. It thinks in terms of trustworthiness and extractability.
Trustworthiness means the source is known, established, and has a track record. A company’s official website is more trustworthy than a random blog post. A peer-reviewed academic paper is more trustworthy than an unverified claim. A major publication is more trustworthy than an unknown source.
Extractability means the information is structured in a way that an LLM can easily parse and use. A well-organized page with clear headings, direct answers to common questions, and structured data is more extractable than a rambling article with unclear conclusions.
Here’s what this means in practice: You can rank highly in Google for a keyword and still be invisible to ChatGPT if your content isn’t trustworthy or extractable enough.
Conversely, you might not rank in the top three for a keyword in Google, but if your content is authoritative, well-structured, and comes from a trusted domain, you’re more likely to be cited or referenced when an AI system synthesizes an answer.
What Actually Matters for AI Visibility
Rather than chasing “AI optimization” as a separate discipline, think about what actually drives visibility in AI systems:
First-party authoritative content. If you’re a brand or business, your official channels matter more than ever. Your website, your documentation, your published research—these are the sources AI systems will trust implicitly.
Clean entity definitions. AI systems work with entities: companies, products, people, concepts. The clearer and more consistently you define your entity across the web, the more likely AI systems are to recognize and cite you. This means consistent naming, clear descriptions, and structured data (schema markup) that makes your identity unambiguous.
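To make the entity-definition point concrete, here is a minimal sketch of schema.org Organization markup, generated with Python's standard library (the company name, description, and all URLs are placeholders, not a real brand):

```python
import json

# Minimal schema.org Organization markup (all values are placeholders).
# Embedded on a homepage as a <script type="application/ld+json"> tag,
# it gives crawlers and AI systems one unambiguous definition of who you are.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "description": "Example Co builds widgets for the example industry.",
    "sameAs": [  # other profiles that corroborate the same identity
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

markup = json.dumps(entity, indent=2)
print(f'<script type="application/ld+json">\n{markup}\n</script>')
```

The `sameAs` links are the part people most often skip: they tie your site, your social profiles, and any reference pages together into a single entity rather than several ambiguous ones.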
Trusted domains. Domain authority still matters, but for a different reason. Google’s index is still the strongest dataset on the internet, and AI systems still rely on it. A link from a trusted, high-authority domain tells both Google and AI systems that your content is worth paying attention to.
Structured information. AI systems learn from patterns. Information presented in clear, structured formats—lists, tables, comparison matrices, question-and-answer sections—is easier for LLMs to parse and incorporate into their responses.
Topical depth. Rather than optimizing for a single keyword, building comprehensive coverage of a topic signals authority. This is true for Google, but it’s even more important for AI systems. When Claude is synthesizing an answer about your industry, it’s looking for sources that demonstrate deep knowledge, not just surface-level mentions.
The Hype vs. The Reality
You’ve probably seen marketing claims about “optimizing for ChatGPT” or “getting traffic from AI search.” Some of these claims are overblown. The reality is more nuanced:
- AI search is not replacing Google search. ChatGPT, Claude, and other LLMs are complementary tools that people use alongside traditional search. They serve different needs. You’re not choosing between optimizing for Google or optimizing for AI—you’re doing both, and the fundamentals are largely the same.
- There’s no secret formula. The “cheat codes” for getting traffic from AI search are just good SEO practices applied consistently. Clear structure, trustworthy sources, authoritative content, proper attribution. These aren’t new ideas.
- Speed and recency matter. Because AI systems need current information, being first to publish reliable information on a topic gives you an advantage. This favors news organizations, official sources, and businesses that actively publish about their industry.
- Direct traffic from AI systems is still limited. While AI systems can cite your content, they don’t typically drive the same volume of direct traffic that Google does. The value is more about visibility, authority, and being part of the conversation.
What This Means for 2026 and Beyond
As we move into 2026, the landscape will likely shift in a few key ways:
More direct negotiations between AI companies and content sources. We’ll probably see more formal agreements between AI companies and publishers, similar to the deals Google has made with news organizations. This could create new revenue opportunities for content creators, or it could create new barriers to entry.
Increased emphasis on first-party sources. As intermediaries become less reliable, AI systems will lean harder on official websites, published research, and established institutions. This favors brands and organizations that invest in quality, authoritative content.
Evolution of search itself. Google will continue to integrate AI into its search results (as it’s already doing with AI Overviews). This means traditional search and AI search will become increasingly intertwined, not separate channels.
Continued importance of SEO fundamentals. Despite all the talk about AI changing everything, the fundamentals remain: clear structure, authoritative content, trustworthy sources, and proper attribution. These principles work across both traditional and AI search systems.
The Bottom Line
Google’s lawsuit against SerpApi isn’t just about intellectual property. It’s a public confirmation of something that was already true: large language models still depend on search infrastructure to stay current and accurate. They haven’t replaced search. They’ve become another layer on top of it.
For anyone thinking about visibility in search—whether that’s Google, ChatGPT, Claude, or the next AI search system—the strategy is straightforward. Build authoritative, trustworthy content. Structure it clearly. Make it easy for both humans and machines to understand. Establish your domain as a known, reliable source in your field.
This isn’t sexy. It doesn’t involve secret algorithms or hidden optimization techniques. But it works, and it will continue to work as the search landscape evolves.
The hype around “AI optimization” will fade. The fundamentals will remain. Get in touch if you’d like to discuss your SEO strategy for 2026.