
Google FastSearch: why AI Overviews feel weird, and what it means for your SEO

  • JC Connington
  • Nov 18
  • 7 min read

Somewhere between my 6am swim sets and stalling a Cessna over the Essex coastline, I have been reading court documents. Buried in the rubble of the US antitrust case against Google is a quiet admission that explains why AI Overviews often feel like a slightly drunk version of search.


That admission has a name: FastSearch.



FastSearch is not a new product, not a mode you can toggle, not a shiny API. It's a system that sits behind the scenes, feeding Gemini just enough of the web that it can spit out an AI Overview without melting Google’s data centres. And it does not work like the search I have spent 15 years trying to optimise for! Let’s unpack what FastSearch actually is, how it works, and what this means if you care about being visible in an AI-shaped world.

What is Google FastSearch?

The court filing describes FastSearch as Google’s internal tech for grounding Gemini models and generating AI Overviews. It is based on a set of deep learning ranking signals called RankEmbed, which generate a slimmed-down list of web results that the model can use as its grounding.


Translated into normal language:

  • Traditional search: full index, lots of signals, relatively expensive.

  • FastSearch: small slice of the index, fewer signals, cheap and fast.


The filing is very blunt about the trade-off. FastSearch retrieves fewer documents, so it can respond more quickly, but the resulting quality is lower than fully ranked search results. It is considered ‘good enough for grounding’ rather than best-in-class for relevance. So, when you see an AI Overview at the top of the SERP, you are not looking at the output of the same process that produced the ten blue links underneath. You are looking at what happens when Google takes a shortcut.


How FastSearch cuts corners

FastSearch makes three big compromises so that Gemini can answer users before they blink.


1. A thinner pool of pages

FastSearch does not rummage through the whole index. It pulls from a smaller, targeted subset of pages that look semantically relevant enough to ground an answer.

That is a big saving in compute, but:

  • If your page is not in that smaller pool, it simply does not exist as far as AIOs are concerned, as the toy sketch below illustrates.

  • Having a solid traditional ranking does not guarantee that FastSearch will pick you.


Think of it like training sets in the pool. You might be the best 1500m swimmer in the building, but if the coach only looks at sprint times when picking a relay team, you are invisible.
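
None of that machinery is public, so treat the following as a toy illustration of the principle rather than anything Google has shared: a grounding step that only ever looks at a pre-selected candidate pool, so a page that ranks brilliantly in the organic results but is missing from the pool can never be cited. Every URL and score here is invented.

```python
# Toy illustration (not Google's code): grounding that only considers a
# pre-selected candidate pool. Every URL and score here is invented.

organic_top_3 = [
    "https://bigbrand.example/ultimate-guide",   # ranks #1 organically
    "https://smallblog.example/clear-answer",
    "https://forum.example/thread-123",
]

# Hypothetical slimmed-down pool a FastSearch-style system works from,
# keyed by a semantic match score.
grounding_pool = {
    "https://smallblog.example/clear-answer": 0.91,
    "https://forum.example/thread-123": 0.74,
    "https://docs.example/reference-page": 0.69,
}

def ground(pool: dict, k: int = 2) -> list:
    """Return the top-k candidates from the pool; nothing outside it exists."""
    return sorted(pool, key=pool.get, reverse=True)[:k]

print("cited:", ground(grounding_pool))

for url in organic_top_3:
    if url not in grounding_pool:
        # In this toy case that is the #1 organic result.
        print("never considered:", url)
```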


2. Fewer, more semantic signals

FastSearch leans heavily on RankEmbed signals instead of the full circus of Google’s ranking factors. These are deep learning signals that map queries and documents into the same vector space and judge how closely they line up in meaning, not how many links they have picked up over the years.


In other words, FastSearch cares less about:

  • classic link based authority

  • brand recognition

  • historic popularity

and more about:

  • semantic clarity

  • topical focus

  • how clearly a piece of content addresses an intent.

A smaller blog with a brutally clear explanation can end up feeding an AI Overview ahead of a well known brand that buries the answer under three screens of fluff and marketing.


3. A lower quality bar, on purpose

Google admits in the filing that FastSearch’s quality is lower than fully ranked search, but still acceptable for grounding AI responses.

That single sentence quietly explains years of screenshots:

  • AI Overviews citing thin, outdated or just plain weird pages.

  • Overviews contradicting what the top organic listings say.

  • Occasional hallucinations where the model overconfidently invents things.

The retrieval layer feeding the model has a lower bar. The model then adds its own creativity on top. Of course it sometimes goes sideways.


RankEmbed: the signal sitting quietly behind all this

RankEmbed shows up in the documents as one of Google’s top-level deep learning signals, designed to find patterns in huge datasets and judge how closely a document matches the intent behind a query. The important thing here is the question RankEmbed is trying to answer:

How close is this content to what the user actually meant?

Not:

  • How many referring domains point at this page?

  • How famous is this brand?

  • How many times has this exact anchor text pointed here?

That sounds abstract, but it has very practical consequences. If your content is:

  • vague

  • hedged

  • slow to get to the point

it is harder for RankEmbed to confidently stick it next to a specific query in vector space.


If your content:

  • states the problem and audience clearly

  • uses the language people actually search

  • keeps each page tightly focused on one job

then FastSearch can recognise it more easily as a good semantic fit, even if your traditional ranking signals are not world class.

On a clear day in a Cessna over Southend, you can see the shape of the coastline in one glance. RankEmbed is trying to do a similar thing at scale: look at a messy web and say, in one shot, which pages are closest to the user’s intent.
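
RankEmbed itself is locked inside Google, but you can get a feel for the general mechanic, queries and pages scored by how close they sit in a shared embedding space, using any open-source embedding model. A rough sketch, assuming the sentence-transformers library and a generic model; the query and both intros are invented:

```python
# Rough sketch of embedding-based matching, not RankEmbed itself.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic open-source model

query = "is Babbel worth it in 2025"

clear_intro = (
    "Is Babbel worth it in 2025? This review covers pricing, plans, "
    "pros and cons, and who the app actually suits."
)
vague_intro = (
    "In today's fast-moving digital landscape, language learning has never "
    "been more exciting. Our team has been exploring the space for years."
)

# Embed the query and both intros into the same vector space.
embeddings = model.encode([query, clear_intro, vague_intro])

print("clear intro:", util.cos_sim(embeddings[0], embeddings[1]).item())
print("vague intro:", util.cos_sim(embeddings[0], embeddings[2]).item())
```

The absolute numbers do not matter; the gap between the two scores is the point.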


Why AI Overviews often feel slightly unhinged

Once you understand that AI Overviews are powered by FastSearch, not by the full ranking pipeline, the weirdness starts to look inevitable.


FastSearch:

  • looks at fewer pages

  • uses a smaller set of semantic signals

  • accepts lower quality as long as it is ‘good enough’ to ground an answer

Gemini then:

  • reads that imperfect shortlist

  • synthesises a confident, polished summary

  • sometimes hallucinates or over generalises

Seen through that lens, AI Overviews are not broken. They are doing the best they can with a cheaper, thinner version of search.


From an SEO point of view, the takeaway is simple.

If you treat AI Overviews like a slightly bigger featured snippet and assume they draw from the same ranking process as your organic listings, you will misread the game entirely. FastSearch is playing by its own rules.


You cannot poke FastSearch directly

One of the more interesting lines in the filing is about Vertex AI. FastSearch is exposed inside Google Cloud so enterprise customers can ground their own models on the web, but they never see the ranked results themselves, only the information extracted from those results. Google keeps the actual ranking output to itself.


Practically, that means:

  • There is no FastSearch report in any tool.

  • There is no way to query FastSearch as a separate engine.

  • You cannot pull a clean FastSearch SERP and compare it to standard search.


The only way to study FastSearch from the outside is to watch:

  • which URLs get cited in AI Overviews

  • how those differ from the URLs winning traditional organic slots

  • what those cited pages have in common in terms of structure and semantics.

This is not a tidy dashboard problem. It is a research and testing problem.


What to actually do with this as a marketer

Knowing that FastSearch exists is interesting. Turning it into an advantage is better. Here is where I am landing with clients right now.


Lead with intent, brutally clearly

If FastSearch and RankEmbed care about semantics, you cannot afford slow intros and clever but opaque headings.


Practical moves:

  • Make the first two or three sentences spell out who the page is for and what problem it solves.

  • Use the query language in a natural way. If users search ‘is Babbel worth it in 2025’, say that in your H1 or early copy, not a vague ‘our verdict’.

  • Strip vague headings like ‘overview’ and replace them with intent-led labels such as ‘pricing and plans’ or ‘pros and cons’.

When a model skims your page, it should be able to say, in one glance: this is a detailed answer to that question.


Build clean topic clusters

FastSearch is tuned to see relationships between related pieces of content, not just isolated articles. That pulls architecture into the game.


You want:

  • Pillar pages that define the space.

  • Supporting articles that go deep on adjacent questions.

  • Internal links that mirror how a human would naturally explore the topic.

If your site structure looks like a tangled lane rope after junior swim club, do not be surprised if semantic signals get lost.
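
A quick way to sanity check that structure is to script it. A minimal sketch, assuming you have already crawled your own site into a simple page-to-links mapping; the pillar, cluster and internal_links data below are all hypothetical:

```python
# Minimal cluster-hygiene check: every supporting article should link to its
# pillar, and the pillar should link back. All URLs here are hypothetical.

pillar = "/learn-spanish/"
cluster = ["/learn-spanish/apps/", "/learn-spanish/cost/", "/learn-spanish/how-long/"]

# page -> set of internal links found on that page (e.g. from your own crawl)
internal_links = {
    "/learn-spanish/": {"/learn-spanish/apps/", "/learn-spanish/cost/"},
    "/learn-spanish/apps/": {"/learn-spanish/"},
    "/learn-spanish/cost/": {"/learn-spanish/", "/learn-spanish/apps/"},
    "/learn-spanish/how-long/": set(),  # orphaned from the cluster
}

for page in cluster:
    if pillar not in internal_links.get(page, set()):
        print(f"{page} does not link up to the pillar")
    if page not in internal_links.get(pillar, set()):
        print(f"pillar does not link down to {page}")
```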


Audit for AI readability

Most SEOs have finally accepted that people skim. Fewer have accepted that models skim too.


An AI oriented audit should include:

  • Does each page have a crisp, one line summary near the top that a model can lift as a verdict?

  • Are key facts laid out in simple sentences and tables, or buried in long anecdotes?

  • Is schema mirroring the structure of the page or bolted on in a generic way?

FastSearch is trying to feed the model compact, high signal documents. Help it.
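
On the schema point specifically, markup earns its keep when it is generated from the same questions and answers that actually appear on the page, rather than pasted in from a generic template. A small sketch of that idea as a build step; the Q&A pairs are placeholders standing in for copy lifted from the page itself:

```python
import json

# Q&A pairs taken from the page itself (placeholders here), so the markup
# mirrors what is visibly on the page rather than a generic template.
faqs = [
    ("Is Babbel worth it in 2025?",
     "One-line verdict lifted from the page's verdict section."),
    ("How much does Babbel cost?",
     "Pricing summary lifted from the page's pricing table."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Emit as a JSON-LD block for the page template.
print('<script type="application/ld+json">')
print(json.dumps(schema, indent=2))
print("</script>")
```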


Track AI visibility separately from classic SEO

You need to know:

  • Where your site appears in AI Overviews.

  • Which competitors are winning those citations.

  • How that list differs from the traditional top 10 for the same query.


Treat AI visibility as a separate metric. It is powered by a different system, so it deserves its own line on the dashboard.
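
A spreadsheet is enough to start with, but if you are already collecting ranking data, a few lines of scripting make the overlap obvious. A sketch, assuming you have gathered the AI Overview citations and the organic top ten for a query by whatever means you already use; every URL below is invented:

```python
# Compare AI Overview citations against the organic top 10 for one query.
# Assumes both lists were collected separately; all URLs are invented.

organic_top_10 = [
    "https://bigbrand.example/guide",
    "https://smallblog.example/review",
    "https://news.example/article",
    "https://forum.example/thread",
]

aio_citations = [
    "https://smallblog.example/review",
    "https://docs.example/reference",
]

overlap = set(aio_citations) & set(organic_top_10)
aio_only = set(aio_citations) - set(organic_top_10)
organic_only = set(organic_top_10) - set(aio_citations)

print("Cited and ranking:", sorted(overlap))
print("Cited but not in the top 10:", sorted(aio_only))
print("Ranking but never cited:", sorted(organic_only))
```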


Do not throw traditional SEO out of the window

For all the noise, FastSearch currently powers one specific thing: AI Overviews and some related Gemini style answers. The bulk of your traffic is still coming from the main ranking pipeline, with all its messy, boring old signals.


Links still matter there. Crawl still matters. Site hygiene still matters. The technical work is not going away just because Google quietly admitted that it takes shortcuts when feeding its AI.


Where this leaves us

The antitrust case has accidentally given us a rare peek behind the curtain. FastSearch is not the future of all search, but it is a very clear sign of what Google is willing to trade for speed and scalability in an AI first world.


From my lane in the pool and my little cockpit over Southend, the pattern looks like this:

  • Retrieval will get lighter and more semantic.

  • Models will sit in front of more and more queries.

  • The gap between what traditional rankings reward and what AI surfacing rewards will continue to widen.


Our job is not to panic. It is to:

  • Write content that is unambiguously about something.

  • Structure sites so that relationships between pages are obvious.

  • Measure AI visibility with the same discipline we bring to organic.

FastSearch may be a lighter, cheaper sibling of classic search, but it is already deciding what users see in some of the most valuable real estate on the page.

If you care about being part of that conversation, you have to write and build for the way Google’s semantic brain actually works now, not the comfortable link based world we grew up in.

Author: JC Connington 18/11/2025

 
 
 
