Google’s John Mueller offered a simple solution to a Redditor who blamed Google’s “AI” for a notice in the SERPs that the site had been unavailable since early 2026.
The Redditor didn’t describe the problem in the Reddit post itself, but simply linked to a blog post blaming Google and AI. That allowed Mueller to go straight to the website, identify the JavaScript implementation as the cause, and clarify that it wasn’t Google’s fault.
Redditor blames Google’s AI
The Redditor’s blog post blames Google and frames the issue in a salad of computer-science buzzwords that overcomplicates and (unwittingly) misrepresents the real problem.
The article title is:

“Google Might Think Your Website Is Down: How Cross-Site AI Aggregation Can Introduce New Liability Vectors.”
The part about “cross-site AI aggregation” and “liability vectors” raises eyebrows because neither is an established term in computer science.
The “cross-site” part is probably a reference to Google’s query fan-out technique, in which AI Mode converts a question into multiple search queries that are then sent to Google’s classic search.
As for “liability vectors,” vectors are a real thing that comes up in search engine optimization discussions and are part of natural language processing (NLP). But “liability vector” is not one of them.
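For contrast, here is what a vector actually is in this context, as a minimal sketch: in NLP, text is mapped to an array of numbers (an embedding), and similarity between texts is measured geometrically, often with cosine similarity. The tiny three-dimensional vectors below are invented toy values, not real embeddings.

```js
// Toy illustration of vectors in NLP: hypothetical 3-dimensional
// "embeddings" (real embeddings have hundreds of dimensions).
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

const query = [0.9, 0.1, 0.3];   // e.g. "is the site down?"
const passage = [0.8, 0.2, 0.4]; // e.g. "site unavailable since 2026"
console.log(cosineSimilarity(query, passage).toFixed(2)); // "0.98"
```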
The Redditor’s blog post admits that they don’t know whether Google can detect whether a website is down:
“I’m not aware that Google has any special ability to detect whether websites are active or inactive. And even if my internal service were to go down, Google wouldn’t be able to detect this because it’s behind a login wall.”
They also don’t seem to know how RAG (retrieval-augmented generation) or query fan-out works, or how Google’s AI systems work in general. The author treats it as a discovery that Google draws on fresh information rather than parametric knowledge (information the LLM gained through training).
They write that Google’s AI response says the website itself declared that the site has been offline since early 2026:
“…the wording says that the website declared it, not the people; although in the age of LLM uncertainty this distinction may no longer mean much.
…the time frame is clearly stated as the beginning of 2026. Since the site didn’t exist until mid-2025, this actually suggests that Google has relatively fresh information; although, again, LLMs!”
A little later in the blog post, the Redditor admits that they don’t know why Google says the site is offline.
They then describe a shot-in-the-dark fix: removing a pop-up they incorrectly assumed was causing the problem. That underlines how important it is to pin down the actual cause before making changes in the hope that something will stick.
The Redditor shared that they don’t know how Google aggregates information about a website in response to a query about it, and expressed concern that Google may pick up irrelevant information and present it in a response.
They write:
“…we don’t know how exactly Google creates the mix of pages it uses to generate LLM answers.
This is problematic because now everything on your web pages could influence unrelated responses.
…Google’s AI could grab all of that and present it as an answer.”
I can’t blame the author for not knowing how Google’s AI search works; I’m fairly sure it isn’t widely understood. It’s easy to come away with the impression that it’s simply an AI answering questions.
Fundamentally, however, AI search is built on top of classic search: the AI converts content found online into a natural-language answer. It’s like asking someone a question, having them Google it, and then having them explain the answer based on what they learned from reading the pages.
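Here is a rough sketch of that retrieve-then-summarize pattern. This is not Google’s actual pipeline; the fan-out queries, the tiny corpus, and the “summarize” step are all invented for illustration.

```js
// Stand-in for a classic search index: page text keyed by URL.
const corpus = {
  "https://example.com/": "Site unavailable since early 2026.",
  "https://example.com/about": "We build internal tools.",
};

// Step 1: "fan out" one question into several classic search queries.
function fanOut(question) {
  return [question, `${question} status`, `${question} offline`];
}

// Step 2: retrieve pages whose text shares a term with the query.
function classicSearch(query) {
  const terms = query.toLowerCase().split(/\s+/);
  return Object.entries(corpus)
    .filter(([, text]) => terms.some((t) => text.toLowerCase().includes(t)))
    .map(([url, text]) => ({ url, text }));
}

// Step 3: the "AI" step only restates what retrieval fetched.
function answer(question) {
  const hits = fanOut(question).flatMap((q) => classicSearch(q));
  const unique = [...new Map(hits.map((h) => [h.url, h])).values()];
  return unique.map((h) => h.text).join(" ");
}

console.log(answer("is example.com unavailable"));
// -> "Site unavailable since early 2026."
```

The takeaway from the sketch: the answer is grounded in whatever the retrieval step fetched, so if the served HTML says “unavailable,” that is what gets summarized, regardless of what the model “knows” from training.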
Google’s John Mueller explains what’s going on
Mueller responded neutrally and politely to the Reddit post and showed that the error lay in the Redditor’s implementation.
Mueller explained:
“Is this your website? I’d recommend not using JS to change the text on your page from “unavailable” to “available”, and instead just loading that entire portion with JS. That way, a client that doesn’t run your JS won’t get misleading information.
This is similar to how Google doesn’t recommend using JS to change a robots meta tag from “noindex” to “please consider my good work of HTML markup for inclusion” (there is no “index” robots meta tag, so feel free to get creative).”
Mueller’s response explains that the site relies on JavaScript to swap out placeholder text served in the initial HTML, which only works for visitors whose browsers actually run that script.
What happened is that Google read the placeholder text as the page’s indexed content: the originally served HTML said the site was “not available,” and Google treated that as the actual content.
Mueller explained that the safer approach is to have the correct information present in the base HTML of the page from the start, so that both users and search engines receive the same content.
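To make that concrete, here is a minimal reconstruction of the problematic pattern and two safer alternatives. The markup is invented for illustration; it is not the Redditor’s actual code.

```html
<!-- Problematic: the served HTML carries misleading placeholder text,
     and JavaScript swaps it out after the page loads. Any client that
     doesn't run the script indexes "unavailable". -->
<p id="status">Site unavailable since early 2026.</p>
<script>
  document.getElementById("status").textContent = "Site is up and running.";
</script>

<!-- Safer option A: serve the correct text in the base HTML so every
     client, with or without JavaScript, sees the same thing. -->
<p id="status-a">Site is up and running.</p>

<!-- Safer option B (Mueller's suggestion): serve an empty container and
     load the entire section with JavaScript, so there is no misleading
     placeholder to index. -->
<div id="status-b"></div>
<script>
  document.getElementById("status-b").textContent = "Site is up and running.";
</script>
```

The design point: whatever text is in the served HTML is what a client that doesn’t execute JavaScript, including some crawlers, will treat as the page’s content.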
Takeaways
There are several insights here that go beyond the technical issue underlying the Redditor’s problem. At the top of the list is how they tried to find their way to an answer.
They really didn’t know how Google’s AI search worked, which led to a series of assumptions that complicated their diagnosis. They then applied a “fix” based on a guess at the cause of the problem.
Guessing is an approach to SEO problems that Google’s opacity encourages. But sometimes the issue isn’t Google; it’s a knowledge gap on the SEO side, and a signal that more testing and diagnosis is needed.
Featured image from Shutterstock/Kues