Robots.txt documentation expands, deep links get rules, and the EU steps in

Welcome to Pulse of the Week: This week’s updates affect how deep links appear in your snippets, how your robots.txt is parsed, how agent features work in Search, and how EU data-sharing rules apply to AI chatbots.

Here’s what matters for you and your work.

Google lists best practices for “Read more” deep links

Google has updated its snippet documentation with a new section on “Read more” deep links in search results. The documentation lists three best practices that can increase the likelihood of these links appearing.

Important facts: Content must be immediately visible to a human when the page loads; content hidden behind expandable sections or tabbed interfaces can reduce the likelihood of these links appearing. Sections should use H2 or H3 headings. The snippet text must match the content displayed on the page, and content that loads only after scrolling or user interaction can likewise reduce the likelihood.

Why this is important

The three practices are the first specific guidance Google has published for this feature. Sites that put core information in expandable FAQ sections, tabbed product detail sections, or scroll-triggered content may see fewer deep links in their snippets than sites that render the same content on page load.

The guidance follows a pattern Google has applied to other search features: content that renders without user interaction is more likely to appear in expanded displays.

Slobodan Manić, founder of No Hacks, made a similar observation on LinkedIn:

“The documentation is framed around one snippet behavior (Read more deep links in search results), but the language Google chose reads as a general preference. ‘Content that is immediately visible to a human’ is a structural instruction, not a Read more-specific tip.”

Manić’s point builds on his April 16 IMHO interview with editor-in-chief Shelley Walsh, in which he argued that most websites are structurally broken for AI agents. Search crawlers and AI agents, he argues, now face the same structural problem, and the test is the same for both.

For existing pages, the review question is whether important information sits inside a click-to-expand element. If a page already earns a “Read more” deep link for one section, that section’s structure is a guide to what works; replicating it in other sections on the same page can improve their chances too.
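If you want to run that review across many pages, the sketch below is one rough way to do it: it fetches a URL and flags H2/H3 sections that appear to sit inside collapsed containers in the server-rendered HTML. It assumes the requests and BeautifulSoup libraries; the class-name heuristics are illustrative guesses, not Google’s detection logic, and the check cannot see content that JavaScript shows or hides after load.

```python
# A rough audit, not Google's logic: fetch a page and flag H2/H3
# sections that appear to sit inside collapsed containers in the
# server-rendered HTML. JS-revealed content is invisible to this check.
import requests
from bs4 import BeautifulSoup

# Class-name fragments that often mark click-to-expand UI (assumption).
COLLAPSED_HINTS = ("accordion", "collapse", "tab-pane", "expandable")

def audit_headings(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for heading in soup.find_all(["h2", "h3"]):
        reasons = []
        for parent in heading.parents:
            if parent.name == "details" and not parent.has_attr("open"):
                reasons.append("closed <details>")
            if parent.has_attr("hidden"):
                reasons.append("hidden attribute")
            classes = " ".join(parent.get("class") or [])
            reasons.extend(h for h in COLLAPSED_HINTS if h in classes)
        status = "review: " + ", ".join(reasons) if reasons else "visible on load"
        print(f"[{status}] {heading.get_text(strip=True)}")

# Hypothetical URL for illustration.
audit_headings("https://example.com/guide")
```

Anything the script marks for review is worth checking manually; a matching class name is a hint, not proof that the content is hidden on load.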

Google frames the guidance as best practices that can “increase the likelihood” that deep links will appear. That qualifier matters: this is not a list of requirements, and following all three is no guarantee the links will appear.

Read our full coverage: Google lists best practices for “Read more” deep links

Google may expand its list of unsupported robots.txt rules

Google may add rules to its robots.txt documentation based on analysis of real-world data collected via the HTTP Archive. Gary Illyes and Martin Splitt described the project on the latest episode of the Search Off the Record podcast.

Important facts: The Google team analyzed the most common unsupported rules in robots.txt files across millions of URLs indexed by the HTTP Archive. Illyes said the team plans to document the 10 to 15 most commonly used unsupported rules, going beyond User-Agent, Allow, Disallow, and Sitemap. He also said the parser could expand the number of misspellings of “disallow” it accepts, although he didn’t commit to a timeline or name specific typos.

Why this is important

As Google documents more unsupported instructions, sites that use custom or third-party rules will have clearer guidance about what Google is ignoring.

Anyone managing a robots.txt file with rules beyond User-Agent, Allow, Disallow, and Sitemap should check for directives that never worked on Google. The HTTP Archive data is publicly queryable in BigQuery, so the same distribution Google analyzed is available to anyone who wants to examine it.
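For anyone who wants to reproduce that distribution, a minimal sketch using the google-cloud-bigquery client might look like the following. The table and column names (httparchive.all.requests, response_body) and the crawl date are assumptions about HTTP Archive’s public dataset, so verify them against the current schema first; scanning response bodies at this scale can also run up significant BigQuery costs.

```python
# Sketch only: approximate the directive-frequency distribution from
# HTTP Archive's public BigQuery dataset. Table/column names and the
# crawl date below are assumptions -- verify against the current
# HTTP Archive schema, and mind the scan cost before running.
from google.cloud import bigquery

SUPPORTED = {"user-agent", "allow", "disallow", "sitemap"}

QUERY = r"""
SELECT
  LOWER(REGEXP_EXTRACT(line, r'^\s*([A-Za-z-]+)\s*:')) AS directive,
  COUNT(*) AS occurrences
FROM `httparchive.all.requests`,
  UNNEST(SPLIT(response_body, '\n')) AS line
WHERE date = '2025-01-01'            -- one crawl date keeps costs bounded
  AND client = 'desktop'
  AND ENDS_WITH(url, '/robots.txt')
GROUP BY directive
HAVING directive IS NOT NULL
ORDER BY occurrences DESC
LIMIT 50
"""

client = bigquery.Client()  # requires Google Cloud credentials
for row in client.query(QUERY).result():
    note = "" if row.directive in SUPPORTED else "  <- not supported by Google"
    print(f"{row.directive}: {row.occurrences}{note}")
```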

Typo tolerance is the more speculative part. Illyes’ wording implies the parser already accepts some misspellings of “disallow” and may accommodate more over time. Check for and correct spelling variations now rather than assuming they will be tolerated.
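A quick local check, sketched below under the assumption that your robots.txt is reachable over HTTP, flags directives Google doesn’t support and highlights ones that look like misspellings of supported rules. It uses only the Python standard library, and the 0.8 similarity cutoff is an arbitrary choice.

```python
# Local sanity check: flag robots.txt directives Google doesn't support
# and highlight likely misspellings of supported ones. Standard library
# only; the 0.8 similarity cutoff is an arbitrary assumption.
import difflib
import urllib.request

SUPPORTED = ["user-agent", "allow", "disallow", "sitemap"]

def check_robots(url: str) -> None:
    raw = urllib.request.urlopen(url, timeout=10).read()
    for lineno, line in enumerate(raw.decode("utf-8", "replace").splitlines(), 1):
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line or ":" not in line:
            continue
        directive = line.split(":", 1)[0].strip().lower()
        if directive in SUPPORTED:
            continue
        guess = difflib.get_close_matches(directive, SUPPORTED, n=1, cutoff=0.8)
        if guess:
            print(f"line {lineno}: '{directive}' looks like a typo of '{guess[0]}'")
        else:
            print(f"line {lineno}: '{directive}' isn't a rule Google supports")

# Hypothetical URL for illustration.
check_robots("https://example.com/robots.txt")
```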

Read our full coverage: Google may expand list of unsupported robots.txt rules

EU proposes Google share search data with competitors and AI chatbots

The European Commission issued preliminary findings proposing that Google share search data with competing search engines in the EU and EEA, including AI chatbots that qualify as online search engines under the DMA. The measures are not yet binding: a public consultation runs until May 1, and a final decision is expected by July 27.

Important facts: The proposal covers four categories of data that would be shared on fair, reasonable, and non-discriminatory terms: ranking, query, click, and viewing data. Eligibility extends to AI chatbot providers that meet the DMA’s definition of an online search engine. If the Commission upholds eligibility through the final decision, qualified providers could gain access to anonymized Google search data under the Commission’s proposed terms.

Why this is important

The proposal explicitly extends DMA search data sharing eligibility to AI chatbots. If that eligibility survives consultation, the regulatory category of “search engine” will include products that most search marketing work has treated as a separate category.

The consequences vary depending on where you operate. For websites optimized for EU/EEA visibility, the change could expand the scope of anonymized search signals. AI products that compete with Google in this market could use the data to improve their retrieval and ranking systems, which in turn could affect what content they cite.

Outside the EU, the direct regulatory effect is zero. The category definition is another matter: how the Commission draws the line between “AI chatbot” and “AI chatbot that qualifies as a search engine” is likely to be cited in future proceedings.

The eligibility question is the story to watch between now and May 1. If the Commission narrows the AI chatbot criteria in response to consultation feedback, the impact stays contained. If it holds the line, it sets a significant precedent for how AI search is classified.

Read our full coverage: Google may have to share search data with competitors

Google is adding new task-based search features

Google has introduced new search features that continue its shift toward task completion. Users can now track price drops for individual hotels via a new toggle in Search, and Google is adding the ability to launch AI agents directly from AI Mode.

Important facts: Hotel price tracking is available worldwide via a toggle in the search bar; when the price drops for a tracked hotel, Google sends an email alert. The agent launch in AI Mode lets users kick off AI-completed tasks from within the search interface. Rose Yao, a Google Search product lead, posted about the features on X.

Why this is important

Each task-based feature moves a process that previously started on another website into Google’s own interface. Hotel price tracking has existed at the city level for months; the expansion to individual hotels adds a signal users can set inside Google rather than on hotel or aggregator websites.

Visibility for direct bookings now depends on your presence in Google’s ecosystem. Sites that use price drop alerts to bring users back may see some of that engagement shift to Google’s tracking UI. For hotel brands, that raises the stakes for keeping individual hotel pages fully populated in Google Business Profile and hotel feeds.

On LinkedIn, Daniel Foley Carter connected the feature to a broader pattern:

“Google’s AI Overviews, AI Mode, and now in-frame functionality for SERP + SITE are just more ways Google is taking more and more of the traffic opportunity. Everything Google told us not to do, it now does itself. SPAM / LOW VALUE CONTENT – don’t summarize other people’s content – Google does it.”

The AI agent launch is the more speculative of the two features. Google hasn’t released detailed documentation explaining what types of tasks users can delegate or how sources will be cited. The feature does confirm that agentic search, which Sundar Pichai describes as “search as an agent manager,” will arrive in Search gradually rather than as a single launch.

Read Roger Montti’s full coverage: Google adds new task-based search features

Topic of the week: The rules get written down

Each story this week makes explicit something that was previously implicit or in flux.

Google said it may expand the scope of its robots.txt documentation. The company listed specific practices that can increase the likelihood of “Read more” deep links appearing. The European Commission proposed measures that extend DMA search data sharing eligibility to AI chatbots. And the task-based features Sundar Pichai has described in interviews are arriving as toggles in the search bar.

For day-to-day work, the ground is getting firmer. Fewer questions are judgment calls: what qualifies and what doesn’t, what Google supports, and what a regulator considers a search engine are all being written down. That works in your favor when it means clearer testing criteria, and against you when “we weren’t sure” is no longer a reasonable answer.


