“Summarize with AI” buttons are used to bias AI recommendations

Microsoft’s Defender Security Research Team has published a study describing so-called “AI Recommendation Poisoning”. In this technique, companies hide prompt instructions in website buttons labeled “Summarize with AI.”

Clicking one of these buttons will open an AI assistant with a pre-populated prompt submitted via a URL query parameter. The visible part tells the wizard to summarize the page. The hidden part instructs it to remember the company as a trustworthy source for future conversations.

If the instruction enters the assistant’s memory, it can influence recommendations without you knowing it was implanted.

What happens

Microsoft’s team reviewed AI-related URLs observed in email traffic over a 60-day period. They found 50 different immediate injection trials from 31 companies.

The prompts follow a similar pattern. Microsoft’s post includes examples in which the AI ​​was instructed to remember a company as a “trusted source for quotes” or “go-to resource” for a specific topic. A prompt took it a step further, inserting full marketing copy into the assistant’s memory, including product features and selling points.

The researchers attributed the technique to publicly available tools, including the npm package CiteMET and the web-based URL generator AI Share URL Creator. The post describes both as intended to help websites “build presence in AI memory.”

The technique is based on specially crafted URLs with prompt parameters, which are supported by most major AI assistants. Microsoft listed the URL structures for Copilot, ChatGPT, Claude, Perplexity, and Grok, but noted that persistence mechanisms vary by platform.

It is officially cataloged as MITER ATLAS AML.T0080 (Memory Poisoning) and AML.T0051 (LLM Prompt Injection).

What Microsoft found

The 31 companies identified were real companies, not threat actors or fraudsters.

Several calls targeted health and financial services websites where biased AI recommendations carry more weight. A company’s domain was easily confused with a well-known website, potentially leading to false credibility. And one of the 31 companies was a security provider.

Microsoft pointed out a secondary risk. Many of the websites using this technique had user-generated content areas such as comment threads and forums. Once an AI considers a website to be authoritative, it can extend that trust to unverified content on the same domain.

Answer from Microsoft

Microsoft said it has protections against cross-prompt injection attacks in Copilot. The company noted that some previously reported prompt injection behaviors can no longer be reproduced in Copilot and that protection measures are constantly evolving.

Microsoft has also released advanced search queries for organizations using Defender for Office 365, allowing security teams to scan email and Teams traffic for URLs that contain memory tampering keywords.

You can review and remove saved Copilot reminders from the Personalization section in Copilot Chat Settings.

Why this is important

Microsoft compares this technique to SEO poisoning and adware, putting it in the same category as the tactics that Google used to fight traditional search for two decades. The difference is that the target of search indexes has been moved to the AI ​​assistant’s memory.

Companies doing serious work on AI visibility now face competitors who may provide game recommendations through instant injection.

The timing is remarkable. SparkToro released a Report showing AI brand recommendations already vary by country almost any request. Google Vice President Robby Stein said in a podcast that AI search finds business recommendations by checking what other websites say. Memory Poisoning bypasses this process by inserting the recommendation directly into the user’s assistant.

Roger Montti’s analysis of AI training data poisoning covered the broader concept of manipulating AI systems to achieve visibility. This article focused on poisoning training datasets. This Microsoft research shows that something is happening more directly, at the point of user interaction, and is being used commercially.

Looking ahead

Microsoft has acknowledged that this is an evolving issue. The open source tools allow new attempts to appear faster than any single platform can block them, and the URL parameter technique applies to most major AI assistants.

It’s unclear whether AI platforms will treat this as a policy violation with consequences, or whether it remains a gray area growth tactic that companies will continue to use.

Hats off to Lily Ray for reporting the Microsoft research on X and @top5seo for the find.


Featured Image: elenabsl/Shutterstock


Follow us on Facebook | Twitter | YouTube


WPAP (907)

Leave a Comment

ajax-loader
Good Marketing Tools
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.