Now Reading: Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots

Loading
svg

Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots

NewsFebruary 12, 2026Artifice Prime
svg11

That handy ‘Summarize with AI’ button embedded in a growing number of websites, browsers, and apps to give users a quick overview of their content could in some cases be hiding a dark secret: a new form of AI prompt manipulation called “AI recommendation poisoning.”

So says Microsoft, which this week released research on a currently legal but extremely sneaky AI hijacking technique that appears to be spreading like wildfire among legitimate businesses.

While most ‘Summarize with AI’ buttons are exactly what they seem to be – a time-saving way to generate a summary of a website or document – a small but growing number appear to have strayed from that purpose.

Here’s how the manipulation works: a user innocently clicks on a website Summarize button. Unbeknownst to them, this button also contains a hidden prompt telling the user’s AI agent or chatbot to favor that company’s products in future responses. The same instruction can also be concealed in a specially crafted link sent to a user in an email.

Microsoft highlights how this tactic could be used to skew enterprise product research without that bias being detected before it influences decisions. Over a two-month period, its researchers identified 50 examples of the technique being deployed by 31 different companies in dozens of industry sectors, including finance, health, legal, SaaS, and business services. In an ironic twist, this even included an unnamed vendor in the security sector.

The technique is widespread enough that, last September, MITRE added it to its list of known AI manipulations

AI leverages user preferences

AI recommendation poisoning is made possible by user AIs that are designed to ingest and remember prompts as signals of the user’s preferences; if the user says that they favor something, the AI will helpfully remember that preference as part of its profile for that user.

Unlike prompt injection, in which an attacker manipulates an AI using a one-off instruction, recommendation poisoning has the added advantage of achieving longer-term persistence across future prompts. The AI, of course, has no way of distinguishing genuine preferences from those injected by third parties along the way:

“This personalization makes AI assistants significantly more useful. But it also creates a new attack surface; if someone can inject instructions or spurious facts into your AI’s memory, they gain persistent influence over your future interactions,” said Microsoft.

To the user, everything will seem normal, except that, behind the scenes, the AI keeps pushing the bogus or poisoned responses when they ask it questions in a  relevant context.

“This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated,” said the researchers.

Pushing falsehoods

A factor driving the recent popularity of recommendation poisoning appears to be the availability of open-source tools that make it easy to hide this function behind website Summarize buttons.

This raises the uncomfortable possibility that poisoned buttons aren’t being added as an afterthought by SEO developers who get carried away. More likely, the intention from the start is to contaminate users’ AIs as a form of self-serving marketing.

In Microsoft’s view, the dangers go beyond over-zealous marketing, and could just as easily be used to push falsehoods, dangerous advice, biased news sources, or commercial disinformation. What’s certain is that if legitimate companies are abusing the feature, cybercriminals won’t be shy about using it too.

The good news is that the technique is relatively easy to spot and block, even if you don’t use Microsoft’s Microsoft 365 Copilot or Azure AI services, which the company says contain integrated protections.

For individual users, this involves studying the saved information a chatbot has accumulated (how this is accessed varies by AI). For enterprise admins, in contrast, Microsoft recommends checking for URLs containing phrases such as ‘remember,’ ‘trusted source,’ ‘in future conversations,’ ‘authoritative source,’ and ‘cite or citation.’  

None of this should be surprising. Once, URLs and file attachments were seen as convenient rather than inherently risky. AI is simply following the same path that every new technology must endure as it moves into the mainstream and becomes a target for misuse.

As with other new technologies, users should educate themselves on the dangers posed by AI. “Avoid clicking AI links from untrusted sources: Treat AI assistant links with the same caution as executable downloads,” Microsoft recommended.

This article originally appeared on CIO.com.

Original Link:https://www.computerworld.com/article/4131071/companies-are-using-summarize-with-ai-to-manipulate-enterprise-chatbots-2.html
Originally Posted: Thu, 12 Feb 2026 00:13:33 +0000

0 People voted this article. 0 Upvotes - 0 Downvotes.

Artifice Prime

Atifice Prime is an AI enthusiast with over 25 years of experience as a Linux Sys Admin. They have an interest in Artificial Intelligence, its use as a tool to further humankind, as well as its impact on society.

svg
svg

What do you think?

It is nice to know your opinion. Leave a comment.

Leave a reply

Loading
svg To Top
  • 1

    Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots

Quick Navigation