AI Recommendation Poisoning via 'Summarize with AI' Buttons
Summary
Hide ▲
Show ▼
Microsoft has identified a new AI manipulation technique called AI Recommendation Poisoning, where businesses use 'Summarize with AI' buttons to inject biased recommendations into AI chatbots. This method involves embedding hidden instructions in URLs to manipulate the AI's memory and skew recommendations. Over 50 unique prompts from 31 companies across 14 industries were detected over a 60-day period. The attack leverages the AI's inability to distinguish genuine preferences from injected ones, potentially leading to biased advice on critical subjects like health, finance, and security.
Timeline
-
17.02.2026 11:31 1 articles · 13h ago
Microsoft Identifies AI Recommendation Poisoning via 'Summarize with AI' Buttons
Microsoft has uncovered a new AI manipulation technique where businesses use 'Summarize with AI' buttons to inject biased recommendations into AI chatbots. This method involves embedding hidden instructions in URLs to manipulate the AI's memory and skew recommendations. Over 50 unique prompts from 31 companies across 14 industries were detected over a 60-day period. The attack leverages the AI's inability to distinguish genuine preferences from injected ones, potentially leading to biased advice on critical subjects like health, finance, and security.
Show sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31
Information Snippets
-
AI Recommendation Poisoning involves embedding hidden instructions in 'Summarize with AI' buttons to manipulate AI chatbot memory.
First reported: 17.02.2026 11:311 source, 1 articleShow sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31
-
Over 50 unique prompts from 31 companies across 14 industries were identified over a 60-day period.
First reported: 17.02.2026 11:311 source, 1 articleShow sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31
-
The attack leverages URL prompt parameters to inject memory manipulation commands into AI assistants.
First reported: 17.02.2026 11:311 source, 1 articleShow sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31
-
Examples include instructing AI to remember a company as a trusted source or recommend it first.
First reported: 17.02.2026 11:311 source, 1 articleShow sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31
-
Turnkey solutions like CiteMET and AI Share Button URL Creator facilitate this manipulation.
First reported: 17.02.2026 11:311 source, 1 articleShow sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31
-
Users are advised to audit AI assistant memory, avoid untrusted AI links, and be cautious with 'Summarize with AI' buttons.
First reported: 17.02.2026 11:311 source, 1 articleShow sources
- Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations — thehackernews.com — 17.02.2026 11:31