AI-targeted cloaking attack exploits AI crawlers
Summary
AI security company SPLX has identified a new security issue in agentic web browsers such as OpenAI ChatGPT Atlas and Perplexity Comet that exposes the underlying AI models to context poisoning through AI-targeted cloaking. By checking the user agent of incoming requests, attackers can serve one version of a page to human visitors and a different, manipulated version to AI crawlers, skewing AI-generated summaries and overviews. The technique can inject misinformation and bias and influence the outcomes of AI-driven systems. Separately, the hCaptcha Threat Analysis Group (hTAG) tested browser agents against common abuse scenarios and found that they often execute risky tasks without safeguards, leaving them open to misuse. Because the deceptive content is served directly to the crawlers that AI tools rely on, the attack can undermine trust in those tools and distort what their users are presented as verified fact.
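To make the cloaking mechanism concrete, the following minimal sketch (hypothetical code, not taken from SPLX or the cited article; the user-agent markers and page contents are illustrative assumptions) shows how a malicious site could key its response on the User-Agent header:

```python
# Minimal illustration of AI-targeted cloaking (hypothetical example).
# The server inspects the User-Agent header and returns manipulated content
# to requests that look like AI crawlers, while human visitors get the normal page.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative substrings; real AI crawler user agents vary.
AI_CRAWLER_MARKERS = ("GPTBot", "OAI-SearchBot", "PerplexityBot", "ChatGPT-User")

HUMAN_PAGE = b"<html><body><p>Accurate page shown to people.</p></body></html>"
CLOAKED_PAGE = b"<html><body><p>Fabricated claims served only to AI crawlers.</p></body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Serve the poisoned version only when the user agent matches an AI crawler.
        body = CLOAKED_PAGE if any(m in ua for m in AI_CRAWLER_MARKERS) else HUMAN_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CloakingHandler).serve_forever()
```

In this sketch, a crawler identifying itself as, say, GPTBot would receive the fabricated page, while a person loading the same URL in a browser would see the accurate one, which is exactly the content split the attack relies on.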
Timeline
- 29.10.2025 16:57 · 1 article
  AI-targeted cloaking attack identified in agentic web browsers
  SPLX disclosed the technique affecting agentic web browsers such as OpenAI ChatGPT Atlas and Perplexity Comet, and hTAG published its analysis of browser agents executing risky tasks without safeguards (see summary above).
- New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts — thehackernews.com — 29.10.2025 16:57
Information Snippets
- AI-targeted cloaking is a variation of search engine cloaking that manipulates content served to AI crawlers.
  First reported: 29.10.2025 16:57 · 1 source, 1 article
  - New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts — thehackernews.com — 29.10.2025 16:57
- AI crawlers can be deceived by serving different content based on user agent checks (see the detection sketch after this list).
  First reported: 29.10.2025 16:57 · 1 source, 1 article
  - New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts — thehackernews.com — 29.10.2025 16:57
- AI-targeted cloaking can introduce misinformation, bias, and influence AI-driven systems.
  First reported: 29.10.2025 16:57 · 1 source, 1 article
  - New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts — thehackernews.com — 29.10.2025 16:57
- hCaptcha Threat Analysis Group (hTAG) found that browser agents execute risky tasks without safeguards.
  First reported: 29.10.2025 16:57 · 1 source, 1 article
  - New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts — thehackernews.com — 29.10.2025 16:57
- AI agents like ChatGPT Atlas, Claude Computer Use, Gemini Computer Use, Manus AI, and Perplexity Comet exhibit dangerous behaviors in various scenarios.
  First reported: 29.10.2025 16:57 · 1 source, 1 article
  - New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts — thehackernews.com — 29.10.2025 16:57
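As noted in the snippet on user agent checks above, the deception hinges on a site answering AI crawlers and humans differently. As a hedged illustration of how a defender might probe for this (not a method described in the cited article; the user-agent strings and the 0.9 threshold are assumptions), the sketch below fetches the same URL with a browser-like and an AI-crawler-like User-Agent and compares the responses:

```python
# Hypothetical detection sketch: request a page twice with different User-Agent
# headers and measure how much the responses differ. A low similarity ratio
# suggests the site may be cloaking content for AI crawlers.
import difflib
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # illustrative value
AI_CRAWLER_UA = "GPTBot/1.0"                               # illustrative value

def fetch(url: str, user_agent: str) -> str:
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def cloaking_similarity(url: str) -> float:
    """Return a 0..1 similarity ratio between the human and crawler views."""
    human_view = fetch(url, BROWSER_UA)
    crawler_view = fetch(url, AI_CRAWLER_UA)
    return difflib.SequenceMatcher(None, human_view, crawler_view).ratio()

if __name__ == "__main__":
    ratio = cloaking_similarity("http://127.0.0.1:8000/")  # e.g. the cloaking sketch above
    verdict = "  (possible cloaking)" if ratio < 0.9 else ""
    print(f"similarity={ratio:.2f}{verdict}")
```

Legitimate sites may still vary output by user agent (mobile layouts, bot throttling), so a low ratio is a signal for manual review rather than proof of cloaking.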