

AI-targeted cloaking attack exploits AI crawlers

First reported: 29.10.2025 16:57
1 unique source, 1 article

Summary


AI security company SPLX has identified a new security issue in agentic web browsers such as OpenAI's ChatGPT Atlas and Perplexity, which exposes the underlying AI models to context-poisoning attacks via AI-targeted cloaking. Attackers serve different content to AI crawlers than to human visitors, manipulating the AI-generated summaries and overviews built from those pages. The technique can introduce misinformation and bias and skew the outcomes of AI-driven systems. Separately, the hCaptcha Threat Analysis Group (hTAG) analyzed browser agents against common abuse scenarios and found that they often execute risky tasks without safeguards, leaving them open to misuse by attackers. Because the deceptive content is shown only to AI crawlers, the attack can undermine trust in AI tools and manipulate what those tools present to users as reality.
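To make the mechanism concrete, the following is a minimal sketch (not taken from the SPLX report) of how AI-targeted cloaking works on the server side: the response body is chosen by inspecting the User-Agent header, so AI crawlers and human visitors get different pages for the same URL. The crawler markers and page contents are illustrative assumptions.

    # Minimal sketch of user-agent based cloaking (illustrative only).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Substrings an attacker might match to spot AI crawlers.
    # These identifiers are assumptions for illustration, not a complete list.
    AI_CRAWLER_MARKERS = ("GPTBot", "OAI-SearchBot", "ChatGPT-User", "PerplexityBot")

    HUMAN_PAGE = b"<html><body><h1>Review</h1><p>Accurate, balanced text shown to people.</p></body></html>"
    CLOAKED_PAGE = b"<html><body><h1>Review</h1><p>Misleading text served only to AI crawlers.</p></body></html>"

    class CloakingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ua = self.headers.get("User-Agent", "")
            # Same URL, different body: the poisoned page goes only to requests
            # whose User-Agent looks like an AI crawler.
            body = CLOAKED_PAGE if any(m in ua for m in AI_CRAWLER_MARKERS) else HUMAN_PAGE
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), CloakingHandler).serve_forever()

Real campaigns would key on published crawler user-agent strings or IP ranges; the point is that a trivial server-side branch is enough to feed an AI browser a different page than a person would ever see.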

Timeline

  1. 29.10.2025 16:57 · 1 article · 12d ago

    AI-targeted cloaking attack identified in agentic web browsers

    AI security company SPLX has identified a new security issue in agentic web browsers such as OpenAI's ChatGPT Atlas and Perplexity, which exposes the underlying AI models to context-poisoning attacks via AI-targeted cloaking. Attackers serve different content to AI crawlers than to human visitors, manipulating AI-generated summaries and overviews and introducing misinformation and bias into AI-driven systems. The hCaptcha Threat Analysis Group (hTAG) has also analyzed browser agents against common abuse scenarios, finding that these agents often execute risky tasks without safeguards, which leaves them open to misuse by attackers. A simple probe for this behavior is sketched after the timeline.

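As a rough cross-check (our own assumption, not a method described in the article), a site can be probed for user-agent based cloaking by requesting the same URL once as a browser and once as an AI crawler, then comparing the responses. The user-agent strings below are illustrative.

    # Rough cloaking probe: compare responses under two User-Agent strings.
    import hashlib
    import urllib.request

    BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"        # looks like a normal browser
    CRAWLER_UA = "Mozilla/5.0 (compatible; PerplexityBot/1.0)"      # looks like an AI crawler (illustrative)

    def fetch(url: str, user_agent: str) -> bytes:
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()

    def looks_cloaked(url: str) -> bool:
        human_view = fetch(url, BROWSER_UA)
        crawler_view = fetch(url, CRAWLER_UA)
        # Exact hash comparison catches verbatim content swaps; pages with dynamic
        # elements (timestamps, ads) need fuzzier diffing to avoid false positives.
        return hashlib.sha256(human_view).hexdigest() != hashlib.sha256(crawler_view).hexdigest()

    if __name__ == "__main__":
        print(looks_cloaked("http://127.0.0.1:8000/"))  # pairs with the server sketch above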

Information Snippets