CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

ShadowLeak: Undetectable Email Theft via AI Agents

First reported
Last updated
3 unique sources, 3 articles

Summary

Hide ▲

A new attack vector, dubbed ShadowLeak, allows hackers to invisibly steal emails from users who integrate AI agents like ChatGPT with their email inboxes. The attack exploits the lack of visibility into AI processing on cloud infrastructure, making it undetectable to the user. The vulnerability was discovered by Radware and reported to OpenAI, which addressed it in August 2025. The attack involves embedding malicious code in emails, which the AI agent processes and acts upon without user awareness. The attack leverages an indirect prompt injection hidden in email HTML, using techniques like tiny fonts, white-on-white text, and layout tricks to remain undetected by the user. The attack can be extended to any connector that ChatGPT supports, including Box, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint. A new variant of this attack, dubbed ZombieAgent, was discovered by Zvika Babo at Radware. This technique exploits a weakness in OpenAI's URL-modification defenses by leveraging pre-constructed, static URLs to exfiltrate sensitive data from ChatGPT one character at a time. The attack flow involves extracting sensitive data, normalizing it, and exfiltrating it character by character by opening pre-defined URLs in sequence. The vulnerability was reported to OpenAI via BugCrowd in September 2025 and fixed in mid-December 2025.

Timeline

  1. 08.01.2026 18:45 1 articles · 23h ago

    ZombieAgent: New Prompt Injection Technique Discovered

    A new variant of the ShadowLeak attack, dubbed ZombieAgent, was discovered by Zvika Babo at Radware. This technique exploits a weakness in OpenAI's URL-modification defenses by leveraging pre-constructed, static URLs to exfiltrate sensitive data from ChatGPT one character at a time. The vulnerability was reported to OpenAI via BugCrowd in September 2025 and fixed in mid-December 2025.

    Show sources
  2. 19.09.2025 22:07 3 articles · 3mo ago

    ShadowLeak: Undetectable Email Theft via AI Agents Discovered

    Radware researchers discovered a new attack vector, ShadowLeak, that allows undetectable email theft via AI agents like ChatGPT. The attack exploits the lack of visibility into AI processing on cloud infrastructure, making it undetectable to the user. The vulnerability was reported to OpenAI in June 2025, and OpenAI addressed the issue in August 2025. The exact details of the fix remain unclear. The attack involves embedding malicious code in emails, which the AI agent processes and acts upon, exfiltrating sensitive data to an attacker-controlled server. The attack leverages an indirect prompt injection hidden in email HTML, using techniques like tiny fonts, white-on-white text, and layout tricks to remain undetected by the user. The attack can be extended to any connector that ChatGPT supports, including Box, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint. The exfiltration in ShadowLeak occurs directly within OpenAI's cloud environment, bypassing traditional security controls.

    Show sources

Information Snippets

Similar Happenings

ChatGPT worldwide outage and data loss reported

OpenAI's ChatGPT service experienced a worldwide outage, with users reporting errors and disappearing conversations. The cause of the disruption remains unclear, and OpenAI has acknowledged the issue and is working on a fix. Approximately 30,000 users were affected, encountering errors such as 'something seems to have gone wrong' and 'There was an error generating a response.' Some users also noted that their conversations disappeared and new messages failed to load. ChatGPT has started to come back online as of 15:14 ET, but it's still slow.

Indirect Prompt Injection Vulnerabilities in ChatGPT Models

Researchers from Tenable discovered seven vulnerabilities in OpenAI's ChatGPT models (GPT-4o and GPT-5) that enable attackers to extract personal information from users' memories and chat histories. These vulnerabilities allow for indirect prompt injection attacks, which manipulate the AI's behavior to execute unintended or malicious actions. OpenAI has addressed some of these issues, but several vulnerabilities persist. The vulnerabilities include indirect prompt injection via trusted sites, zero-click indirect prompt injection in search contexts, and prompt injection via crafted links. Other techniques involve bypassing safety mechanisms, injecting malicious content into conversations, hiding malicious prompts, and poisoning user memories. The vulnerabilities affect the 'bio' feature, which allows ChatGPT to remember user details and preferences across chat sessions, and the 'open_url' command-line function, which leverages SearchGPT to access and render website content. Attackers can exploit the 'url_safe' endpoint by using Bing click-tracking URLs to lure users to phishing sites or exfiltrate user data. These findings highlight the risks associated with exposing AI chatbots to external tools and systems, which expand the attack surface for threat actors. The vulnerabilities stem from how ChatGPT ingests and processes instructions from external sources, allowing attackers to exploit these flaws through various methods. The most concerning issue is a zero-click vulnerability, where simply asking ChatGPT a benign question can trigger an attack if the search results include a poisoned website.

AI-targeted cloaking attack exploits AI crawlers

AI security company SPLX has identified a new security issue in agentic web browsers like OpenAI ChatGPT Atlas and Perplexity. This issue exposes underlying AI models to context poisoning attacks through AI-targeted cloaking. Attackers can serve different content to AI crawlers compared to human users, manipulating AI-generated summaries and overviews. This technique can introduce misinformation, bias, and influence the outcomes of AI-driven systems. The hCaptcha Threat Analysis Group (hTAG) has also analyzed browser agents against common abuse scenarios, revealing that these agents often execute risky tasks without safeguards. This makes them vulnerable to misuse by attackers. The attack can undermine trust in AI tools and manipulate reality by serving deceptive content to AI crawlers.

CometJacking attack exploits Comet browser to steal emails

A new attack called CometJacking exploits URL parameters to pass hidden instructions to Perplexity's Comet AI browser, allowing access to sensitive data from connected services like email and calendar. The attack does not require credentials or user interaction and bypasses Perplexity's data protections using Base64-encoding tricks. Comet is an agentic AI browser that can autonomously browse the web and manage tasks such as emails, shopping, and booking tickets. Despite known security gaps, its adoption is increasing. The CometJacking attack was discovered by LayerX researchers, who reported it to Perplexity in late August. Perplexity responded that it did not identify an issue, marking the report as 'not applicable.' The attack involves a five-step process where the URL instructs the Comet browser's AI to execute a hidden prompt, highlighting new security risks introduced by AI-native tools.

ForcedLeak Vulnerability in Salesforce Agentforce Exploited via AI Prompt Injection

A critical vulnerability in Salesforce Agentforce, named ForcedLeak, allowed attackers to exfiltrate sensitive CRM data through indirect prompt injection. The flaw affected organizations using Salesforce Agentforce with Web-to-Lead functionality enabled. The vulnerability was discovered and reported by Noma Security on July 28, 2025. Salesforce has since patched the issue and implemented additional security measures, including regaining control of an expired domain and preventing AI agent output from being sent to untrusted domains. The exploit involved manipulating the Description field in Web-to-Lead forms to execute malicious instructions, leading to data leakage. Salesforce has enforced a Trusted URL allowlist to mitigate the risk of similar attacks in the future. The ForcedLeak vulnerability is a critical vulnerability chain with a CVSS score of 9.4, described as a cross-site scripting (XSS) play for the AI era. The exploit involves embedding a malicious prompt in a Web-to-Lead form, which the AI agent processes, leading to data leakage. The attack could potentially lead to the exfiltration of internal communications, business strategy insights, and detailed customer information. Salesforce is addressing the root cause of the vulnerability by implementing more robust layers of defense for their models and agents.