Stealthy data exfiltration vulnerability in ChatGPT via malicious prompt and DNS side channel
Summary
Hide ▲
Show ▼
A security flaw in ChatGPT allowed attackers to exfiltrate sensitive user data—including prompts, messages, and uploaded files—through a single malicious prompt and DNS side channel. The issue stemmed from a hidden outbound communication path in ChatGPT’s isolated runtime environment, enabling covert transmission of data to external servers. Exploitation did not require complex attack chains; attackers could trick users into pasting malicious prompts via social engineering. OpenAI deployed a patch on February 20 after receiving a responsible disclosure from Check Point researchers. The scope of potential exposure included corporate credentials, personal health records, and other sensitive information processed by ChatGPT users.
Timeline
-
31.03.2026 16:01 1 articles · 4h ago
ChatGPT runtime isolation bypass enables covert data exfiltration via single prompt
Check Point researchers discovered a vulnerability in ChatGPT allowing attackers to exfiltrate sensitive user data—including messages, uploaded files, and prompts—using a single malicious prompt that activated a hidden DNS side channel from the containerized runtime to external servers. Exploitation bypassed privacy protections by leveraging the model’s assumption of runtime isolation, enabling data transmission without user awareness. A patch was deployed by OpenAI on February 20. Social engineering tactics, such as disguising malicious prompts as productivity aids, were identified as a likely attack vector.
Show sources
- ChatGPT Security Issue Enabled Data Theft via Single Prompt — www.infosecurity-magazine.com — 31.03.2026 16:01
Information Snippets
-
The vulnerability allowed data exfiltration via a DNS side channel originating from the ChatGPT containerized runtime environment to an attacker-controlled server.
First reported: 31.03.2026 16:011 source, 1 articleShow sources
- ChatGPT Security Issue Enabled Data Theft via Single Prompt — www.infosecurity-magazine.com — 31.03.2026 16:01
-
Exploitation required only a single malicious prompt executed within a regular ChatGPT conversation, bypassing built-in privacy protections.
First reported: 31.03.2026 16:011 source, 1 articleShow sources
- ChatGPT Security Issue Enabled Data Theft via Single Prompt — www.infosecurity-magazine.com — 31.03.2026 16:01
-
A proof-of-concept demonstrated exfiltration of a PDF containing laboratory test results with patient data using a socially engineered prompt.
First reported: 31.03.2026 16:011 source, 1 articleShow sources
- ChatGPT Security Issue Enabled Data Theft via Single Prompt — www.infosecurity-magazine.com — 31.03.2026 16:01
-
OpenAI deployed a security update on February 20 to remediate the issue after receiving a report from Check Point researchers.
First reported: 31.03.2026 16:011 source, 1 articleShow sources
- ChatGPT Security Issue Enabled Data Theft via Single Prompt — www.infosecurity-magazine.com — 31.03.2026 16:01
-
Attackers could distribute malicious prompts as productivity tips on websites or social media to deceive users into executing them.
First reported: 31.03.2026 16:011 source, 1 articleShow sources
- ChatGPT Security Issue Enabled Data Theft via Single Prompt — www.infosecurity-magazine.com — 31.03.2026 16:01