Google Gemini AI Vulnerabilities Allowing Prompt Injection and Data Exfiltration
Summary
Hide ▲
Show ▼
Researchers disclosed three vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. Additionally, a zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025, allowing attackers to exfiltrate corporate data via indirect prompt injection. Google addressed this flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems. The vulnerabilities highlight the potential risks of AI tools being used as attack vectors rather than just targets. The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website. The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks. The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature. Google has made specific changes to mitigate each flaw, including rolling back vulnerable models, hardening search personalization features, and preventing data exfiltration from browsing in indirect prompt injections.
Timeline
-
10.12.2025 14:05 1 articles · 23h ago
GeminiJack zero-click vulnerability disclosed and patched
A zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025 by Noma Security. This flaw allowed attackers to exfiltrate corporate data via indirect prompt injection by exploiting the Retrieval-Augmented Generation (RAG) architecture. The attack chain involved content poisoning, triggering AI execution, and data exfiltration via a malicious image tag. Google addressed the flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems.
Show sources
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
30.09.2025 16:18 3 articles · 2mo ago
Google Gemini AI vulnerabilities disclosed and patched
Researchers disclosed three vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website. The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks. The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature. Google has made specific changes to mitigate each flaw, including rolling back vulnerable models, hardening search personalization features, and preventing data exfiltration from browsing in indirect prompt injections. Additionally, a zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025, allowing attackers to exfiltrate corporate data via indirect prompt injection. Google addressed this flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems.
Show sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
Information Snippets
-
Gemini Cloud Assist had a prompt injection flaw allowing attackers to exploit cloud services by injecting prompts in HTTP requests.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Search Personalization model had a search-injection flaw enabling attackers to manipulate AI behavior and leak user data.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Browsing Tool had an indirect prompt injection flaw allowing data exfiltration to external servers.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The vulnerabilities were collectively named the Gemini Trifecta by Tenable.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
Google has patched the vulnerabilities and added hardening measures to prevent similar attacks.
First reported: 30.09.2025 13:203 sources, 3 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
The flaws could have been used to query sensitive data and create hyperlinks containing this data.
First reported: 30.09.2025 13:203 sources, 3 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks.
First reported: 30.09.2025 13:201 source, 1 articleShow sources
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
A zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', allowed attackers to exfiltrate corporate data via indirect prompt injection.
First reported: 10.12.2025 14:051 source, 1 articleShow sources
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
The flaw was discovered in June 2025 by Noma Security and reported to Google the same day.
First reported: 10.12.2025 14:051 source, 1 articleShow sources
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
GeminiJack exploited the Retrieval-Augmented Generation (RAG) architecture in Gemini Enterprise to retrieve and exfiltrate sensitive data.
First reported: 10.12.2025 14:051 source, 1 articleShow sources
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
The attack chain involved content poisoning, triggering AI execution, and data exfiltration via a malicious image tag.
First reported: 10.12.2025 14:051 source, 1 articleShow sources
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
-
Google separated Vertex AI Search from Gemini Enterprise and updated their interaction with retrieval and indexing systems to fix the flaw.
First reported: 10.12.2025 14:051 source, 1 articleShow sources
- Google Fixes Zero Click Gemini Enterprise Flaw That Exposed Corporate Data — www.infosecurity-magazine.com — 10.12.2025 14:05
Similar Happenings
Google Enhances Chrome Agentic AI Security Against Indirect Prompt Injection Attacks
Google is introducing new security measures to protect Chrome's agentic AI capabilities from indirect prompt injection attacks. These protections include a new AI model called the User Alignment Critic, expanded site isolation policies, additional user confirmation steps for sensitive actions, and a prompt injection detection classifier. The User Alignment Critic independently evaluates the agent's actions, ensuring they align with the user's goals. Google is also enforcing Agent Origin Sets to limit the agent's access to relevant data origins and has developed automated red-teaming systems to test defenses. The company has announced bounty payments for security researchers to further enhance the system's robustness.
AI-Powered Malware Families Deployed in the Wild
Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These malware families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence. PromptFlux, an experimental VBScript dropper, uses Google's LLM Gemini to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. The malware is under development or testing phase and is assessed to be financially motivated. PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine. The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses. Various threat actors, including those from China, Iran, and North Korea, have been observed abusing AI models like Gemini across different stages of the attack lifecycle. The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.
Indirect Prompt Injection Vulnerabilities in ChatGPT Models
Researchers from Tenable discovered seven vulnerabilities in OpenAI's ChatGPT models (GPT-4o and GPT-5) that enable attackers to extract personal information from users' memories and chat histories. These vulnerabilities allow for indirect prompt injection attacks, which manipulate the AI's behavior to execute unintended or malicious actions. OpenAI has addressed some of these issues, but several vulnerabilities persist. The vulnerabilities include indirect prompt injection via trusted sites, zero-click indirect prompt injection in search contexts, and prompt injection via crafted links. Other techniques involve bypassing safety mechanisms, injecting malicious content into conversations, hiding malicious prompts, and poisoning user memories. The vulnerabilities affect the 'bio' feature, which allows ChatGPT to remember user details and preferences across chat sessions, and the 'open_url' command-line function, which leverages SearchGPT to access and render website content. Attackers can exploit the 'url_safe' endpoint by using Bing click-tracking URLs to lure users to phishing sites or exfiltrate user data. These findings highlight the risks associated with exposing AI chatbots to external tools and systems, which expand the attack surface for threat actors. The vulnerabilities stem from how ChatGPT ingests and processes instructions from external sources, allowing attackers to exploit these flaws through various methods. The most concerning issue is a zero-click vulnerability, where simply asking ChatGPT a benign question can trigger an attack if the search results include a poisoned website.
AI-targeted cloaking attack exploits AI crawlers
AI security company SPLX has identified a new security issue in agentic web browsers like OpenAI ChatGPT Atlas and Perplexity. This issue exposes underlying AI models to context poisoning attacks through AI-targeted cloaking. Attackers can serve different content to AI crawlers compared to human users, manipulating AI-generated summaries and overviews. This technique can introduce misinformation, bias, and influence the outcomes of AI-driven systems. The hCaptcha Threat Analysis Group (hTAG) has also analyzed browser agents against common abuse scenarios, revealing that these agents often execute risky tasks without safeguards. This makes them vulnerable to misuse by attackers. The attack can undermine trust in AI tools and manipulate reality by serving deceptive content to AI crawlers.
Google Gemini Vulnerable to ASCII Smuggling Attacks
Google has decided not to address a new ASCII smuggling attack in Gemini, which can be exploited to trick the AI assistant into providing false information, altering its behavior, and silently poisoning its data. This vulnerability can be used to embed hidden text in Calendar invites or emails, potentially leading to identity spoofing and data extraction. The attack leverages special characters from the Tags Unicode block to introduce invisible payloads that are processed by large-language models (LLMs). The risk is heightened due to Gemini's integration with Google Workspace and its ability to perform tasks autonomously. Researcher Viktor Markopoulos demonstrated the vulnerability in Gemini, DeepSeek, and Grok, while Claude, ChatGPT, and Microsoft CoPilot were found to be secure against such attacks.