Google Gemini AI Vulnerabilities Allowing Prompt Injection and Data Exfiltration
Summary
Hide ▲
Show ▼
Researchers disclosed three vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. The vulnerabilities could have been exploited to inject malicious prompts, manipulate AI behavior, and exfiltrate user data. The flaws highlight the potential risks of AI tools being used as attack vectors rather than just targets. The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website. The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks. The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature. Google has made specific changes to mitigate each flaw, including rolling back vulnerable models, hardening search personalization features, and preventing data exfiltration from browsing in indirect prompt injections.
Timeline
-
30.09.2025 16:18 2 articles · 3h ago
Google Gemini AI vulnerabilities disclosed and patched
Researchers disclosed three vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website. The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks. The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature. Google has made specific changes to mitigate each flaw, including rolling back vulnerable models, hardening search personalization features, and preventing data exfiltration from browsing in indirect prompt injections.
Show sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
Information Snippets
-
Gemini Cloud Assist had a prompt injection flaw allowing attackers to exploit cloud services by injecting prompts in HTTP requests.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Search Personalization model had a search-injection flaw enabling attackers to manipulate AI behavior and leak user data.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Browsing Tool had an indirect prompt injection flaw allowing data exfiltration to external servers.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The vulnerabilities were collectively named the Gemini Trifecta by Tenable.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
Google has patched the vulnerabilities and added hardening measures to prevent similar attacks.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The flaws could have been used to query sensitive data and create hyperlinks containing this data.
First reported: 30.09.2025 13:202 sources, 2 articlesShow sources
- Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits — thehackernews.com — 30.09.2025 16:18
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website.
First reported: 30.09.2025 13:201 source, 1 articleShow sources
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks.
First reported: 30.09.2025 13:201 source, 1 articleShow sources
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
-
The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature.
First reported: 30.09.2025 13:201 source, 1 articleShow sources
- 'Trifecta' of Google Gemini Flaws Turn AI into Attack Vehicle — www.darkreading.com — 30.09.2025 13:20
Similar Happenings
ForcedLeak Vulnerability in Salesforce Agentforce Exploited via AI Prompt Injection
A critical vulnerability in Salesforce Agentforce, named ForcedLeak, allowed attackers to exfiltrate sensitive CRM data through indirect prompt injection. The flaw affected organizations using Salesforce Agentforce with Web-to-Lead functionality enabled. The vulnerability was discovered and reported by Noma Security on July 28, 2025. Salesforce has since patched the issue and implemented additional security measures, including regaining control of an expired domain and preventing AI agent output from being sent to untrusted domains. The exploit involved manipulating the Description field in Web-to-Lead forms to execute malicious instructions, leading to data leakage. Salesforce has enforced a Trusted URL allowlist to mitigate the risk of similar attacks in the future. The ForcedLeak vulnerability is a critical vulnerability chain with a CVSS score of 9.4, described as a cross-site scripting (XSS) play for the AI era. The exploit involves embedding a malicious prompt in a Web-to-Lead form, which the AI agent processes, leading to data leakage. The attack could potentially lead to the exfiltration of internal communications, business strategy insights, and detailed customer information. Salesforce is addressing the root cause of the vulnerability by implementing more robust layers of defense for their models and agents.
AI systems vulnerable to data-theft via hidden prompts in downscaled images
AI systems remain vulnerable to data-theft via hidden prompts in downscaled images. Researchers from Trail of Bits have demonstrated a novel attack vector that exploits AI systems by embedding hidden prompts in images. These prompts become visible when images are downscaled, enabling data theft or unauthorized actions. The attack leverages image resampling algorithms to reveal hidden instructions, which are then executed by the AI model. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The attack works by crafting images with specific patterns that emerge during downscaling. These patterns contain instructions that the AI model interprets as part of the user's input, leading to potential data leakage or other malicious activities. The researchers have developed an open-source tool, Anamorpher, to create images for testing and demonstrating the attack. To mitigate the risk, Trail of Bits recommends implementing dimension restrictions on image uploads, providing users with previews of downscaled images, and seeking explicit user confirmation for sensitive tool calls.
Growing threat landscape for AI agents and non-human identities
The rapid adoption of AI agents and non-human identities (NHIs) presents significant security challenges. These entities are increasingly targeted by adversaries, with known attack vectors growing rapidly. The unique characteristics of AI agents, such as autonomy and extensive access, exacerbate these risks. Security experts warn of a closing window of opportunity to secure these tools and data. The threat landscape includes data poisoning, jailbreaking, prompt injection, and the exploitation of abandoned agents. Recent research highlights the potential for malicious proxy settings and zero-click vulnerabilities. Proactive measures are essential to mitigate these risks and build robust defenses.
Zero-click exploit targets AI enterprise agents
AI enterprise agents, integrated with various enterprise environments, are vulnerable to zero-click exploits. Attackers can take over these agents using only a user's email address, gaining access to sensitive data and manipulating users. The exploit affects major AI assistants from Microsoft, Google, OpenAI, Salesforce, and others. Organizations must adopt dedicated security programs to manage ongoing risks associated with AI agents. Current security approaches focusing on prompt injection have proven ineffective. The exploit highlights the need for defense-in-depth strategies and hard boundaries to mitigate risks. Organizations are advised to assume breaches and apply lessons learned from past security challenges.
GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposed
Researchers have demonstrated a jailbreak technique to bypass GPT-5's ethical guardrails, leveraging the Echo Chamber and narrative-driven steering methods. This technique can produce harmful procedural content by framing it within a story, avoiding direct malicious prompts. Additionally, zero-click AI agent attacks have been detailed, targeting cloud and IoT systems through indirect prompt injections. These attacks exploit vulnerabilities in AI connectors and integrations, leading to data exfiltration and unauthorized access. The findings highlight the risks associated with integrating AI models with external systems, emphasizing the need for robust security measures and continuous red teaming to mitigate these threats. The Echo Chamber and Storytelling technique was executed in 24 hours after the release of GPT-5, demonstrating how attackers can increase their effectiveness by combining Echo Chamber with complementary strategies.