Google Launches AI Vulnerability Reward Program
Summary
Google has launched an AI Vulnerability Reward Program (AI VRP), offering up to $30,000 for identifying and reporting flaws in its AI systems. The program targets high-impact issues in key AI products, including Google Search, Gemini Apps, and Google Workspace core applications, and aims to strengthen the security of Google's AI products by leveraging external security research. Rewards are tiered by the severity and impact of the vulnerabilities discovered. Google has already paid over $430,000 in AI-product-related rewards under the earlier Abuse VRP, and the new AI VRP, shaped by feedback from researchers who participated in that program, simplifies reporting by moving AI-related issues into a dedicated program with a unified reward table covering both abuse and security issues.
Timeline
- 07.10.2025 16:19 · 2 articles
Google Launches AI Vulnerability Reward Program
Google has paid over $430,000 in AI-product-related rewards since the Abuse VRP was created. Developed from feedback provided by researchers who participated in the Abuse VRP, the new AI VRP simplifies the reporting process by moving AI-related issues into a dedicated program with a unified reward table for both abuse and security issues, reviewed by a single reward panel.
Sources:
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
Information Snippets
- The AI Vulnerability Reward Program focuses on Google's most impactful AI products, such as Google Search, Gemini Apps, and Google Workspace core applications.
First reported: 07.10.2025 16:19 · 2 sources, 2 articles
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- Rewards can reach up to $30,000 for an individual high-quality report when novelty bonus multipliers are applied (see the worked illustration after this snippet list).
First reported: 07.10.2025 16:19 · 2 sources, 2 articles
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- Standard security flaw reports can earn up to $20,000, while sensitive data exfiltration bugs can fetch $15,000.
First reported: 07.10.2025 16:19 · 2 sources, 2 articles
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- Phishing enablement and model theft issues are rewarded with up to $5,000.
First reported: 07.10.2025 16:19 · 2 sources, 2 articles
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- The program extends Google's Abuse Vulnerability Reward Program (VRP) to foster third-party discovery and reporting of AI-specific issues.
First reported: 07.10.2025 16:19 · 2 sources, 2 articles
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- Google awarded nearly $12 million in bug bounty rewards to 660 researchers in 2024.
First reported: 07.10.2025 16:19 · 1 source, 1 article
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Since 2010, Google has awarded $65 million in bug bounties, with the highest reward exceeding $110,000.
First reported: 07.10.2025 16:19 · 1 source, 1 article
- Google's new AI bug bounty program pays up to $30,000 for flaws — www.bleepingcomputer.com — 07.10.2025 16:19
- Google has paid over $430,000 in AI-product-related rewards since the Abuse VRP was created.
First reported: 10.10.2025 14:20 · 1 source, 1 article
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- The AI VRP includes a unified reward table for both abuse and security issues, reviewed by a single reward panel.
First reported: 10.10.2025 14:20 · 1 source, 1 article
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- Google encourages researchers to use in-product functionality for reporting content-based issues.
First reported: 10.10.2025 14:20 · 1 source, 1 article
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
- Rewards left unclaimed after 12 months will be donated to a charity chosen by Google.
First reported: 10.10.2025 14:20 · 1 source, 1 article
- Google Launches AI Bug Bounty with $30,000 Top Reward — www.infosecurity-magazine.com — 10.10.2025 14:20
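The relationship between the $20,000 standard cap and the $30,000 maximum is easiest to see as simple arithmetic. The sketch below is purely illustrative: the tier amounts are those reported above, while the 1.5x combined quality/novelty multiplier and the helper name max_reward are assumptions for illustration, not figures or terminology published by Google.

```python
# Illustrative sketch of how the reported AI VRP reward tiers could combine
# with bonus multipliers. Tier amounts come from the snippets above; the
# 1.5x quality/novelty multiplier is an assumption used only to show how a
# $20,000 base report could reach the $30,000 maximum.

# Maximum base rewards by issue category (USD), per the reporting above.
BASE_REWARDS = {
    "security_flaw": 20_000,                  # standard security flaw report
    "sensitive_data_exfiltration": 15_000,
    "phishing_enablement": 5_000,
    "model_theft": 5_000,
}

# Assumed combined multiplier for exceptional report quality and novelty.
QUALITY_NOVELTY_MULTIPLIER = 1.5

def max_reward(category: str, exceptional: bool = False) -> int:
    """Return the illustrative maximum payout for a report category."""
    base = BASE_REWARDS[category]
    return int(base * QUALITY_NOVELTY_MULTIPLIER) if exceptional else base

if __name__ == "__main__":
    # A top-tier security flaw report with the assumed bonus reaches $30,000.
    print(max_reward("security_flaw", exceptional=True))  # 30000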
Similar Happenings
Google's CodeMender AI Automatically Patches Vulnerabilities in Code
Google's DeepMind division has released CodeMender, an AI-powered agent that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. CodeMender is designed to be both reactive and proactive, fixing new vulnerabilities as soon as they are spotted and rewriting existing codebases to eliminate classes of vulnerabilities. The AI agent leverages Google's Gemini Deep Think models and a large language model (LLM)-based critique tool to debug, flag, and fix security vulnerabilities. Over the past six months, CodeMender has upstreamed 72 security fixes to open-source projects, including some with up to 4.5 million lines of code. Google also introduced an AI Vulnerability Reward Program (AI VRP) to incentivize reporting AI-related issues in its products, with rewards up to $30,000.
Tree of AST: AI-enhanced vulnerability discovery framework presented at Black Hat USA 2025
Two high school students, Sasha Zyuzin and Ruikai Peng, presented a novel framework for vulnerability discovery at Black Hat USA 2025. The framework, Tree of AST, combines traditional static analysis with AI to automate repetitive tasks in vulnerability hunting while maintaining human oversight. The approach leverages Google DeepMind's Tree of Thoughts methodology to mimic human reasoning in bug discovery. The framework aims to reduce manual effort but not eliminate it entirely, focusing on autonomous decision-making with human verification to prevent false positives. The students discussed the double-edged nature of AI in security, noting that while AI can improve code quality, it can also introduce security risks when prioritizing functionality over security. They highlighted the importance of validating results and the potential risks of 'vibe coding'—using AI to generate code without sufficient security considerations. The framework is designed to complement existing security products rather than replace them, enhancing traditional methods with AI capabilities.
Zero-click exploit targets AI enterprise agents
AI enterprise agents, integrated with various enterprise environments, are vulnerable to zero-click exploits. Attackers can take over these agents using only a user's email address, gaining access to sensitive data and manipulating users. The exploit affects major AI assistants from Microsoft, Google, OpenAI, Salesforce, and others. Organizations must adopt dedicated security programs to manage ongoing risks associated with AI agents. Current security approaches focusing on prompt injection have proven ineffective. The exploit highlights the need for defense-in-depth strategies and hard boundaries to mitigate risks. Organizations are advised to assume breaches and apply lessons learned from past security challenges.