
Google Launches AI Vulnerability Reward Program

First reported: 07.10.2025 16:19 · 2 unique sources, 2 articles

Summary


Google has launched an AI Vulnerability Reward Program (AI VRP), offering up to $30,000 for identifying and reporting flaws in its AI systems. The program targets high-impact issues in key AI products, including Google Search, Gemini Apps, and Google Workspace core applications, and aims to strengthen the security of Google's AI products by leveraging external security research. Rewards are tiered by the severity and impact of the vulnerability discovered. Google has paid over $430,000 in AI-product-related rewards since the earlier Abuse VRP was created, and the new program was shaped by feedback from researchers who participated in it. The AI VRP simplifies reporting by moving AI-related issues out of the Abuse VRP into a dedicated program with a unified reward table covering both abuse and security issues.
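The announcement describes the tiers only at a high level. As a rough sketch of how a unified reward-table lookup could be modeled, consider the following, where the tier names, severity labels, and every amount except the $30,000 ceiling are hypothetical placeholders rather than Google's published figures.

```python
# Illustrative sketch of a unified reward-table lookup (NOT Google's actual
# published table). Tier names, severity labels, and all amounts except the
# $30,000 ceiling mentioned in the announcement are hypothetical.
REWARD_TABLE = {
    # (product_tier, severity) -> reward in USD
    ("flagship", "critical"): 30_000,  # e.g. Search, Gemini Apps, Workspace core
    ("flagship", "high"):     15_000,  # hypothetical amount
    ("standard", "critical"): 10_000,  # hypothetical tier and amount
    ("standard", "high"):      5_000,  # hypothetical amount
}

def reward_for(product_tier: str, severity: str) -> int:
    """Look up the payout for a reported issue; unknown combinations pay nothing."""
    return REWARD_TABLE.get((product_tier, severity), 0)

print(reward_for("flagship", "critical"))  # -> 30000
```

In Google's actual program, a single reward panel reviews both abuse and security reports against one table, rather than routing them to separate panels.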

Timeline

  1. 07.10.2025 16:19 · 2 articles

    Google Launches AI Vulnerability Reward Program

    Google has paid over $430,000 in AI-product-related rewards since the Abuse VRP was created. The new AI VRP, developed from feedback by researchers who participated in the Abuse VRP, simplifies reporting by moving AI-related issues into a dedicated program with a unified reward table for both abuse and security issues, reviewed by a single reward panel.



Similar Happenings

Google's CodeMender AI Automatically Patches Vulnerabilities in Code

Google's DeepMind division has released CodeMender, an AI-powered agent that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. CodeMender is designed to be both reactive and proactive: it fixes new vulnerabilities as soon as they are spotted and rewrites existing codebases to eliminate entire classes of vulnerabilities. The agent leverages Google's Gemini Deep Think models and a large language model (LLM)-based critique tool to debug, flag, and fix security flaws. Over the past six months, CodeMender has upstreamed 72 security fixes to open-source projects, including projects with up to 4.5 million lines of code. Google also introduced an AI Vulnerability Reward Program (AI VRP) to incentivize reporting AI-related issues in its products, with rewards of up to $30,000.
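Google has not published CodeMender's internals, but the generate-then-critique design it describes can be sketched roughly as follows. Here propose_patch and critique are stand-ins for the Gemini Deep Think models and the LLM-based critique tool; everything in the sketch is illustrative, not Google's implementation.

```python
# Rough sketch of a generate-then-critique patching loop in the spirit of
# CodeMender's described design: propose a fix, have a second model validate
# it, and retry on rejection. The model calls are stubbed placeholders.
from dataclasses import dataclass

@dataclass
class Patch:
    diff: str
    rationale: str

def propose_patch(vulnerable_code: str, finding: str) -> Patch:
    """Stand-in for the patch-generating model (e.g. Gemini Deep Think)."""
    return Patch(diff=f"--- fix for: {finding}", rationale="bounds check added")

def critique(code: str, patch: Patch) -> list[str]:
    """Stand-in for the LLM-based critique tool; returns objections, if any."""
    return []  # an empty list means the patch passes review

def mend(vulnerable_code: str, finding: str, max_rounds: int = 3) -> Patch | None:
    """Iterate propose -> critique until the patch passes or we give up."""
    feedback = finding
    for _ in range(max_rounds):
        patch = propose_patch(vulnerable_code, feedback)
        objections = critique(vulnerable_code, patch)
        if not objections:
            return patch                      # ready to upstream for human review
        feedback = "; ".join(objections)      # fold objections into the next attempt
    return None
```

The key design point reflected here is that no patch is accepted on the generator's say-so alone; a separate critic gates every candidate before it is surfaced.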

Tree of AST: AI-enhanced vulnerability discovery framework presented at Black Hat USA 2025

Two high school students, Sasha Zyuzin and Ruikai Peng, presented a novel vulnerability-discovery framework at Black Hat USA 2025. The framework, Tree of AST, combines traditional static analysis with AI to automate repetitive tasks in vulnerability hunting while maintaining human oversight. The approach leverages Google DeepMind's Tree of Thoughts methodology to mimic human reasoning in bug discovery. It aims to reduce manual effort rather than eliminate it, pairing autonomous decision-making with human verification to prevent false positives. The students also discussed the double-edged nature of AI in security: while AI can improve code quality, it can introduce security risks when functionality is prioritized over security. They stressed the importance of validating results and warned of the risks of 'vibe coding', using AI to generate code without sufficient security consideration. The framework is designed to complement existing security products rather than replace them, enhancing traditional methods with AI capabilities.
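The talk summary gives only the outline of the framework. A minimal sketch of the underlying idea, a Tree-of-Thoughts-style search that expands the most suspicious AST subtrees first and leaves final triage to a human, might look like this; the scoring heuristic and all names are assumptions, not the authors' code.

```python
# Minimal sketch of a Tree-of-Thoughts-style search over a Python AST for
# suspicious code paths, with human verification before anything is reported.
# The scoring model and beam width are hypothetical illustrations of the
# described approach, not the authors' framework.
import ast
import heapq

def suspicion_score(node: ast.AST) -> float:
    """Stand-in for an LLM scoring how bug-prone a node looks (hypothetical)."""
    return 1.0 if isinstance(node, ast.Call) else 0.1

def tree_of_ast(source: str, beam_width: int = 3) -> list[ast.AST]:
    """Expand the most suspicious subtrees first, keeping a bounded frontier."""
    root = ast.parse(source)
    frontier = [(-suspicion_score(root), 0, root)]  # max-heap via negated score
    candidates, counter = [], 1
    while frontier:
        _, _, node = heapq.heappop(frontier)
        children = list(ast.iter_child_nodes(node))
        if not children:
            candidates.append(node)  # leaf: a concrete finding to triage
            continue
        for child in sorted(children, key=suspicion_score, reverse=True)[:beam_width]:
            heapq.heappush(frontier, (-suspicion_score(child), counter, child))
            counter += 1
    return candidates  # a human analyst verifies these to weed out false positives
```

The human-in-the-loop step at the end is the part the presenters emphasized: the search narrows the space, but a person confirms each finding before it is treated as a vulnerability.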

Zero-click exploit targets AI enterprise agents

AI enterprise agents integrated into enterprise environments are vulnerable to zero-click exploits: attackers can take over an agent using only a user's email address, gaining access to sensitive data and manipulating users. The exploit affects major AI assistants from Microsoft, Google, OpenAI, Salesforce, and others. Current security approaches focused on prompt injection have proven ineffective, so organizations are advised to adopt dedicated security programs to manage the ongoing risks of AI agents, apply defense-in-depth strategies with hard boundaries, assume breach, and draw on lessons learned from past security challenges.
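One concrete way to read the "hard boundaries" recommendation is a deny-by-default allowlist enforced outside the model, so that no injected instruction can trigger an unapproved side effect. The sketch below illustrates the pattern with hypothetical tool and resource names; it is a minimal example of the principle, not a vendor's API.

```python
# Sketch of a deny-by-default "hard boundary" enforced outside the model:
# the agent may only invoke explicitly allowlisted tools on allowlisted
# resources, regardless of what its prompt, or an attacker's email, says.
# Tool names and the policy shape are hypothetical.
ALLOWED: dict[str, set[str]] = {
    "calendar.read": {"own_calendar"},
    "email.send":    {"internal_domain"},
}

class PolicyViolation(Exception):
    pass

def guarded_call(tool: str, resource: str, invoke):
    """Run a tool call only if (tool, resource) is explicitly allowlisted."""
    if resource not in ALLOWED.get(tool, set()):
        raise PolicyViolation(f"blocked: {tool} on {resource}")
    return invoke()

# An injected instruction asking the agent to mail data externally fails
# here, in code, before any side effect can happen:
try:
    guarded_call("email.send", "external_address", lambda: "send mail")
except PolicyViolation as e:
    print(e)  # blocked: email.send on external_address
```

The design choice this reflects is moving enforcement out of the prompt layer entirely: the model can be fooled, but the boundary check cannot be talked out of its policy.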