
Google's CodeMender AI Automatically Patches Vulnerabilities in Code

1 unique source, 1 article

Summary


Google's DeepMind division has released CodeMender, an AI-powered agent that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. CodeMender is designed to be both reactive and proactive, fixing new vulnerabilities as soon as they are spotted and rewriting existing codebases to eliminate classes of vulnerabilities. The AI agent leverages Google's Gemini Deep Think models and a large language model (LLM)-based critique tool to debug, flag, and fix security vulnerabilities. Over the past six months, CodeMender has upstreamed 72 security fixes to open-source projects, including some with up to 4.5 million lines of code. Google also introduced an AI Vulnerability Reward Program (AI VRP) to incentivize reporting AI-related issues in its products, with rewards up to $30,000.

Timeline

  1. 07.10.2025 18:18 · 1 article

    Google Releases CodeMender AI for Automatic Vulnerability Patching

    Google's DeepMind division announced CodeMender, an AI-powered agent that automatically detects, patches, and rewrites vulnerable code. Over the past six months, CodeMender has upstreamed 72 security fixes to open-source projects. Google also introduced the AI Vulnerability Reward Program (AI VRP) to incentivize reporting AI-related issues in its products, with rewards up to $30,000. The Secure AI Framework (SAIF) has been updated to address new agentic security risks.



Similar Happenings

Google Launches AI Vulnerability Reward Program

Google has launched an AI Vulnerability Reward Program (AI VRP), offering up to $30,000 for identifying and reporting flaws in its AI systems. The program targets high-impact issues in key AI products, including Google Search, Gemini Apps, and the core Google Workspace applications, with tiered rewards based on the severity and impact of the vulnerabilities discovered. Google has already paid more than $430,000 in AI-product-related rewards. Developed from feedback by researchers who participated in the Abuse VRP, the AI VRP simplifies reporting by consolidating AI-related issues under the new program and introduces a unified reward table covering both abuse and security issues.