
DARPA's AIxCC Competition Demonstrates AI's Potential in Securing Open Source Software

📰 1 unique source, 1 article

Summary


DARPA's AI Cyber Challenge (AIxCC) concluded with significant advances in using AI to secure open source software. Competing teams built cyber reasoning systems (CRSes) that automatically identified and patched vulnerabilities in open source code: in the final round, the systems discovered 54 unique synthetic vulnerabilities (patching 43) and 18 real ones (patching 11). The top teams received substantial prizes, and all finalist CRSes will be released as open source. The technology developed during AIxCC is intended to harden the open source software that underpins critical infrastructure.

Timeline

  1. 21.08.2025 16:00 📰 1 article

    DARPA's AIxCC Competition Concludes with Significant Advancements in AI-Driven Cybersecurity

    The AI Cyber Challenge (AIxCC) concluded with teams' cyber reasoning systems (CRSes) identifying and patching both synthetic and real vulnerabilities in open source software: 54 unique synthetic vulnerabilities were discovered (43 patched), along with 18 real ones (11 patched). The winners received substantial prizes, and all finalist CRSes will be made open source to encourage widespread use and improvement.


Information Snippets

  • AIxCC was a two-year program focused on using AI to secure open source technology underlying critical infrastructure.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • Teams developed cyber reasoning systems (CRSes) to identify and generate patches for vulnerabilities; a minimal sketch of such a loop follows this list.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • In the final competition, CRSes discovered 54 unique synthetic vulnerabilities and patched 43.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • Teams also discovered 18 additional real vulnerabilities and provided 11 patches during the competition.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • Each competition task cost an average of $152 to solve, far below typical bug bounty payouts.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • The top three teams were Team Atlanta, Trail of Bits, and Theori, receiving $4 million, $3 million, and $1.5 million, respectively.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • All finalist teams' CRSes will be made available for open source use.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
  • The competition aimed to improve the security of open source software used in critical infrastructure.

    First reported: 21.08.2025 16:00
    📰 1 source, 1 article
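The finalist architectures are not described above in detail, so the following is only a minimal sketch of the find-patch-validate loop a CRS automates. Every name in it (find_crashes, llm_propose_patch, reproduces, run_tests) is a hypothetical stand-in, not an API from any competing system.

```python
# Minimal sketch of a cyber reasoning system (CRS) loop; all helpers are
# hypothetical stubs illustrating the stages, not any finalist's implementation.
from dataclasses import dataclass

@dataclass
class Crash:
    input_data: bytes   # the fuzzer input that triggers the bug
    stack_trace: str    # used to localize the fault in the source

def find_crashes(target: str) -> list[Crash]:
    """Stage 1: fuzz the target and collect crashing inputs (stubbed)."""
    return []

def llm_propose_patch(source: str, crash: Crash) -> str:
    """Stage 2: ask a language model for a candidate fix (stubbed)."""
    return source

def reproduces(source: str, crash: Crash) -> bool:
    """Stage 3a: re-run the crashing input against the patched build (stubbed)."""
    return False

def run_tests(source: str) -> bool:
    """Stage 3b: the patch must also keep the existing test suite green."""
    return True

def crs_loop(target: str, source: str) -> list[str]:
    accepted = []
    for crash in find_crashes(target):
        candidate = llm_propose_patch(source, crash)
        # Accept a patch only if it stops the crash *and* breaks nothing else.
        if not reproduces(candidate, crash) and run_tests(candidate):
            accepted.append(candidate)
    return accepted
```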

Similar Happenings

HexStrike AI Exploits Citrix Vulnerabilities Disclosed in August 2025

Threat actors have begun using HexStrike AI to exploit Citrix vulnerabilities disclosed in August 2025. HexStrike AI, an AI-driven security platform designed to automate reconnaissance and vulnerability discovery for authorized red teaming, has been repurposed for malicious activity. The exploitation attempts target three Citrix vulnerabilities, and some threat actors are offering access to vulnerable NetScaler instances for sale on darknet forums. Hackers began discussing HexStrike AI on hacking forums within hours of the Citrix disclosures, and the tool has been used to automate the full exploitation chain: scanning for vulnerable instances, crafting exploits, delivering payloads, and maintaining persistence. Its client includes retry logic and recovery handling so that a failure in any individual step does not derail the larger operation, enabling continuous exploitation attempts and sharply reducing the time between vulnerability disclosure and exploitation. HexStrike AI has been open source on GitHub for the past month, where it has already garnered 1,800 stars and over 400 forks. Security experts stress the urgency of patching and hardening affected systems; Check Point recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.
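The retry and recovery behavior is described only at a high level in the reporting; a minimal sketch of that general pattern, with hypothetical stage names, might look like the following.

```python
import time

def with_retries(step, *, attempts=3, backoff=2.0):
    """Re-run a failing step with exponential backoff so one flaky stage
    (scan, exploit, payload delivery) does not abort the whole chain."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise                       # recovery exhausted; surface the failure
            time.sleep(backoff ** attempt)  # back off before the next try

# Hypothetical usage, one wrapper per stage of the chain:
# targets = with_retries(lambda: scan_for_vulnerable_instances())
```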

AI systems vulnerable to data-theft via hidden prompts in downscaled images

Researchers at Trail of Bits have demonstrated a new attack that exploits image downscaling in AI systems to steal user data. The attack embeds prompts in full-resolution images that are invisible to the user but emerge when the image is resampled to a lower resolution with specific algorithms; the AI model then interprets the revealed text as part of the user's instructions, potentially leading to data leakage or unauthorized actions executed without the user's knowledge. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The researchers have released an open-source tool, Anamorpher, for crafting images to test the vulnerability. To mitigate the risk, Trail of Bits recommends restricting the dimensions of image uploads, showing users a preview of the downscaled image the model will actually see, and requiring explicit user confirmation for sensitive tool calls.
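The underlying mechanism is aliasing: a downscaler keeps only a fraction of the source pixels, so content can be placed where it dominates the small image while staying inconspicuous at full resolution. The sketch below (not Anamorpher itself) shows the effect with Pillow's nearest-neighbor resampling: a one-pixel checkerboard that reads as flat gray at full size collapses to a single solid tone after downscaling.

```python
# Aliasing demo: the model-visible (downscaled) image differs radically from
# what a user sees at full resolution. Crafted payloads exploit the same
# effect to make text appear only after resampling.
from PIL import Image

big = Image.new("L", (512, 512))
# One-pixel checkerboard: looks like flat mid-gray when viewed at full size.
big.putdata([255 * ((x + y) % 2) for y in range(512) for x in range(512)])

# Nearest-neighbor downscaling keeps one source pixel per output pixel,
# and every sampled pixel has the same parity, so the pattern vanishes.
small = big.resize((64, 64), Image.NEAREST)
print(set(small.getdata()))   # a single value: the checkerboard is gone
```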

Black Hat NOC Enhances AI in Security Operations

The Black Hat USA 2025 Network Operations Center (NOC) team expanded its use of AI and machine learning (ML) to manage and secure the conference network. The team monitored and mitigated malicious activity, distinguished legitimate training exercises from genuine attacks, and identified vulnerable applications and misconfigurations in security tools. It leveraged AI for risk scoring and categorization so that legitimate training activities were not flagged as malicious, and alerted attendees to issues, such as misconfigured security tools and vulnerable applications, that could expose sensitive data. The team also observed trends such as increased self-hosting and insecure data transmission, which pose risks to organizations. Separately, high school students Sasha Zyuzin and Ruikai Peng presented a new vulnerability discovery framework at Black Hat USA 2025 that combines static analysis with AI. Their framework, "Tree of AST," uses Google DeepMind's Tree of Thoughts methodology to automate vulnerability hunting while maintaining human oversight. The presenters discussed the double-edged nature of AI in security, noting that while LLMs can improve code quality, they can also introduce security risks.
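Public details of the framework are limited to the talk's outline; the following is only an illustrative sketch of the general idea, LLM-scored beam search over AST paths, where llm_score is a hypothetical stand-in for the model call, not the presenters' implementation.

```python
# Illustrative sketch in the spirit of "Tree of AST" / Tree of Thoughts:
# expand paths through a program's AST and keep those an LLM rates as most
# likely to lead to a vulnerability. llm_score is a hypothetical stub.
import ast

def llm_score(path):
    """Hypothetical: ask an LLM how suspicious this root-to-node path looks."""
    return 0.0

def tree_search(source, beam_width=3, depth=4):
    root = ast.parse(source)
    frontier = [[root]]                    # each candidate is a path of AST nodes
    for _ in range(depth):
        expanded = [path + [child]
                    for path in frontier
                    for child in ast.iter_child_nodes(path[-1])]
        if not expanded:
            break
        # Beam search: keep only the paths the model rates most promising;
        # a human reviewer then triages the survivors (the oversight step).
        frontier = sorted(expanded, key=llm_score, reverse=True)[:beam_width]
    return frontier
```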