CyberHappenings

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

Claude Mythos uncovers thousands of zero-days across major systems via Project Glasswing

2 unique sources, 2 articles

Summary

Anthropic’s frontier AI model, Claude Mythos Preview, has autonomously discovered and remediated thousands of high-severity zero-day vulnerabilities across major operating systems, web browsers, and software libraries through Project Glasswing, a consortium of Anthropic, AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. The initiative leverages Mythos Preview’s agentic coding and reasoning capabilities (developed without explicit cybersecurity training) to identify long-standing flaws, such as a 27-year-old OpenBSD denial-of-service vulnerability and a 16-year-old FFmpeg flaw. Anthropic has reported and patched these vulnerabilities while committing $100 million in usage credits and $4 million in donations to support open-source security efforts. The project aims to deploy Mythos-class models safely at scale, though concerns persist that threat actors could exploit the same capabilities maliciously.

Timeline

  1. 08.04.2026 12:16 · 2 articles

    Claude Mythos autonomously uncovers thousands of zero-days across major software platforms

    Claude Mythos Preview, operating under Project Glasswing, autonomously discovered thousands of high-severity zero-day vulnerabilities across major operating systems, web browsers, and software libraries. The model identified and demonstrated exploits for long-standing flaws, including a 27-year-old OpenBSD bug that allowed remote denial of service via a simple connection, a 16-year-old FFmpeg vulnerability that surfaced only after automated testing exercised the code five million times, and multiple Linux kernel flaws enabling privilege escalation from user access to full system control. Mythos Preview autonomously developed a multi-stage web browser exploit that chained four vulnerabilities to escape the renderer and OS sandboxes, solving a complex network attack simulation in under 10 hours. Project Glasswing, launched on April 7, is a consortium of Anthropic, AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Anthropic committed up to $100 million in usage credits and $4 million in donations to open-source security organizations to scale secure use of Mythos Preview. The company credits the model's agentic coding and reasoning skills, developed without explicit cybersecurity training, for these capabilities.

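A flaw that surfaces only after automated testing exercises the code millions of times is characteristic of fuzzing: randomized inputs are thrown at a parser until a rare edge case crashes it. Below is a minimal sketch of that loop; the target parser and its bug are invented for illustration, and real campaigns use coverage-guided fuzzers rather than blind random generation.

```python
import random

def fragile_parse(data: bytes) -> int:
    """Toy stand-in for a media parser with one rare edge case."""
    if data[0] == 0xFF:  # the "bug": an unhandled marker byte
        raise ValueError("crash: unhandled marker")
    return len(data)

def fuzz(target, iterations: int, seed: int = 0):
    """Throw random byte strings at `target`; report the first crash."""
    rng = random.Random(seed)
    for i in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            return i, data  # iteration count and the crashing input
    return None  # no crash found within the budget
```

The crash condition here fires on roughly 1 in 256 inputs, so the loop usually needs a few thousand iterations to hit it; a bug gated on a longer byte sequence or deeper program state, like the FFmpeg flaw described above, can take millions.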

Similar Happenings

Frontier AI dependency recommendations found to generate flawed upgrade and patch guidance

A study by Sonatype analyzing 258,000 AI-generated dependency upgrade recommendations across Maven Central, npm, PyPI, and NuGet from June to August 2025 revealed that frontier AI models—including GPT-5.2, Claude Sonnet 3.7/4.5, Claude Opus 4.6, and Gemini 2.5 Pro/3 Pro—frequently produce hallucinated or incorrect upgrade paths, security fixes, and version recommendations. Nearly 28% of recommendations from earlier models were hallucinations, while even improved frontier models introduced faulty advice, leaving critical and high-severity vulnerabilities unresolved in production environments. The issue stems from the models’ lack of real-time dependency, vulnerability, compatibility, and enterprise policy context, leading to wasted developer time, unresolved exposures, and increased technical debt. Notably, some recommendations introduced known vulnerabilities into AI tooling stacks themselves, exacerbating risk within the models’ own infrastructure.
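Findings like these suggest checking model output against registry ground truth before acting on it. A minimal sketch of such a gate follows; the function and category names are illustrative, not from the Sonatype study, and real tooling would parse versions per ecosystem rules (PEP 440, semver) and pull the published-version set from the registry itself.

```python
def validate_upgrade(current: str, recommended: str, published: set) -> str:
    """Classify an AI-suggested upgrade against the set of versions
    actually released for the package (fetched from the registry)."""
    if recommended not in published:
        return "hallucinated"  # the suggested version was never released

    def key(v):  # naive dotted-numeric key; real tools use PEP 440/semver
        return tuple(int(part) for part in v.split("."))

    if key(recommended) <= key(current):
        return "not-an-upgrade"  # downgrade or no-op suggested
    return "plausible"  # exists and moves forward; still needs a vuln check
```

A "plausible" result only means the version exists and is newer; whether it actually resolves the advisory in question still requires a vulnerability-database lookup.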

Emergence of AI-powered attack and defense techniques reshaping cyber threat landscape in 2026

At RSAC 2026, SANS Institute researchers unveiled five AI-driven attack techniques becoming mainstream in 2026, fundamentally altering the cyber threat landscape. Independent researchers demonstrated AI-generated zero-day exploits at minimal cost ($116 in AI token expenses), breaking historical barriers to zero-day development. Supply chain attacks continued to surge, with malicious packages like the Shai-Hulud worm exposing 14,000 credentials across 487 organizations and a China-affiliated group compromising Notepad++ update infrastructure for six months. Operational Technology (OT) environments face increasing accountability crises due to lack of visibility, where evidence evaporates post-compromise and critical infrastructure incidents result in catastrophic outcomes with unclear attribution. Irresponsible AI deployment in Digital Forensics & Incident Response (DFIR) is generating false confidence and undermining response outcomes. Meanwhile, defenders are adopting autonomous defense frameworks like Protocol SIFT to counter AI-driven attacks, achieving up to 47x faster response times in simulated incidents.

Claude Opus 4.6 Identifies 500+ High-Severity Flaws in Open-Source Libraries

Anthropic's Claude Opus 4.6, a large language model (LLM), discovered over 500 previously unknown high-severity security flaws in major open-source libraries such as Ghostscript, OpenSC, and CGIF. The model, launched on February 6, 2026, demonstrated improved capabilities in code review, debugging, and vulnerability detection. The flaws were identified without requiring task-specific tooling or specialized prompting. Anthropic validated each flaw to ensure they were not hallucinated and prioritized severe memory corruption vulnerabilities. The identified vulnerabilities have since been patched by the respective maintainers.

OpenAI's Aardvark agent for automated code vulnerability detection and patching

OpenAI has introduced Aardvark, an agentic security researcher powered by GPT-5, designed to automatically detect, assess, and patch security vulnerabilities in code repositories. The agent integrates into the software development pipeline to continuously monitor code changes and propose fixes, and identified at least 10 CVEs in open-source projects during its beta phase. It uses GPT-5's reasoning capabilities and a sandboxed environment to validate vulnerabilities before patching them; OpenAI positions it as a way to enhance security without hindering innovation. OpenAI has since rolled out Codex Security, an evolution of Aardvark available in research preview to ChatGPT Pro, Enterprise, Business, and Edu customers. Codex Security has scanned over 1.2 million commits, identifying 792 critical and 10,561 high-severity findings, and leverages advanced models and automated validation to minimize false positives and propose actionable fixes.
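The continuous-monitoring step can be pictured as a gate over newly added diff lines. The sketch below is deliberately simple and entirely illustrative: the regex heuristics are invented for this example, whereas an agent like the one described reasons over code with a model rather than fixed patterns.

```python
import re

# Illustrative patterns only, not any vendor's rule set.
RISKY_PATTERNS = {
    "possible command injection": re.compile(
        r"os\.system\(|subprocess\..*shell\s*=\s*True"),
    "hardcoded secret": re.compile(
        r"(?i)(api[_-]?key|password)\s*=\s*['\"]"),
    "unsafe deserialization": re.compile(
        r"pickle\.loads\(|yaml\.load\((?!.*SafeLoader)"),
}

def scan_diff(added_lines):
    """Flag newly added lines that match known-risky patterns.

    Returns (line_number, label, stripped_line) tuples, the raw
    material a reviewing agent would then assess and try to patch.
    """
    findings = []
    for lineno, line in enumerate(added_lines, 1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings
```

In a real pipeline this would run per commit, with each finding handed to a validation stage (a sandboxed reproduction attempt) before any fix is proposed, mirroring the detect/assess/patch split described above.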

AI-Powered Offensive Research System Generates Exploits in Minutes

An AI-powered offensive research system, named Auto Exploit, has developed exploits for 14 vulnerabilities in open-source software packages in under 15 minutes. The system uses large language models (LLMs) and CVE advisories to create proof-of-concept exploit code, significantly reducing the time required for exploit development. This highlights the impact of full automation on enterprise defenders, who must now assume that published vulnerabilities can be weaponized almost immediately. The system, developed by Israeli cybersecurity researchers, leverages Anthropic's Claude Sonnet 4 model to analyze advisories and code patches, generate vulnerable test applications and exploit code, and validate the results. The researchers emphasize that while the approach still requires some manual tweaking, it demonstrates the potential for LLMs to accelerate exploit development, posing new challenges for cybersecurity defenses.