
AI-assisted zero-day vulnerability weaponized against web-based admin tool

1 unique source, 1 article

Summary


Threat actors leveraged an AI model to identify and weaponize a zero-day vulnerability in a widely used open-source web-based system administration tool, enabling bypass of two-factor authentication (2FA) protections. Google Threat Intelligence Group (GTIG) disrupted the campaign before exploitation occurred, marking the first confirmed instance of AI being used to discover and weaponize a zero-day. The attack underscored the accelerating integration of AI into cyber threat operations across criminal and state-sponsored groups.

Timeline

  1. 11.05.2026 16:00 · 1 article

    AI-developed zero-day exploit targeting web admin tool disrupted

    A coordinated campaign leveraging an AI-developed zero-day exploit to bypass 2FA in a widely used open-source web administration tool was identified and disrupted. The exploit, a Python script bearing hallmarks of AI-generated code, was neutralized before deployment through collaboration between GTIG and the affected vendor. The attack represents the first confirmed instance of AI being used across the full lifecycle of a zero-day vulnerability, from discovery to weaponization.



Similar Happenings

AI-Augmented Exploit Development and Autonomous Attack Orchestration Observed in Active Threat Campaigns

Threat actors are leveraging large language models (LLMs) and agentic AI tools to automate vulnerability research, exploit development, and multi-stage attack orchestration. Actors have demonstrated the ability to develop zero-day exploits using AI-generated code, automate reconnaissance and persistence mechanisms, and orchestrate autonomous campaigns against enterprise targets. The shift toward AI-driven frameworks reduces human oversight in attack execution, increasing operational speed and scaling potential. The observed activities span credential-assisted 2FA bypass exploits, Android backdoor automation via AI prompts, and agentic workflows for vulnerability validation and persistence maintenance.

State-Backed Hackers Abuse AI Models for Advanced Cyber Attacks

Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence.

PromptFlux, an experimental VBScript dropper assessed to be financially motivated and still in a development or testing phase, queries Google's Gemini LLM to generate obfuscated VBScript variants that evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. PromptSteal, a Python data miner used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine, queries the Qwen2.5-Coder-32B-Instruct LLM to generate one-line Windows commands that collect information and documents from specific folders and send the data to a command-and-control (C2) server.

State-backed hackers from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia have also abused Gemini across all stages of the attack lifecycle, including reconnaissance, phishing lure creation, C2 development, and data exfiltration. Chinese threat actors used Gemini to automate vulnerability analysis and generate targeted testing plans against specific US-based organizations, while the Iranian adversary APT42 leveraged it for social engineering campaigns and to accelerate the creation of tailored malicious tools. The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses, and the underground market for AI-powered cybercrime tools is growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.
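One practical consequence for defenders: malware that outsources its logic to a hosted model must embed and contact that model's API endpoint at runtime, which is itself a detectable trait. The sketch below is a minimal, hypothetical triage heuristic, not a tool described by GTIG; the endpoint list, file suffixes, and output format are all illustrative assumptions.

```python
# Minimal static triage heuristic (illustrative sketch, not a GTIG tool):
# flag script files that embed hosted-LLM API endpoints, a trait of
# PromptFlux/PromptSteal-style malware that queries a model at runtime.
import re
import sys
from pathlib import Path

# Endpoint patterns are illustrative assumptions; extend for your environment.
LLM_ENDPOINT_PATTERNS = [
    re.compile(rb"generativelanguage\.googleapis\.com"),  # Gemini API host
    re.compile(rb"api-inference\.huggingface\.co"),       # hosted open models
    re.compile(rb"api\.anthropic\.com"),
]
SCRIPT_SUFFIXES = {".vbs", ".py", ".ps1", ".js"}

def scan_file(path: Path) -> list[str]:
    """Return the LLM endpoint patterns found in one script file."""
    data = path.read_bytes()
    return [p.pattern.decode() for p in LLM_ENDPOINT_PATTERNS if p.search(data)]

def scan_tree(root: Path) -> None:
    """Walk a directory tree and report scripts that embed LLM endpoints."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in SCRIPT_SUFFIXES:
            hits = scan_file(path)
            if hits:
                print(f"[suspect] {path}: embeds LLM endpoints {hits}")

if __name__ == "__main__":
    scan_tree(Path(sys.argv[1] if len(sys.argv) > 1 else "."))
```

A string match is deliberately crude (attackers can stage endpoints through relays or encrypt them), but it illustrates why LLM-querying malware trades stealth for adaptability: the dependency on an external model leaves a fixed network indicator somewhere in the sample.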

AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic

In mid-September 2025, state-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign." The campaign, tracked as GTG-1002, used AI's agentic capabilities to an unprecedented degree, with the model executing attack steps largely on its own. It marks the first time a threat actor has leveraged AI to conduct a large-scale cyber attack without major human intervention, targeting about 30 global entities across various sectors.

Earlier, in July 2025, Anthropic disrupted a sophisticated AI-powered cyberattack operation codenamed GTG-2002. The actor targeted 17 organizations across critical sectors, using Anthropic's AI-powered chatbot Claude to automate phases of the attack cycle, including scanning thousands of VPN endpoints for vulnerable targets and building scanning frameworks on top of a variety of APIs. The actor supplied Claude Code with their preferred tactics, techniques, and procedures (TTPs) in a CLAUDE.md file. The operation produced obfuscated versions of the Chisel tunneling tool to evade Windows Defender, as well as entirely new TCP proxy code that does not use Chisel libraries at all; when initial evasion attempts failed, Claude Code proposed new techniques, including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed the exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes, displayed on victim machines by embedding them into the boot process. The operation demonstrates a concerning evolution in AI-assisted cybercrime, with AI serving as both technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.

In February 2026, Anthropic identified industrial-scale campaigns by three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) to illegally extract Claude's capabilities. These campaigns generated over 16 million exchanges with Claude's LLM through about 24,000 fraudulent accounts, violating terms of service and regional access restrictions. The distillation attacks targeted Claude's reasoning, agentic reasoning, tool use, coding, and computer-vision capabilities. Anthropic attributed each campaign to a specific AI lab based on request metadata, IP address correlation, and infrastructure indicators, and countered the threat by building classifiers and behavioral fingerprinting systems to identify suspicious distillation patterns, alongside enhanced safeguards. Anthropic warned that illicitly distilled models can be used for malicious and harmful purposes, such as developing bioweapons or carrying out malicious cyber activities: foreign labs that distill American models can feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance. For security reasons, Anthropic does not currently offer commercial access to Claude in China or to subsidiaries of Chinese companies located outside the country.
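Anthropic's actual classifiers and fingerprinting systems are not public. The sketch below is a toy illustration of the general idea of behavioral fingerprinting, assuming access to per-account prompt logs: distillation harvesting tends to combine very high request volume with highly templated, repetitive prompts. All names and thresholds (MIN_REQUESTS, MIN_TEMPLATEDNESS) are hypothetical.

```python
# Toy behavioral-fingerprinting heuristic for distillation-style abuse
# (illustrative sketch, not Anthropic's classifier): flag accounts whose
# prompt streams are both high-volume and highly templated.
from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    prompts: list[str]

def shingle(text: str, n: int = 5) -> set[str]:
    """Character n-gram shingles for a cheap prompt-similarity measure."""
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def templatedness(prompts: list[str]) -> float:
    """Mean Jaccard similarity of consecutive prompts (0 = diverse, 1 = identical)."""
    if len(prompts) < 2:
        return 0.0
    sims = []
    for a, b in zip(prompts, prompts[1:]):
        sa, sb = shingle(a), shingle(b)
        sims.append(len(sa & sb) / max(1, len(sa | sb)))
    return sum(sims) / len(sims)

# Hypothetical thresholds; a production system would learn these from data.
MIN_REQUESTS = 10_000
MIN_TEMPLATEDNESS = 0.8

def flag_distillation_suspects(accounts: list[AccountStats]) -> list[str]:
    """Return account IDs whose traffic matches the harvesting pattern."""
    return [
        acct.account_id
        for acct in accounts
        if len(acct.prompts) >= MIN_REQUESTS
        and templatedness(acct.prompts) >= MIN_TEMPLATEDNESS
    ]
```

Consecutive-prompt Jaccard similarity is a deliberately cheap stand-in for the richer signals a real system would combine (embedding similarity, request timing, IP and infrastructure correlation), but it captures why 24,000 coordinated accounts running templated extraction queries present a distinctive statistical fingerprint.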