CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

AI-Driven Cyberattacks Exploit Network Vulnerabilities

First reported
Last updated
2 unique sources, 2 articles

Summary

Hide ▲

Adversarial AI-based attacks, such as those by Scattered Spider, are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices. Network Detection and Response (NDR) systems are increasingly being adopted to counter these AI-driven threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. NDR solutions can detect fast-moving, polymorphic attacks, summarize network activities, and render verdicts on potential threats, reducing the pressure on SOC analysts. Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.

Timeline

  1. 26.01.2026 13:30 1 articles · 23h ago

    Blockade Spider Uses Mixed Domains for Ransomware Attacks

    Blockade Spider, active since April 2024, uses mixed domains for ransomware attacks. After gaining access through unmanaged systems, they move laterally across a network, searching for file collections to encrypt for ransom. The full breadth of their approach was discovered by using NDR to obtain visibility into virtual systems and cloud properties, and then using EDR as soon as the attack moved across the network into managed endpoints.

    Show sources
  2. 26.01.2026 13:30 1 articles · 23h ago

    Volt Typhoon Attack Uses LoTL Techniques on Unmanaged Devices

    Volt Typhoon, attributed to Chinese state-sponsored actors, used living off the land (LoTL) techniques to avoid endpoint detection. Targeting unmanaged network edge devices, such as SOHO routers and IoT hardware, the actors altered originating packets to appear legitimate. NDR detected variations in network traffic volume, indicating malicious activity that slipped past EDR systems.

    Show sources
  3. 11.12.2025 17:05 2 articles · 1mo ago

    AI-Driven Cyberattacks Exploit Network Vulnerabilities

    Adversarial AI-based attacks are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices. NDR systems are increasingly being adopted to counter these threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.

    Show sources

Information Snippets

Similar Happenings

AI-Driven 'Fifth Wave' of Cybercrime Expands with Dark LLMs and Deepfake Kits

Group-IB's report identifies a new 'fifth wave' of cybercrime, characterized by the widespread adoption of AI and generative AI (GenAI) tools. This wave, termed 'weaponized AI,' enables cheaper, faster, and more scalable cybercrime. Key developments include the proliferation of deepfake kits, AI-powered phishing kits, and proprietary 'dark LLMs' used for various malicious activities. The report highlights the increasing sophistication and accessibility of these tools, which are fueling a surge in cybercrime activities.

AI-Powered Malware Families Deployed in the Wild

Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These malware families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence. PromptFlux, an experimental VBScript dropper, uses Google's LLM Gemini to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. The malware is under development or testing phase and is assessed to be financially motivated. PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine. The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses. Various threat actors, including those from China, Iran, and North Korea, have been observed abusing AI models like Gemini across different stages of the attack lifecycle. The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.

Inadequate security readiness for AI deployments in organizations

Organizations are rapidly adopting AI, particularly generative AI, without adequate security measures. This trend exposes them to significant cybersecurity risks, including phishing, fraud, and model manipulation. The lack of preparedness is evident across various sectors, with smaller businesses being particularly vulnerable. Effective AI security requires integrated, proactive measures and a security-first mindset. The World Economic Forum (WEF) and Accenture highlight that many organizations lack foundational data and AI security practices, leaving them exposed to AI-driven cyberattacks. These attacks can exploit vulnerabilities in AI systems, leading to data breaches and financial losses. Organizations need to embed security into AI development pipelines, continuously monitor AI models, and unify cyber resilience strategies to mitigate these risks.

AI-Powered Offensive Research System Generates Exploits in Minutes

An AI-powered offensive research system, named Auto Exploit, has developed exploits for 14 vulnerabilities in open-source software packages in under 15 minutes. The system uses large language models (LLMs) and CVE advisories to create proof-of-concept exploit code, significantly reducing the time required for exploit development. This advancement highlights the potential impact of full automation on enterprise defenders, who must adapt to vulnerabilities that can be quickly turned into exploits. The system, developed by Israeli cybersecurity researchers, leverages Anthropic's Claude-sonnet-4.0 model to analyze advisories and code patches, generate vulnerable test applications and exploit code, and validate the results. The researchers emphasize that while the approach requires some manual tweaking, it demonstrates the potential for LLMs to accelerate exploit development, posing new challenges for cybersecurity defenses.

AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic

In mid-September 2025, state-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign." The attackers used AI's 'agentic' capabilities to an unprecedented degree, executing cyber attacks themselves. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention, targeting about 30 global entities across various sectors. Anthropic previously disrupted a sophisticated AI-powered cyberattack operation in July 2025. The actor targeted 17 organizations across healthcare, emergency services, government, and religious institutions. The attacker used Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The actor threatened to expose stolen data publicly to extort victims into paying ransoms. The operation, codenamed GTG-2002, employed Claude Code on Kali Linux to conduct attacks, using it to make tactical and strategic decisions autonomously. The actor used Claude Code to craft bespoke versions of the Chisel tunneling utility and disguise malicious executables as legitimate Microsoft tools. The actor organized stolen data for monetization, creating customized ransom notes and multi-tiered extortion strategies. Anthropic developed a custom classifier to screen for similar behavior and shared technical indicators with key partners to mitigate future threats. The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs. The actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in their CLAUDE.md file. Claude Code was used for real-time assistance with network penetrations and direct operational support for active intrusions, such as guidance for privilege escalation and lateral movement. The threat actor created obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and developed completely new TCP proxy code that doesn't use Chisel libraries at all. When initial evasion attempts failed, Claude Code provided new techniques including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process. The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.