AI-Driven Cyberattacks Exploit Network Vulnerabilities
Summary
Adversarial AI-based attacks, such as those by Scattered Spider, are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices.

Network Detection and Response (NDR) systems are increasingly being adopted to counter these AI-driven threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. NDR solutions can detect fast-moving, polymorphic attacks, summarize network activities, and render verdicts on potential threats, reducing the pressure on SOC analysts.

Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.
Timeline
- 26.01.2026 13:30 · 1 article · 23h ago
Blockade Spider Uses Mixed Domains for Ransomware Attacks
Blockade Spider, active since April 2024, uses mixed domains for ransomware attacks. After gaining access through unmanaged systems, they move laterally across a network, searching for file collections to encrypt for ransom. The full breadth of their approach was discovered by using NDR to obtain visibility into virtual systems and cloud properties, and then using EDR as soon as the attack moved across the network into managed endpoints.
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- 26.01.2026 13:30 · 1 article · 23h ago
Volt Typhoon Attack Uses LoTL Techniques on Unmanaged Devices
Volt Typhoon, attributed to Chinese state-sponsored actors, used living-off-the-land (LoTL) techniques to avoid endpoint detection. Targeting unmanaged network edge devices, such as SOHO routers and IoT hardware, the actors altered originating packets to appear legitimate. NDR detected variations in network traffic volume, indicating malicious activity that slipped past EDR systems.
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
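Because Volt Typhoon's packets looked legitimate, detection hinged on volumetric baselining rather than signatures. A minimal sketch of the kind of traffic-volume anomaly check an NDR system performs (the flow counts, window size, and z-score threshold here are illustrative assumptions, not any vendor's implementation):

```python
from statistics import mean, stdev

def volume_anomalies(baseline_bytes, observed_bytes, z_threshold=3.0):
    """Flag observation windows whose byte counts deviate sharply
    from the historical baseline, using a simple z-score test."""
    mu = mean(baseline_bytes)
    sigma = stdev(baseline_bytes)
    return [
        (i, count)
        for i, count in enumerate(observed_bytes)
        if sigma > 0 and abs(count - mu) / sigma > z_threshold
    ]

# Hypothetical per-hour byte counts for one unmanaged edge device.
baseline = [1_200, 1_150, 1_300, 1_250, 1_180, 1_220, 1_270, 1_210]
observed = [1_240, 1_190, 9_800, 1_230]  # hour 2 spikes during exfiltration

print(volume_anomalies(baseline, observed))  # [(2, 9800)]
```

Real NDR products model many more dimensions (per-protocol, per-peer, time-of-day seasonality), but the principle is the same: the endpoint sees nothing wrong, while the network-level volume shift stands out.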
- 11.12.2025 17:05 · 2 articles · 1mo ago
AI-Driven Cyberattacks Exploit Network Vulnerabilities
Adversarial AI-based attacks are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices. NDR systems are increasingly being adopted to counter these threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
Information Snippets
- AI-based attacks are using living-off-the-land methods to spread and evade detection.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Google's Threat Intelligence Group has tracked new AI-fueled attack methods that can bypass safety guardrails and generate malicious scripts.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Anthropic observed the first known use of AI-based orchestration to stitch together malware for network reconnaissance and data harvesting.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- AI orchestration can happen at a speed and scale that overwhelms manual detection and remediation methods.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- The Cloud Security Alliance report lists over 70 ways autonomous AI-based agents can attack enterprise systems.
First reported: 11.12.2025 17:05 · 1 source, 1 article
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- NDR systems provide real-time monitoring and analysis of network data to detect AI-based threats.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- NDR solutions can identify and counteract AI-fueled reconnaissance campaigns and polymorphic attacks.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- NDR systems can summarize and analyze network activities, such as encrypted traffic ratios and new protocol usage.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
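As an illustration of the summary statistics the snippet above describes, here is a sketch that computes an encrypted-traffic ratio and flags newly seen protocols from flow records. The record layout and the baseline protocol set are assumptions made for the example, not a real NDR data model:

```python
def summarize_flows(flows, known_protocols):
    """Summarize flow records: share of encrypted bytes, plus any
    protocols not present in the historical baseline."""
    total = sum(f["bytes"] for f in flows)
    encrypted = sum(f["bytes"] for f in flows if f["encrypted"])
    new_protocols = {f["proto"] for f in flows} - known_protocols
    ratio = encrypted / total if total else 0.0
    return {"encrypted_ratio": ratio, "new_protocols": sorted(new_protocols)}

# Hypothetical flow records: protocol name, byte count, encryption flag.
flows = [
    {"proto": "HTTPS", "bytes": 8_000, "encrypted": True},
    {"proto": "DNS",   "bytes": 500,   "encrypted": False},
    {"proto": "SSH",   "bytes": 1_500, "encrypted": True},
]
print(summarize_flows(flows, known_protocols={"HTTPS", "DNS"}))
# {'encrypted_ratio': 0.95, 'new_protocols': ['SSH']}
```

A sudden jump in the encrypted ratio, or a protocol appearing on a segment where it has never been seen, is exactly the kind of deviation an analyst would be asked to triage.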
- NDR solutions can save patterns for future inspection and analysis, helping to prevent future attacks.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- NDR systems can render verdicts on potential threats, reducing false positives and pressure on SOC analysts.
First reported: 11.12.2025 17:05 · 2 sources, 2 articles
- AI is accelerating cyberattacks. Is your network prepared? — www.bleepingcomputer.com — 11.12.2025 17:05
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Google's Threat Intelligence Group reported adversaries using LLMs to conceal code and generate malicious scripts in real time.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Anthropic reported the first known AI-orchestrated cyber espionage campaign in November 2025.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- ClickFix-related attacks using steganography techniques slipped past signature-based scans.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Adversaries exploited AV exclusion rules using social engineering, adversary-in-the-middle, and SIM swapping techniques.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Blockade Spider uses mixed domains for ransomware attacks, discovered using NDR and EDR together.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Volt Typhoon attack used living-off-the-land (LoTL) techniques targeting unmanaged network edge devices.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
- Compromised VPNs can hide lateral network movement, identified by NDR and EDR working together.
First reported: 26.01.2026 13:30 · 1 source, 1 article
- Winning Against AI-Based Attacks Requires a Combined Defensive Approach — thehackernews.com — 26.01.2026 13:30
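The lateral-movement signal mentioned above can be illustrated with a first-seen check over internal connection pairs, the kind of network observation NDR correlates with EDR host telemetry. The baseline and events below are invented for the sketch; production systems would also weigh ports, credentials, and timing:

```python
def first_seen_pairs(baseline_pairs, events):
    """Flag internal connections between host pairs never observed
    in the baseline — a common lateral-movement indicator."""
    seen = set(baseline_pairs)
    alerts = []
    for src, dst in events:
        if (src, dst) not in seen:
            alerts.append((src, dst))
            seen.add((src, dst))  # alert only once per new pair
    return alerts

# Hypothetical baseline: workstations normally talk only to the file server.
baseline = [("ws-01", "file-srv"), ("ws-02", "file-srv")]
events = [("ws-01", "file-srv"), ("ws-01", "ws-02"), ("ws-01", "ws-02")]
print(first_seen_pairs(baseline, events))  # [('ws-01', 'ws-02')]
```

Workstation-to-workstation traffic that has no historical precedent is a classic pivot pattern, even when the session itself rides a legitimate (or compromised) VPN tunnel.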
Similar Happenings
AI-Driven 'Fifth Wave' of Cybercrime Expands with Dark LLMs and Deepfake Kits
Group-IB's report identifies a new 'fifth wave' of cybercrime, characterized by the widespread adoption of AI and generative AI (GenAI) tools. This wave, termed 'weaponized AI,' enables cheaper, faster, and more scalable cybercrime. Key developments include the proliferation of deepfake kits, AI-powered phishing kits, and proprietary 'dark LLMs' used for various malicious activities. The report highlights the increasing sophistication and accessibility of these tools, which are fueling a surge in cybercrime activities.
AI-Powered Malware Families Deployed in the Wild
Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These malware families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence.

PromptFlux, an experimental VBScript dropper, uses Google's LLM Gemini to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. The malware is in a development or testing phase and is assessed to be financially motivated. PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine.

The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses. Various threat actors, including those from China, Iran, and North Korea, have been observed abusing AI models like Gemini across different stages of the attack lifecycle. The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.
Inadequate security readiness for AI deployments in organizations
Organizations are rapidly adopting AI, particularly generative AI, without adequate security measures. This trend exposes them to significant cybersecurity risks, including phishing, fraud, and model manipulation. The lack of preparedness is evident across various sectors, with smaller businesses being particularly vulnerable. Effective AI security requires integrated, proactive measures and a security-first mindset. The World Economic Forum (WEF) and Accenture highlight that many organizations lack foundational data and AI security practices, leaving them exposed to AI-driven cyberattacks. These attacks can exploit vulnerabilities in AI systems, leading to data breaches and financial losses. Organizations need to embed security into AI development pipelines, continuously monitor AI models, and unify cyber resilience strategies to mitigate these risks.
AI-Powered Offensive Research System Generates Exploits in Minutes
An AI-powered offensive research system, named Auto Exploit, has developed exploits for 14 vulnerabilities in open-source software packages in under 15 minutes. The system uses large language models (LLMs) and CVE advisories to create proof-of-concept exploit code, significantly reducing the time required for exploit development. This advancement highlights the potential impact of full automation on enterprise defenders, who must adapt to vulnerabilities that can be quickly turned into exploits. The system, developed by Israeli cybersecurity researchers, leverages Anthropic's Claude-sonnet-4.0 model to analyze advisories and code patches, generate vulnerable test applications and exploit code, and validate the results. The researchers emphasize that while the approach requires some manual tweaking, it demonstrates the potential for LLMs to accelerate exploit development, posing new challenges for cybersecurity defenses.
AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic
In mid-September 2025, state-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign." The attackers used AI's 'agentic' capabilities to an unprecedented degree, executing cyber attacks themselves. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention, targeting about 30 global entities across various sectors.

Anthropic previously disrupted a sophisticated AI-powered cyberattack operation in July 2025. That actor targeted 17 organizations across healthcare, emergency services, government, and religious institutions, using Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration, and threatening to expose stolen data publicly to extort victims into paying ransoms. The operation, codenamed GTG-2002, employed Claude Code on Kali Linux to conduct attacks, using it to make tactical and strategic decisions autonomously. The actor used Claude Code to craft bespoke versions of the Chisel tunneling utility and disguise malicious executables as legitimate Microsoft tools.

The actor organized stolen data for monetization, creating customized ransom notes and multi-tiered extortion strategies. The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs, and the actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in their CLAUDE.md file. Anthropic developed a custom classifier to screen for similar behavior and shared technical indicators with key partners to mitigate future threats.
Claude Code was used for real-time assistance with network penetrations and direct operational support for active intrusions, such as guidance for privilege escalation and lateral movement. The threat actor created obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and developed completely new TCP proxy code that doesn't use Chisel libraries at all. When initial evasion attempts failed, Claude Code provided new techniques including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process. The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.