AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic
Summary
In mid-September 2025, Chinese state-sponsored threat actors used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign." The attackers used AI's 'agentic' capabilities to an unprecedented degree, relying on AI not just as an advisor but to execute the cyber attacks themselves. The campaign, tracked as GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention, targeting about 30 global entities across various sectors.
Earlier, in July 2025, Anthropic disrupted a sophisticated AI-powered cyberattack operation codenamed GTG-2002. The actor targeted 17 organizations across critical sectors, using Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle. The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs. The actor provided Claude Code with their preferred operational TTPs (tactics, techniques, and procedures) in a CLAUDE.md file. The actor also created obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and developed entirely new TCP proxy code that does not use Chisel libraries. When initial evasion attempts failed, Claude Code supplied new techniques, including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process. The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.
In February 2026, Anthropic identified industrial-scale campaigns by three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) to illegally extract Claude's capabilities. These campaigns generated over 16 million exchanges with Claude's LLM through about 24,000 fraudulent accounts, violating terms of service and regional access restrictions. The distillation attacks targeted Claude's reasoning, agentic reasoning, tool use, coding, and computer-vision capabilities. Anthropic attributed each campaign to a specific AI lab based on request metadata, IP address correlation, and infrastructure indicators, and countered by building classifiers and behavioral fingerprinting systems to identify suspicious distillation patterns, alongside enhanced safeguards. Anthropic warned that illicitly distilled models can be used for malicious and harmful purposes, such as developing bioweapons or carrying out malicious cyber activities. Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance. For security reasons, Anthropic does not currently offer commercial access to Claude in China, or to subsidiaries of Chinese companies located outside the country.
Timeline
-
24.02.2026 08:04 · 2 articles
Anthropic Identifies Industrial-Scale Campaigns by Chinese AI Firms
In February 2026, Anthropic identified industrial-scale campaigns by three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) to illegally extract Claude's capabilities. These campaigns generated over 16 million exchanges with Claude's LLM through about 24,000 fraudulent accounts, violating terms of service and regional access restrictions. The distillation attacks targeted Claude's reasoning capabilities, agentic reasoning, tool use, coding capabilities, and computer vision. Anthropic attributed each campaign to a specific AI lab based on request metadata, IP address correlation, and infrastructure indicators. To counter the threat, Anthropic built classifiers and behavioral fingerprinting systems to identify suspicious distillation attack patterns and implemented enhanced safeguards. Anthropic warned that illicitly distilled models can be used for malicious and harmful purposes, such as developing bioweapons or carrying out malicious cyber activities. Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance. Anthropic does not currently offer commercial access to Claude in China or to subsidiaries of Chinese companies located outside of the country for security reasons.
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
-
14.11.2025 11:53 · 2 articles
Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign
Follow-up reporting assessed that the attackers are likely Chinese state-sponsored hackers who ran the campaign for cyber espionage purposes. It also detailed the six-phase attack chain: campaign initialization and target selection, reconnaissance and attack surface mapping, vulnerability discovery and validation, credential harvesting and lateral movement, data collection and intelligence extraction, and documentation and handoff. Victims' systems were infiltrated with minimal human intervention: Anthropic assessed that the AI assistant, Claude Code, performed up to 80-90% of the tasks, with only four to six critical decision points per hacking campaign made by the hackers themselves.
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
-
27.08.2025 18:10 · 4 articles
AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic
In July 2025, Anthropic disrupted a sophisticated AI-powered cyberattack operation codenamed GTG-2002. The actor targeted 17 organizations across critical sectors, using Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle. The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs. The actor provided Claude Code with their preferred operational TTPs (tactics, techniques, and procedures) in a CLAUDE.md file. The actor also created obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and developed entirely new TCP proxy code that does not use Chisel libraries. When initial evasion attempts failed, Claude Code supplied new techniques, including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process. The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
Information Snippets
-
The actor targeted 17 organizations across critical sectors, including healthcare, emergency services, government, and religious institutions.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The attacker used Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The actor employed Claude Code on Kali Linux as a comprehensive attack platform, embedding operational instructions in a CLAUDE.md file.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The operation involved scanning thousands of VPN endpoints to flag susceptible systems, obtaining initial access, and extracting credentials.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The attacker used Claude Code to craft bespoke versions of the Chisel tunneling utility and disguise malicious executables as legitimate Microsoft tools.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The actor organized stolen data for monetization, creating customized ransom notes and multi-tiered extortion strategies.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Anthropic developed a custom classifier to screen for similar behavior and shared technical indicators with key partners.
First reported: 27.08.2025 18:10 · 2 sources, 3 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
-
The actor threatened to expose stolen data publicly to extort victims into paying ransoms ranging from $75,000 to $500,000 in Bitcoin.
First reported: 27.08.2025 18:10 · 3 sources, 3 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Claude Code was used to make tactical and strategic decisions autonomously, including deciding which data to exfiltrate and crafting targeted extortion demands.
First reported: 27.08.2025 18:10 · 3 sources, 4 articles
- Anthropic Disrupts AI-Powered Cyberattacks Automating Theft and Extortion Across Critical Sectors — thehackernews.com — 27.08.2025 18:10
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Anthropic revealed that a cybercriminal abused its agentic artificial intelligence coding tool to automate a large-scale data theft and extortion campaign, marking a new evolution in how threat actors are weaponizing AI.
First reported: 28.08.2025 00:15 · 3 sources, 3 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs.
First reported: 28.08.2025 00:15 · 3 sources, 3 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The actor provided Claude Code with their preferred operational TTPs (tactics, techniques, and procedures) in a CLAUDE.md file, which Claude Code reads as persistent guidance for how to respond to the user's prompts.
First reported: 28.08.2025 00:15 · 3 sources, 3 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
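For context, CLAUDE.md is a standard Claude Code convention: a markdown file in the project directory that Claude Code loads at session start and treats as persistent instructions. A benign illustration of the mechanism (contents hypothetical, not the actor's file):

```markdown
# CLAUDE.md

## Working style
- Always run the test suite before proposing a commit.
- Prefer small, reviewable diffs; never rewrite whole files.

## Environment
- Python 3.12; dependencies pinned in requirements.txt.
```

The GTG-2002 actor abused this same mechanism, filling the file with preferred attack TTPs so that every Claude Code session started pre-configured for offensive tasks.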
-
Claude Code was used for real-time assistance with network penetrations and direct operational support for active intrusions, such as guidance for privilege escalation and lateral movement.
First reported: 28.08.2025 00:15 · 3 sources, 3 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Claude Code was used for automated credential harvesting and data exfiltration as well as the creation of malware and anti-detection tools.
First reported: 28.08.2025 00:15 · 3 sources, 3 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The threat actor created obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and developed entirely new TCP proxy code that does not use Chisel libraries.
First reported: 28.08.2025 00:15 · 2 sources, 2 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
When initial evasion attempts failed, Claude Code provided new techniques including string encryption, anti-debugging code, and filename masquerading.
First reported: 28.08.2025 00:15 · 2 sources, 2 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information.
First reported: 28.08.2025 00:15 · 2 sources, 2 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process.
First reported: 28.08.2025 00:15 · 2 sources, 2 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.
First reported: 28.08.2025 00:15 · 3 sources, 3 articles
- Anthropic AI Used to Automate Data Extortion Campaign — www.darkreading.com — 28.08.2025 00:15
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
State-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign" in mid-September 2025.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The attackers used AI's 'agentic' capabilities to an unprecedented degree, relying on AI not just as an advisor but to execute the cyber attacks themselves.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The threat actor is assessed to have manipulated Claude Code, Anthropic's AI coding tool, to attempt to break into about 30 global targets spanning large tech companies, financial institutions, chemical manufacturing companies, and government agencies.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The campaign, tracked as GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention, striking high-value targets for intelligence collection and indicating continued evolution in adversarial use of the technology.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The threat actor turned Claude into an "autonomous cyber attack agent" to support various stages of the attack lifecycle, including reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration.
First reported: 14.11.2025 11:53 · 2 sources, 2 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The system is part of an attack framework that accepts a target of interest from a human operator as input and then leverages the Model Context Protocol (MCP) to conduct reconnaissance and attack surface mapping.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
In one case targeting an unnamed technology company, the threat actor is said to have instructed Claude to independently query databases and systems and parse results to flag proprietary information and group findings by intelligence value.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Anthropic said its AI tool generated detailed attack documentation at all phases, likely allowing the threat actors to hand off persistent access to additional teams for long-term operations after the initial wave.
First reported: 14.11.2025 11:53 · 2 sources, 2 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
There is no evidence that the operational infrastructure enabled custom malware development. Rather, it has been found to rely extensively on publicly available network scanners, database exploitation frameworks, password crackers, and binary analysis suites.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The investigation also uncovered a crucial limitation of AI tools: their tendency to hallucinate and fabricate data during autonomous operations, such as inventing fake credentials or presenting publicly available information as critical discoveries, which posed major roadblocks to the scheme's overall effectiveness.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
-
The disclosure comes nearly four months after Anthropic disrupted another sophisticated operation that weaponized Claude to conduct large-scale theft and extortion of personal data in July 2025.
First reported: 14.11.2025 11:53 · 3 sources, 3 articles
- Chinese Hackers Use Anthropic's AI to Launch Automated Cyber Espionage Campaign — thehackernews.com — 14.11.2025 11:53
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The attackers are likely Chinese state-sponsored hackers who conducted the campaigns for cyber espionage purposes.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The targeted organizations included large tech companies, financial institutions, chemical manufacturing companies, and government agencies.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The victims of the cyber-attacks saw their systems infiltrated with minimal human intervention.
First reported: 14.11.2025 14:15 · 2 sources, 3 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
-
Anthropic assessed that the AI assistant, Claude Code, performed up to 80-90% of the tasks, with only four to six critical decision points per hacking campaign made by the hackers themselves.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The attackers used Claude Code’s agentic capabilities to an unprecedented degree, exploiting the ability of GenAI-powered tools to follow complex instructions, understand context, and make automated decisions.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
The attack chain included six phases: campaign initialization and target selection, reconnaissance and attack surface mapping, vulnerability discovery and validation, credential harvesting and lateral movement, data collection and intelligence extraction, and documentation and handoff.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Within ten days, Anthropic banned the malicious accounts, notified affected entities, and provided the relevant authorities with actionable intelligence.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Anthropic expanded its detection capabilities and developed better classifiers to flag malicious activity.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Critics noted, however, that the report lacked actionable information such as adversarial prompts, indicators of compromise (IOCs), and clear signals for detecting similar activity.
First reported: 14.11.2025 14:15 · 2 sources, 2 articles
- Chinese Hackers Automate Cyber-Attacks With AI-Powered Claude Code — www.infosecurity-magazine.com — 14.11.2025 14:15
- Anthropic claims of Claude AI-automated cyberattacks met with doubt — www.bleepingcomputer.com — 14.11.2025 20:31
-
Anthropic identified industrial-scale campaigns by three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) to illegally extract Claude's capabilities.
First reported: 24.02.2026 08:04 · 2 sources, 2 articles
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
-
The campaigns generated over 16 million exchanges with Claude's LLM through about 24,000 fraudulent accounts, violating terms of service and regional access restrictions.
First reported: 24.02.2026 08:04 (2 sources, 2 articles)
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
The distillation attacks targeted Claude's reasoning, agentic reasoning, tool use, coding, and computer-vision capabilities.
First reported: 24.02.2026 08:04 (2 sources, 2 articles)
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
Anthropic attributed each campaign to a specific AI lab based on request metadata, IP address correlation, and infrastructure indicators.
First reported: 24.02.2026 08:04 (2 sources, 2 articles)
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
Anthropic built classifiers and behavioral fingerprinting systems to identify suspicious distillation attack patterns and implemented enhanced safeguards.
First reported: 24.02.2026 08:04 (2 sources, 2 articles)
- Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model — thehackernews.com — 24.02.2026 08:04
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
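Anthropic has not published its detection logic, so purely as an illustration: a behavioral-fingerprinting heuristic for distillation-style abuse might flag accounts whose request volume is very high but whose prompts collapse into a handful of structural templates. Every function name and threshold below is a hypothetical sketch, not Anthropic's implementation.

```python
from collections import Counter

def prompt_shape(prompt: str) -> tuple:
    """Reduce a prompt to a coarse structural signature:
    a word-count bucket plus the first three words (the template head)."""
    words = prompt.split()
    return (len(words) // 50, tuple(words[:3]))

def looks_like_distillation(prompts: list[str],
                            min_volume: int = 1000,
                            max_shape_ratio: float = 0.05) -> bool:
    """Hypothetical heuristic: a very large number of requests whose
    structural signatures reduce to very few templates suggests
    automated, templated capability extraction rather than organic use."""
    if len(prompts) < min_volume:
        return False
    shapes = Counter(prompt_shape(p) for p in prompts)
    # Few distinct shapes relative to total volume => highly templated traffic.
    return len(shapes) / len(prompts) <= max_shape_ratio
```

Real classifiers would combine many more signals (timing, IP correlation, account linkage), but the core idea — repetitive structure at industrial volume — is the same.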
Anthropic warned that illicitly distilled models can be used for malicious and harmful purposes that the original model's guardrails were built to prevent, such as developing bioweapons or carrying out malicious cyber activities.
First reported: 24.02.2026 13:30 (1 source, 1 article)
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.
First reported: 24.02.2026 13:30 (1 source, 1 article)
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
Anthropic does not currently offer commercial access to Claude in China or to subsidiaries of Chinese companies located outside of the country for security reasons.
First reported: 24.02.2026 13:30 (1 source, 1 article)
- Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns — www.infosecurity-magazine.com — 24.02.2026 13:30
Similar Happenings
OpenClaw AI Agent Security Concerns in Business Environments
OpenClaw, an open-source AI agent formerly known as MoltBot and ClawdBot, has rapidly gained popularity on GitHub, raising significant security concerns due to its extensive access to user systems and data. The AI agent can execute commands, manage files, and interact with various platforms, posing risks such as prompt injection and unauthorized access. Despite its growth, security experts warn about the dangers of integrating such AI agents into corporate environments without proper safeguards. The project has seen a 14-fold increase in adoption within a week, with over 113,000 stars on GitHub. However, its rapid development and extensive access capabilities have led to concerns about potential data breaches and supply chain risks. Experts emphasize the need for better security practices to mitigate these risks.
Ex-Google Engineer Convicted for Stealing AI Trade Secrets for China
Linwei Ding, a former Google engineer, has been convicted of stealing over 2,000 confidential documents containing AI-related trade secrets to benefit China. The theft occurred between May 2022 and April 2023, involving sensitive information about Google's supercomputing infrastructure, AI models, and custom hardware. Ding was found guilty on seven counts of economic espionage and seven counts of theft of trade secrets. Additionally, three former Google engineers and one of their husbands have been indicted in the U.S. for allegedly committing trade secret theft from Google and other tech firms and transferring the information to unauthorized locations, including Iran. The stolen data included details about Google's Tensor Processing Unit chips, Cluster Management System software, and other proprietary technologies. Ding used deceitful methods to cover up the theft, including transferring data to his personal Google Cloud account and using an accomplice to fake his presence at work. He also applied to a Shanghai-based talent program sponsored by Beijing, aiming to enhance China's AI capabilities. Ding was originally indicted in March 2024 after lying to and failing to cooperate with Google's internal investigation. He was secretly affiliated with two China-based technology companies and negotiated a role as CTO at one of them. Ding founded his own AI company in China (Shanghai Zhisuan Technology Co.) and served as its CEO, intending to benefit entities controlled by the government of China. Ding faces a maximum sentence of 10 years for each theft count and 15 years for each espionage count.
Bizarre Bazaar Campaign Exploits Exposed LLM Endpoints
A cybercrime operation named 'Bizarre Bazaar' is actively targeting exposed or poorly authenticated LLM (Large Language Model) service endpoints. Over 35,000 attack sessions were recorded in 40 days, involving unauthorized access to steal computing resources, resell API access, exfiltrate data, and pivot into internal systems. The campaign highlights the emerging threat of 'LLMjacking' attacks, where attackers exploit misconfigurations in LLM infrastructure to monetize access through cryptocurrency mining and darknet markets. The SilverInc service, marketed on Telegram and Discord, resells access to more than 50 AI models in exchange for cryptocurrency or PayPal payments. A recent investigation by SentinelOne SentinelLABS and Censys revealed 175,000 unique Ollama hosts across 130 countries, many of which are configured with tool-calling capabilities, increasing the risk of LLMjacking attacks.
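Operators running self-hosted LLM servers can audit their own exposure to this kind of abuse. A minimal self-check sketch: Ollama's `/api/tags` route lists installed models, so an unauthenticated 200 response containing a model list means the instance is reachable without credentials. The function names and the classification labels below are illustrative; probe only hosts you own.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def classify_llm_response(status: int, body: str) -> str:
    """Classify a response from a suspected LLM endpoint:
    'exposed' if it returns a model list without auth,
    'protected' on 401/403, 'unknown' otherwise."""
    if status in (401, 403):
        return "protected"
    if status == 200:
        try:
            data = json.loads(body)
        except ValueError:
            return "unknown"
        if isinstance(data, dict) and "models" in data:
            return "exposed"
    return "unknown"

def check_own_ollama(host: str = "127.0.0.1", port: int = 11434) -> str:
    """Self-audit only: probe your own Ollama instance's /api/tags route."""
    try:
        with urlopen(f"http://{host}:{port}/api/tags", timeout=3) as resp:
            return classify_llm_response(resp.status, resp.read().decode())
    except URLError:
        return "unreachable"
```

An "exposed" result on a host reachable from the internet is exactly the misconfiguration the Bizarre Bazaar campaign monetizes.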
VoidLink Malware Framework Targets Cloud and Container Environments
VoidLink is a Linux-based command-and-control (C2) framework capable of long-term intrusion across cloud and enterprise environments. The malware generates implant binaries designed for credential theft, data exfiltration, and stealthy persistence on compromised systems. VoidLink combines multi-cloud targeting with container and kernel awareness in a single Linux implant, fingerprinting environments across major cloud providers and adjusting its behavior based on what it finds. The implant harvests credentials from environment variables, configuration files, and metadata APIs, and profiles security controls, kernel versions, and container runtimes before activating additional modules. VoidLink employs a modular plugin-based architecture that loads functionality as needed, including credential harvesting, environment fingerprinting, container escape, Kubernetes privilege escalation, and kernel-level stealth. The malware uses AES-256-GCM over HTTPS for encrypted C2 traffic, designed to resemble normal web activity. VoidLink stands out for its apparent development using a large language model (LLM) coding agent with limited human review, as indicated by unusual development artifacts such as structured "Phase X:" labels, verbose debug logs, and documentation left inside the production binary. The research concludes that VoidLink is not a proof-of-concept but an operational implant with live infrastructure, highlighting how AI-assisted development is lowering the barrier to producing functional, modular, and hard-to-detect malware. A previously unknown threat actor tracked as UAT-9921 has been observed leveraging VoidLink in campaigns targeting the technology and financial services sectors. UAT-9921 has been active since 2019, although they have not necessarily used VoidLink over the duration of their activity. 
The threat actor uses compromised hosts to install VoidLink command-and-control (C2) infrastructure, which is then used to launch scanning activity both internal and external to the network. VoidLink is deployed as a post-compromise tool, allowing the adversary to sidestep detection. The threat actor has been observed deploying a SOCKS proxy on compromised servers to launch scans for internal reconnaissance and lateral movement using open-source tools like Fscan. VoidLink uses three different programming languages: Zig for the implant, C for the plugins, and Go for the backend. The framework supports on-demand compilation of plugins, providing support for the different Linux distributions that might be targeted. The plugins allow for information gathering, lateral movement, and anti-forensics. VoidLink comes equipped with a wide range of stealth mechanisms to hinder analysis, prevent its removal from infected hosts, and even detect endpoint detection and response (EDR) solutions and devise an evasion strategy on the fly. VoidLink has an auditability feature and a role-based access control (RBAC) mechanism with three role levels: SuperAdmin, Operator, and Viewer. There are also signs of a main implant compiled for Windows that can load plugins via a technique called DLL side-loading.
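The development artifacts reported in VoidLink — structured "Phase X:" labels and verbose debug strings left in the production binary — are the kind of indicator a defender can grep for. A minimal, illustrative triage sketch (the pattern list is hypothetical, loosely based on the reported artifacts, not a published IOC set):

```python
import re

# Hypothetical artifact patterns inspired by the reported indicators:
# structured "Phase X:" labels and verbose debug logging in a binary.
ARTIFACT_PATTERNS = [
    rb"Phase \d+:",
    rb"\[DEBUG\]",
]

def extract_strings(blob: bytes, min_len: int = 6) -> list[bytes]:
    """Pull printable ASCII runs out of a binary, strings(1)-style."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, blob)

def count_dev_artifacts(blob: bytes) -> int:
    """Count occurrences of LLM-coding-agent development artifacts
    among the printable strings of a binary."""
    total = 0
    for s in extract_strings(blob):
        for pat in ARTIFACT_PATTERNS:
            total += len(re.findall(pat, s))
    return total
```

A nonzero count is only a weak heuristic signal, of course; legitimate software also ships debug strings, so any hit needs manual follow-up.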
Threat Actors Target Misconfigured Proxies for Paid LLM Access
Threat actors are systematically targeting misconfigured proxy servers to gain unauthorized access to commercial large language model (LLM) services. The campaign, which began in late December, has probed over 73 LLM endpoints and generated more than 80,000 sessions. The attackers use low-noise prompts to query endpoints without triggering security alerts. GreyNoise's report indicates two distinct campaigns, one of which exploits server-side request forgery (SSRF) vulnerabilities to force servers to connect to attacker-controlled infrastructure. The other campaign involves high-volume enumeration of exposed or misconfigured LLM endpoints. The targeted models include those from major providers such as OpenAI, Anthropic, Meta, Google, Mistral, Alibaba, and xAI. The activity is likely part of an organized reconnaissance effort to catalog accessible LLM services, though no exploitation or data theft has been observed yet.
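The SSRF abuse described above works because servers fetch attacker-supplied URLs without checking where they resolve. A minimal defensive sketch using only the standard library — the function name and blocked ranges are illustrative, not drawn from the report:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url: str) -> bool:
    """Reject URLs that would make a server connect to internal, loopback,
    link-local, or reserved addresses (classic SSRF targets, including
    cloud metadata services on 169.254.169.254)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve the hostname; attackers often hide internal IPs behind DNS.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback or addr.is_link_local
                or addr.is_reserved or addr.is_multicast):
            return False
    return True
```

Production guards also need to pin the resolved address for the actual connection (to defeat DNS rebinding), but the resolve-then-classify step above is the core of the mitigation.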