Accidental disclosure of Anthropic's closed-source Claude Code implementation via NPM package
Summary
Anthropic accidentally exposed the closed-source implementation of its Claude Code AI coding assistant through a packaging error in an NPM release. The leak occurred when version 2.1.88 of Claude Code included a 60 MB source map file (`cli.js.map`) containing approximately 1,900 files and 500,000 lines of internal source code. No customer data or credentials were involved. The exposed code has since propagated widely on platforms like GitHub, prompting Anthropic to issue DMCA takedown notices. The incident stemmed from a human error during release packaging, not a security breach, and Anthropic is implementing measures to prevent recurrence. The disclosed code reveals undocumented features, including a "Proactive mode" for 24/7 autonomous coding and a "Dream" mode for background problem-solving, along with details of Claude-exclusive functionality.
Timeline
- 01.04.2026 03:32 · 1 article
Claude Code closed-source implementation exposed via NPM packaging error
Anthropic’s closed-source Claude Code AI coding assistant had its internal source code (1,900 files, 500,000 lines) accidentally exposed via version 2.1.88 of its NPM package due to inclusion of a 60 MB `cli.js.map` file containing embedded source content. The exposure was caused by human error during packaging, not a security breach. Anthropic has since issued DMCA takedowns for the leaked code and is implementing controls to prevent recurrence.
Sources:
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
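The failure mode here is well understood: npm publishes whatever `package.json`'s `files` field (or the absence of an ignore rule) allows into the tarball, so a stray build artifact like `cli.js.map` ships unless it is explicitly excluded. One common class of guard, sketched below in Python, is a pre-publish CI check that scans the tarball produced by `npm pack` for source maps. This is a hypothetical illustration of such a control, not Anthropic's stated fix.

```python
import tarfile

def find_source_maps(tarball_path):
    """Return the names of any .map files inside an npm tarball (.tgz).

    npm tarballs place all contents under a top-level "package/" directory;
    a non-empty result from this check should fail the publish step.
    """
    with tarfile.open(tarball_path, "r:gz") as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".map")]
```

Running this against the output of `npm pack --dry-run`'s real counterpart (`npm pack`) before `npm publish` would have flagged the 60 MB `cli.js.map` file.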
Information Snippets
- Anthropic accidentally published the internal source code for its closed-source Claude Code AI assistant in version 2.1.88 of the NPM package.
First reported: 01.04.2026 03:32 · 2 sources, 2 articles
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
- Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms — thehackernews.com — 01.04.2026 09:12
- The leak occurred via a 60 MB source map file (`cli.js.map`) included in the NPM package; because the map's `sourcesContent` field embeds the full text of every original file, approximately 1,900 files and 500,000 lines of source code could be reconstructed from it.
First reported: 01.04.2026 03:32 · 1 source, 1 article
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
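Reconstruction from such a file is trivial because a version-3 source map's `sourcesContent` array carries the complete text of each entry in its `sources` array. A minimal sketch of the extraction (file names, path prefixes, and the helper itself are illustrative, not taken from the actual leak):

```python
import json
import os

def extract_sources(map_path, out_dir):
    """Write every source file embedded in a source map's sourcesContent
    array to out_dir, returning the number of files written."""
    with open(map_path, encoding="utf-8") as f:
        source_map = json.load(f)

    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []

    written = 0
    for name, text in zip(sources, contents):
        if text is None:  # entries may legitimately omit embedded content
            continue
        # Crude normalization for this sketch: drop bundler prefixes such
        # as "webpack://" and any leading "./" or "/" characters.
        rel = name.replace("webpack://", "").lstrip("./")
        dest = os.path.join(out_dir, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "w", encoding="utf-8") as out:
            out.write(text)
        written += 1
    return written
```

Pointing a loop like this at a 60 MB `.map` file is all it takes to materialize the full source tree, which is why the exposure spread so quickly once noticed.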
- Anthropic confirmed the exposure was a packaging error caused by human mistake, not a security breach, and that no customer data or credentials were exposed.
First reported: 01.04.2026 03:32 · 2 sources, 2 articles
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
- Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms — thehackernews.com — 01.04.2026 09:12
- The exposed code has spread across GitHub and other platforms, leading Anthropic to issue DMCA takedown notices to have it removed.
First reported: 01.04.2026 03:32 · 2 sources, 2 articles
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
- Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms — thehackernews.com — 01.04.2026 09:12
- The disclosed source code reveals undocumented features such as a "Proactive mode" for continuous autonomous coding and a "Dream" mode for background problem-solving.
First reported: 01.04.2026 03:32 · 1 source, 1 article
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
- Anthropic is investigating a separate bug causing faster-than-expected exhaustion of usage limits in Claude Code, affecting Pro, Max, and Personal plans.
First reported: 01.04.2026 03:32 · 1 source, 1 article
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
- Users reported hitting usage limits within minutes of interaction rather than hours; Anthropic listed this as a top-priority issue as of March 31, 2026.
First reported: 01.04.2026 03:32 · 1 source, 1 article
- Claude Code source code accidentally leaked in NPM package — www.bleepingcomputer.com — 01.04.2026 03:32
Similar Happenings
Claude Code Vulnerabilities Enable Remote Code Execution and API Key Theft
Multiple vulnerabilities in Anthropic's Claude Code AI-powered coding assistant allow remote code execution and API key exfiltration. The flaws stem from configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables. Three vulnerabilities were identified, with fixes released in versions 1.0.87, 1.0.111, and 2.0.65. Exploitation could lead to arbitrary code execution, data exfiltration, and unauthorized access to AI infrastructure. The vulnerabilities highlight the risks associated with AI-powered tools that execute commands and initiate network communication autonomously.
Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution
Three vulnerabilities in the mcp-server-git, maintained by Anthropic, allow file access, deletion, and code execution via prompt injection. The flaws have been addressed in versions 2025.9.25 and 2025.12.18. The vulnerabilities include path traversal and argument injection issues that can be exploited to manipulate Git repositories and execute arbitrary code. The issues were disclosed by Cyata researcher Yarden Porat, highlighting the risks of prompt injection attacks without direct system access. The vulnerabilities affect all versions of mcp-server-git released before December 8, 2025, and apply to default installations. An attacker only needs to influence what an AI assistant reads to trigger the vulnerabilities. The flaws allow attackers to execute code, delete arbitrary files, and load arbitrary files into a large language model's context. While the vulnerabilities do not directly exfiltrate data, sensitive files may still be exposed to the AI, creating downstream security and privacy risks. The vulnerabilities have been assigned CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145.
Lies-in-the-Loop Attack Exploits AI Coding Agents
A new attack vector called 'lies-in-the-loop' (LITL) exploits AI coding agents to deceive users into granting permissions for dangerous actions. The attack manipulates AI agents into presenting seemingly safe contexts, leveraging human trust and fallibility. This technique was demonstrated on Anthropic's Claude Code and Microsoft Copilot Chat, showing potential for software supply chain attacks. The LITL attack exploits the intersection of agentic tooling and human fallibility, targeting AI agents that rely on human-in-the-loop (HITL) interactions for safety and security approvals. The attack can be applied to any AI agent that uses HITL mechanisms. The researchers from Checkmarx Zero demonstrated the attack by convincing Claude Code to run arbitrary commands, including a command injection that could lead to a software supply chain attack. The attack highlights the risks of prompt injection and the need for vigilance in reviewing AI-generated prompts. The research also shows that attackers can manipulate HITL dialogs to appear harmless, even though approving them triggers arbitrary code execution. The attack can originate from indirect prompt injections that poison the agent's context long before the dialog is shown. The researchers recommend a defense-in-depth approach to mitigate the risks.
GPUGate Malware Campaign Targets IT Firms in Western Europe
The **GPUGate malware campaign** continues to evolve, now leveraging **Claude AI artifacts and Google Ads** to distribute **MacSync and AMOS infostealers** via **ClickFix attacks**. Over **15,600 users** have accessed malicious Claude-generated guides, which instruct victims to execute Terminal commands fetching malware payloads. This follows earlier waves abusing **ChatGPT/Grok chats, fake GitHub repositories, and malvertising** to deploy stealers targeting credentials, crypto wallets, and system data. The campaign, active since **April 2023**, has expanded from traditional phishing to **abusing AI ecosystems, supply-chain weaknesses, and trusted platforms** (e.g., Homebrew, LogMeIn, AI assistants). Russian-speaking actors operate **AMOS as a Malware-as-a-Service (MaaS)**, with stolen logs sold in underground markets to fuel fraud, ransomware, and account takeovers. The latest **Claude artifact abuse** underscores the shift toward **high-impact, scalable distribution channels**, exploiting weak platform vetting and user trust in AI-generated content. Organizations should monitor for **suspicious Terminal activity, C2 traffic to domains like `a2abotnet[.]com`, and unauthorized data egress** while educating users on **ClickFix-style lures** and unverified AI tool instructions.