
Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

Malicious OpenClaw AI Coding Assistant Extension on VS Code Marketplace

3 unique sources, 4 articles

Summary


A malicious Microsoft Visual Studio Code (VS Code) extension named "ClawdBot Agent - AI Coding Assistant" was discovered on the official Extension Marketplace. Posing as a free AI coding assistant, the extension stealthily dropped a malicious payload: it executed a binary named "Code.exe" that installed a legitimate remote desktop program, granting attackers persistent remote access to compromised hosts. It also incorporated multiple fallback mechanisms to ensure payload delivery, including retrieving a DLL from Dropbox and fetching payloads from hard-coded URLs. Microsoft removed the extension after cybersecurity researchers reported it.

Separately, security researchers found hundreds of unauthenticated Moltbot instances online, exposing sensitive data and credentials. Moltbot, an open-source personal AI assistant, can run 24/7 locally, maintaining persistent memory and executing scheduled tasks; insecure deployments can lead to sensitive data leaks, corporate data exposure, credential theft, and command execution. Hundreds of Clawdbot Control admin interfaces are exposed online due to reverse proxy misconfiguration, allowing unauthenticated access and root-level system access. Users are advised to audit their configurations, revoke connected service integrations, and implement network controls to mitigate these risks.

More than 230 malicious packages for OpenClaw (formerly Moltbot and ClawdBot) were published in less than a week on the tool's official registry and on GitHub. These malicious skills impersonate legitimate utilities and drop information-stealing malware onto users' systems, targeting sensitive data such as API keys, wallet private keys, SSH credentials, and browser passwords.

Finally, Moltbook, a self-styled social networking platform built for AI agents, contained a misconfigured database that allowed full read and write access to all data. The exposure stemmed from a Supabase API key embedded in client-side JavaScript, granting unauthenticated access to the entire production database. Researchers accessed 1.5 million API authentication tokens, 30,000 email addresses, and thousands of private messages between agents. The key allowed attackers to impersonate any agent on the platform, post content, send messages, and interact as that agent; unauthenticated users could also edit existing posts, inject malicious content or prompt-injection payloads, and deface the site.

Timeline

  1. 03.02.2026 12:00 · 1 article · 15h ago

    Moltbook database misconfiguration exposes user data

    A self-styled social networking platform built for AI agents, Moltbook, contained a misconfigured database that allowed full read and write access to all data. The exposure was due to a Supabase API key exposed in client-side JavaScript, granting unauthenticated access to the entire production database. Researchers accessed 1.5 million API authentication tokens, 30,000 email addresses, and thousands of private messages between agents. The API key exposure allowed attackers to impersonate any agent on the platform, post content, send messages, and interact as that agent. Unauthenticated users could edit existing posts, inject malicious content or prompt injection payloads, and deface the site. The platform had 17,000 human 'owners' registered, and humans could post content disguised as 'AI agents' via a basic POST request. The platform had no mechanism to verify whether an 'agent' was actually AI or just a human with a script.
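
A hedged illustration of the underlying failure mode: Supabase keys are JWTs that ship to the browser by design, so finding one in a bundle is not itself a leak, but a `service_role` key, or an anon key without Row Level Security, grants exactly this kind of full access. The following sketch (regexes and names are illustrative, not from the report) scans client-side JavaScript for Supabase endpoints and JWT-shaped keys as a first audit step:

```python
import re

# Heuristic patterns: Supabase project endpoints follow *.supabase.co, and
# Supabase API keys are JWTs whose segments start with "eyJ" (base64 of '{"').
SUPABASE_URL_RE = re.compile(r"https://[a-z0-9]+\.supabase\.co")
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_supabase_exposure(js_bundle: str) -> dict:
    """Flag Supabase endpoints and JWT-shaped keys embedded in client-side JS."""
    return {
        "endpoints": sorted(set(SUPABASE_URL_RE.findall(js_bundle))),
        "jwt_like_keys": sorted(set(JWT_RE.findall(js_bundle))),
    }
```

Any hit should prompt two follow-up checks: decode the JWT payload to confirm its role claim is `anon` rather than `service_role`, and verify that Row Level Security policies actually restrict what that role can read and write.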

  2. 02.02.2026 21:11 · 1 article · 1d ago

    Malicious OpenClaw skills push password-stealing malware

    More than 230 malicious packages for OpenClaw (formerly Moltbot and ClawdBot) have been published in less than a week on the tool's official registry and on GitHub. These malicious skills impersonate legitimate utilities and drop information-stealing malware onto users' systems, targeting sensitive data such as API keys, wallet private keys, SSH credentials, and browser passwords. The malware dropped on macOS systems is identified as a variant of NovaStealer that can bypass Gatekeeper and target various sensitive data. Koi Security found 341 malicious skills on ClawHub, attributed them to a single campaign, and also identified 29 typosquats of the ClawHub name. OpenClaw's creator, Peter Steinberger, acknowledged that the volume of skill submissions makes thorough review infeasible and advised users to verify the safety of skills before deployment. Users are also advised to isolate the AI assistant in a virtual machine, grant it restricted permissions, and secure remote access to it.
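
The advice to vet skills before deployment can be partly automated. A minimal sketch, assuming skills are plain files in a directory (the layout and indicator patterns are illustrative, not OpenClaw's actual format), that flags common red flags such as hard-coded IP addresses and curl-pipe-to-shell installers:

```python
import re
from pathlib import Path

# Illustrative red-flag patterns; real vetting needs far more than regexes.
RISK_PATTERNS = {
    "hardcoded_ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "curl_pipe_shell": re.compile(r"curl[^\n]*\|\s*(?:ba)?sh"),
    "large_base64_blob": re.compile(r"[A-Za-z0-9+/]{120,}={0,2}"),
}

def audit_skill_dir(skill_dir: Path) -> dict:
    """Map each file in a skill folder to the indicator names it triggers."""
    findings = {}
    for path in skill_dir.rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        hits = [name for name, pat in RISK_PATTERNS.items() if pat.search(text)]
        if hits:
            findings[str(path.relative_to(skill_dir))] = hits
    return findings
```

A scan like this only narrows the review queue; it cannot prove a skill safe, which is why VM isolation and restricted permissions remain the primary controls.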

  3. 28.01.2026 22:26 · 2 articles · 6d ago

    Supply-chain attack via Moltbot Skill demonstrated

    A supply-chain attack against Moltbot users was demonstrated via a Skill that contained a minimal 'ping' payload. The developer published the skill on the official MoltHub (ClawdHub) registry and inflated its download count, making it the most popular asset. In less than eight hours, 16 developers in seven countries downloaded the artificially promoted skill.

  4. 28.01.2026 19:46 · 3 articles · 6d ago

    Malicious Moltbot AI Coding Assistant Extension Discovered and Removed

    On January 27, 2026, a malicious Microsoft Visual Studio Code (VS Code) extension named "ClawdBot Agent - AI Coding Assistant" was discovered on the official Extension Marketplace. The extension, which posed as a free AI coding assistant, stealthily dropped a malicious payload on compromised hosts. The extension was taken down by Microsoft after being reported by cybersecurity researchers. The malicious extension executed a binary named "Code.exe" that deployed a legitimate remote desktop program, granting attackers persistent remote access. The extension also incorporated multiple fallback mechanisms to ensure payload delivery, including retrieving a DLL from Dropbox and using hard-coded URLs to obtain the payloads. Additionally, security researchers found hundreds of unauthenticated Moltbot instances online, exposing sensitive data and credentials. Users are advised to audit their configurations, revoke connected service integrations, and implement network controls to mitigate potential risks. Moltbot, an open-source personal AI assistant, can run 24/7 locally, maintaining a persistent memory and executing scheduled tasks. However, insecure deployments can lead to sensitive data leaks, corporate data exposure, credential theft, and command execution. Hundreds of Clawdbot Control admin interfaces are exposed online due to reverse proxy misconfiguration, allowing unauthenticated access and root-level system access.
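
The exposed Clawdbot Control interfaces illustrate a general rule: an admin service should listen on loopback and be reached only through an authenticating reverse proxy. A small hedged sketch (not from the reports) that classifies a service's bind address using Python's standard ipaddress module:

```python
import ipaddress

def bind_exposure(addr: str) -> str:
    """Classify how reachable a service is, given the address it binds to."""
    ip = ipaddress.ip_address(addr)
    if ip.is_unspecified:   # 0.0.0.0 or :: listens on every interface
        return "exposed-all-interfaces"
    if ip.is_loopback:      # 127.0.0.1 / ::1 is reachable only locally
        return "loopback-only"
    if ip.is_private:       # RFC 1918 addresses are LAN-reachable
        return "lan-reachable"
    return "publicly-routable"
```

Anything other than "loopback-only" means the admin interface itself must enforce authentication, because a misconfigured reverse proxy can be bypassed by connecting to the backing port directly.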



Similar Happenings

341 Malicious ClawHub Skills Target OpenClaw Users with Atomic Stealer

A security audit by Koi Security identified 341 malicious skills on ClawHub, a marketplace for OpenClaw users, which distribute Atomic Stealer malware to steal sensitive data from macOS and Windows systems. The campaign, codenamed ClawHavoc, uses social engineering tactics to trick users into installing malicious prerequisites. The skills masquerade as legitimate tools, including cryptocurrency utilities, YouTube tools, and finance applications. OpenClaw has added a reporting feature to mitigate the issue. The malware targets API keys, credentials, and other sensitive data, exploiting the weak vetting of the open-source skills ecosystem. The campaign coincides with a report from OpenSourceMalware highlighting the same threat. The intersection of AI agent capabilities and persistent memory amplifies the risk, enabling stateful, delayed-execution attacks. New findings reveal almost 400 fake crypto-trading add-ons for the viral Moltbot/OpenClaw AI assistant that can lead users to install information-stealing malware. These add-ons, called skills, masquerade as cryptocurrency trading automation tools and target ByBit, Polymarket, Axiom, Reddit, and LinkedIn. The malicious skills share the same command-and-control (C2) infrastructure, 91.92.242.30, and use sophisticated social engineering to convince users to execute malicious commands that steal credentials and crypto assets, including exchange API keys, wallet private keys, SSH credentials, and browser passwords.

OpenClaw AI Agent Security Concerns in Business Environments

OpenClaw, an open-source AI agent formerly known as MoltBot and ClawdBot, has rapidly gained popularity on GitHub, raising significant security concerns due to its extensive access to user systems and data. The AI agent can execute commands, manage files, and interact with various platforms, posing risks such as prompt injection and unauthorized access. Despite its growth, security experts warn about the dangers of integrating such AI agents into corporate environments without proper safeguards. The project has seen a 14-fold increase in adoption within a week, with over 113,000 stars on GitHub. However, its rapid development and extensive access capabilities have led to concerns about potential data breaches and supply chain risks. Experts emphasize the need for better security practices to mitigate these risks.

454,000+ Malicious Open Source Packages Discovered in 2026

Researchers reported a surge in malicious open source packages, with 454,648 new malicious packages discovered in 2026. These packages are increasingly used in sustained, industrialized campaigns, often state-sponsored, targeting developer machines and CI/CD pipelines. The threat landscape includes repository abuse, potentially unwanted apps, and multi-stage attacks involving host information exfiltration, droppers, and backdoors. Additionally, AI-assisted development is exacerbating the risk by recommending non-existent versions and failing to check for malicious indicators.

Security Risks in Agentic AI Workflows and Machine Control Protocols

AI agents are increasingly capable of executing code end-to-end, which introduces significant security risks. Model Context Protocol (MCP) servers mediate AI agent actions, including tool access and API permissions. Misconfigured or compromised MCP components can lead to unauthorized actions, as demonstrated by CVE-2025-6514, where a flaw in an OAuth proxy enabled remote code execution. This highlights the need to secure MCP deployments to prevent AI agents from being turned into attack vectors. A webinar will address these risks, covering MCP security, shadow API key management, and practical controls to secure agentic AI without hindering development.

Attackers Optimize Traditional TTPs with AI in 2025

In 2025, attackers continued to leverage traditional techniques such as supply chain attacks and phishing, but with increased efficiency and scale due to AI advancements. The Shai Hulud NPM campaign demonstrated how a single compromised package can affect thousands of downstream projects. AI has lowered the barrier to entry for cybercriminals, enabling lean teams or even individuals to execute sophisticated attacks. Phishing remains effective, with one click potentially compromising large-scale systems. Malicious Chrome extensions bypassing official stores highlight the ongoing challenge of automated reviews and human moderators keeping pace with attacker sophistication.