CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

AI-Powered Malware Families Deployed in the Wild

First reported
Last updated
3 unique sources, 3 articles

Summary

Hide ▲

Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These malware families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence. PromptFlux, an experimental VBScript dropper, uses Google's LLM Gemini to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally on removable drives and mapped network shares. The malware is under development or testing phase and is assessed to be financially motivated. PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine. The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses. Various threat actors, including those from China, Iran, and North Korea, have been observed abusing AI models like Gemini across different stages of the attack lifecycle. The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.

Timeline

  1. 05.11.2025 16:59 3 articles · 5d ago

    AI-Powered Malware Families Deployed in the Wild

    PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine.

    Show sources

Information Snippets

Similar Happenings

SesameOp malware leverages OpenAI Assistants API for command-and-control

A new backdoor malware, SesameOp, uses the OpenAI Assistants API as a covert command-and-control channel. The malware was discovered during an investigation into a July 2025 cyberattack. It allowed attackers to gain persistent access to compromised environments and remotely manage backdoored devices for several months. The attackers leveraged legitimate cloud services, avoiding detection and traditional incident response measures. The malware employs a combination of symmetric and asymmetric encryption to secure communications. It uses a heavily obfuscated loader and a .NET-based backdoor deployed through .NET AppDomainManager injection into Microsoft Visual Studio utilities. The attack chain includes internal web shells and malicious processes designed for long-term espionage. The malware uses a loader component named "Netapi64.dll" and a .NET-based backdoor named "OpenAIAgent.Netapi64". The malware supports three types of values in the description field of the Assistants list retrieved from OpenAI: SLEEP, Payload, and Result. Microsoft and OpenAI collaborated to investigate the abuse of the API, leading to the disabling of the account and API key used in the attacks. The malware does not exploit a vulnerability in OpenAI's platform but misuses built-in capabilities of the Assistants API. The OpenAI Assistants API is scheduled for deprecation in August 2026 and will be replaced by a new Responses API.

Open Source Benchmark Framework b3 for LLM Security in AI Agents

The UK AI Security Institute (AISI) has launched an open-source benchmark framework, b3, in collaboration with Check Point and Lakera. This tool aims to enhance the security of large language models (LLMs) that power AI agents by identifying vulnerabilities in individual LLM calls. The b3 framework uses 'threat snapshots' to test for various attack vectors, including system prompt exfiltration, phishing link insertion, and malicious code injection. The framework is designed to make LLM security measurable and comparable across different models and applications. The b3 benchmark includes 10 representative agent threat snapshots and a dataset of 19,433 adversarial attacks from Lakera’s Gandalf initiative. It provides developers with a way to assess and improve the security posture of their models. The framework reveals that models with step-by-step reasoning tend to be more secure, and open-weight models are rapidly closing the security gap with closed systems.

CAPI Backdoor Targets Russian Auto and E-Commerce Firms via .NET Malware

A new campaign targeting the Russian automobile and e-commerce sectors uses a previously undocumented .NET malware, CAPI Backdoor. The attack chain involves phishing emails with ZIP archives containing a decoy document and a malicious Windows shortcut file. The malware, disguised as 'adobe.dll', uses legitimate Microsoft binaries to execute and establish persistence. It can steal data from browsers, take screenshots, and exfiltrate information. The campaign includes a domain impersonating a legitimate Russian automotive site.

Google Gemini AI Vulnerabilities Allowing Prompt Injection and Data Exfiltration

Researchers disclosed three vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. The vulnerabilities could have been exploited to inject malicious prompts, manipulate AI behavior, and exfiltrate user data. The flaws highlight the potential risks of AI tools being used as attack vectors rather than just targets. The Gemini Search Personalization model's flaw allowed attackers to manipulate AI behavior and leak user data by injecting malicious search queries via JavaScript from a malicious website. The Gemini Cloud Assist flaw allowed attackers to execute instructions via prompt injections hidden in log content, potentially compromising cloud resources and enabling phishing attacks. The Gemini Browsing Tool flaw allowed attackers to exfiltrate a user's saved information and location data by exploiting the tool's 'Show thinking' feature. Google has made specific changes to mitigate each flaw, including rolling back vulnerable models, hardening search personalization features, and preventing data exfiltration from browsing in indirect prompt injections.

Discovery of MalTerminal Malware Leveraging GPT-4 for Ransomware and Reverse Shell

Researchers have identified MalTerminal, a malware that incorporates GPT-4 for generating ransomware code and reverse shells. This marks the earliest known instance of LLM-embedded malware. The malware was presented at the LABScon 2025 security conference. MalTerminal was likely a proof-of-concept or red team tool, never deployed in the wild. It includes Python scripts and a defensive tool called FalconShield. The use of LLMs in malware represents a new challenge for cybersecurity defenses. Additionally, threat actors are using LLMs to bypass email security layers by embedding hidden prompts in phishing emails. This technique deceives AI-powered security scanners, allowing malicious emails to reach users' inboxes. The emails exploit the Follina vulnerability (CVE-2022-30190) to deliver additional malware and disable Microsoft Defender Antivirus. AI-powered site builders are also being exploited to host fake CAPTCHA pages leading to phishing websites, stealing user credentials and sensitive information.