
Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

State-Backed Hackers Abuse AI Models for Advanced Cyber Attacks

3 unique sources, 4 articles

Summary


Google's Threat Intelligence Group (GTIG) has identified new malware families that leverage artificial intelligence (AI) and large language models (LLMs) for dynamic self-modification during execution. These families, including PromptFlux, PromptSteal, FruitShell, QuietVault, and PromptLock, demonstrate advanced capabilities for evading detection and maintaining persistence.

PromptFlux, an experimental VBScript dropper, uses Google's Gemini LLM to generate obfuscated VBScript variants and evade antivirus software. It attempts persistence via Startup folder entries and spreads laterally over removable drives and mapped network shares. The malware appears to be in a development or testing phase and is assessed to be financially motivated.

PromptSteal is a Python-based data miner that queries the Qwen2.5-Coder-32B-Instruct LLM to generate one-line Windows commands that collect information and documents from specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine.

Separately, state-backed hackers from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia have used Gemini AI across all stages of an attack, including reconnaissance, phishing lure creation, C2 development, and data exfiltration. Chinese threat actors used Gemini to automate vulnerability analysis and produce targeted testing plans against specific US-based targets. The Iranian adversary APT42 leveraged Gemini for social engineering campaigns and to accelerate the creation of tailored malicious tools.

The use of AI in malware enables adversaries to create more versatile and adaptive threats, posing significant challenges for cybersecurity defenses.
The underground market for AI-powered cybercrime tools is also growing, with offerings ranging from deepfake generation to malware development and vulnerability exploitation.

Timeline

  1. 12.02.2026 09:00 1 article · 16h ago

    State-backed hackers abuse Gemini AI for all attack stages

    State-backed hackers from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia have used Gemini AI for all stages of an attack, including reconnaissance, phishing lure creation, C2 development, and data exfiltration. Chinese threat actors used Gemini to automate vulnerability analysis and provide targeted testing plans against specific US-based targets. Iranian adversary APT42 leveraged Gemini for social engineering campaigns and to speed up the creation of tailored malicious tools.

  2. 12.02.2026 09:00 1 article · 16h ago

    AI-powered malware families HonestCue and CoinBait identified

    HonestCue is a proof-of-concept malware framework that uses the Gemini API to generate C# code for second-stage malware. CoinBait is a React SPA-wrapped phishing kit developed using AI code generation tools.

  3. 12.02.2026 09:00 1 article · 16h ago

    Cybercriminals use generative AI in ClickFix campaigns

    Cybercriminals used generative AI services in ClickFix campaigns to deliver the AMOS info-stealing malware for macOS. Users were lured into executing malicious commands via malicious ads placed in search results for specific troubleshooting queries.

  4. 12.02.2026 09:00 1 article · 16h ago

    Gemini AI faces model extraction and distillation threats

    Gemini AI has faced model extraction and distillation attempts, posing a significant commercial and intellectual property threat. Google has implemented targeted defenses in Gemini's classifiers to mitigate abuse and maintain robust security.

  5. 05.11.2025 16:59 4 articles · 3mo ago

    AI-Powered Malware Families Deployed in the Wild

    PromptSteal is a data miner written in Python that queries the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands to collect information and documents in specific folders and send the data to a command-and-control (C2) server. It is used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine. Additionally, state-backed hackers from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia have used Gemini AI for all stages of an attack, including reconnaissance, phishing lure creation, C2 development, and data exfiltration.


Similar Happenings

AI-Driven 'Fifth Wave' of Cybercrime Expands with Dark LLMs and Deepfake Kits

Group-IB's report identifies a new 'fifth wave' of cybercrime, characterized by the widespread adoption of AI and generative AI (GenAI) tools. This wave, termed 'weaponized AI,' enables cheaper, faster, and more scalable cybercrime. Key developments include the proliferation of deepfake kits, AI-powered phishing kits, and proprietary 'dark LLMs' used for various malicious activities. The report highlights the increasing sophistication and accessibility of these tools, which are fueling a surge in cybercrime activities.

VoidLink Malware Framework Targets Cloud and Container Environments

VoidLink is a Linux-based command-and-control (C2) framework capable of long-term intrusion across cloud and enterprise environments. The malware generates implant binaries designed for credential theft, data exfiltration, and stealthy persistence on compromised systems. VoidLink combines multi-cloud targeting with container and kernel awareness in a single Linux implant, fingerprinting environments across major cloud providers and adjusting its behavior based on what it finds.

The implant harvests credentials from environment variables, configuration files, and metadata APIs, and profiles security controls, kernel versions, and container runtimes before activating additional modules. VoidLink employs a modular plugin-based architecture that loads functionality as needed, including credential harvesting, environment fingerprinting, container escape, Kubernetes privilege escalation, and kernel-level stealth. The malware uses AES-256-GCM over HTTPS for encrypted C2 traffic, designed to resemble normal web activity.

VoidLink stands out for its apparent development using a large language model (LLM) coding agent with limited human review, as indicated by unusual development artifacts such as structured "Phase X:" labels, verbose debug logs, and documentation left inside the production binary. The research concludes that VoidLink is not a proof-of-concept but an operational implant with live infrastructure, highlighting how AI-assisted development is lowering the barrier to producing functional, modular, and hard-to-detect malware.
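The LLM-development artifacts reported for VoidLink (structured "Phase X:" labels, verbose debug logs, and documentation left inside the production binary) suggest a simple defensive triage heuristic: scan extracted strings for such markers to flag samples for closer review. A minimal sketch follows; the pattern list and function name are illustrative assumptions, not part of any published detection rule.

```python
import re

# Illustrative markers of LLM-assisted development shipped in a binary,
# modeled on the artifacts described in the VoidLink report. A real rule
# set would be tuned against a corpus of known-clean binaries.
ARTIFACT_PATTERNS = [
    re.compile(rb"Phase \d+:"),   # structured phase labels
    re.compile(rb"\[DEBUG\]"),    # verbose debug logging
    re.compile(rb"TODO|FIXME"),   # leftover development notes
]

def scan_for_artifacts(data: bytes) -> list[str]:
    """Return the artifact markers found in a raw binary blob."""
    hits = []
    for pattern in ARTIFACT_PATTERNS:
        if pattern.search(data):
            hits.append(pattern.pattern.decode())
    return hits

# Usage: scan_for_artifacts(open("sample.bin", "rb").read())
```

A hit is only a triage signal, not a verdict: legitimate software also ships debug strings, so this kind of check is best used to prioritize samples for manual analysis rather than to block anything outright.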

Attackers Optimize Traditional TTPs with AI in 2025

In 2025, attackers continued to leverage traditional techniques such as supply chain attacks and phishing, but with increased efficiency and scale due to AI advancements. The Shai Hulud NPM campaign demonstrated how a single compromised package can affect thousands of downstream projects. AI has lowered the barrier to entry for cybercriminals, enabling lean teams or even individuals to execute sophisticated attacks. Phishing remains effective, with one click potentially compromising large-scale systems. Malicious Chrome extensions bypassing official stores highlight the ongoing challenge of automated reviews and human moderators keeping pace with attacker sophistication.

AI-Driven Cyberattacks Exploit Network Vulnerabilities

Adversarial AI-based attacks, such as those by Scattered Spider, are accelerating and leveraging living-off-the-land methods to spread and evade detection. These attacks use AI orchestration to perform network reconnaissance, discover vulnerabilities, move laterally, and harvest data at speeds that overwhelm manual detection methods. The Cloud Security Alliance report highlights over 70 ways autonomous AI-based agents can attack enterprise systems, expanding the attack surface beyond traditional security practices. Network Detection and Response (NDR) systems are increasingly being adopted to counter these AI-driven threats by providing real-time monitoring, analyzing network data, and identifying abnormal traffic patterns. NDR solutions can detect fast-moving, polymorphic attacks, summarize network activities, and render verdicts on potential threats, reducing the pressure on SOC analysts. Recent reports from Google's Threat Intelligence Group and Anthropic have revealed new AI-fueled attack methods, including the use of LLMs to generate malicious scripts and AI-orchestrated cyber espionage campaigns. Adversaries are also exploiting AV exclusion rules and using steganography techniques to evade detection. The combined use of NDR and EDR is essential for detecting and mitigating these sophisticated attacks.

Predator Spyware Exploits Zero-Click Infection Vector via Malicious Ads

Predator spyware, developed by Intellexa, has been using a zero-click infection mechanism called Aladdin, which infects targets by displaying malicious advertisements. This vector is hidden behind shell companies across multiple countries and leverages the commercial mobile advertising system to deliver malware. The spyware is still operational and actively developed, with additional delivery vectors like Triton targeting Samsung Exynos devices. The infection occurs when a target views a malicious ad, which triggers a redirection to Intellexa’s exploit delivery servers. The ads are served through a complex network of advertising firms, making defense measures challenging. Despite sanctions and investigations, including fines from the Greek Data Protection Authority, Intellexa remains active and prolific in zero-day exploitation. Recent leaks reveal that Intellexa's Predator spyware has been marketed under various names, including Helios, Nova, Green Arrow, and Red Arrow. The spyware exploits multiple zero-day vulnerabilities in Android and iOS devices, and uses frameworks like JSKit for native code execution. Intellexa also has the capability to remotely access the surveillance systems of its customers using TeamViewer. The spyware collects extensive data from targeted devices, including messaging apps, calls, emails, device locations, screenshots, passwords, and other on-device information.