ChatGPT Misuse by Nation-State Actors for Malware Development and Influence Operations
Summary
OpenAI has disrupted multiple activity clusters that misused ChatGPT for cyberattacks, including nation-state actors from Russia, North Korea, and China. These actors used ChatGPT to develop malware, conduct phishing campaigns, and run influence operations. The Russian threat actor developed a remote access trojan (RAT) and a credential stealer, while the North Korean group created malware and command-and-control (C2) infrastructure. The Chinese group UNK_DropPitch generated phishing content and tooling for routine tasks. Additionally, Chinese law enforcement used ChatGPT to draft and edit reports on smear campaigns against Chinese dissidents and Japanese Prime Minister Sanae Takaichi. OpenAI also blocked accounts used for scams, influence operations, and surveillance, including networks from Cambodia, Myanmar, and Nigeria, as well as individuals linked to Chinese government entities.
Timeline
- 26.02.2026 02:00 · 1 article · 23h ago
Russian Threat Actor Uses ChatGPT for Influence Operations in Africa
A Russian threat actor used ChatGPT to generate and edit social media content and longform articles about geopolitical issues in sub-Saharan Africa. The campaign involved 53 articles published on various African news sites under a fake byline. The threat actor removed em dashes from the final generated text to reduce suspicion of AI generation.
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- 08.10.2025 10:16 · 2 articles · 4mo ago
OpenAI Disrupts Nation-State Actors Misusing ChatGPT for Cyberattacks
OpenAI has disrupted multiple activity clusters that misused ChatGPT for cyberattacks. These clusters include Russian, North Korean, and Chinese threat actors who used ChatGPT to develop malware, conduct phishing campaigns, and run influence operations. The Russian threat actor developed a remote access trojan (RAT) and a credential stealer, while the North Korean group created malware and command-and-control (C2) infrastructure. The Chinese group UNK_DropPitch generated phishing content and tooling for routine tasks. Additionally, Chinese law enforcement used ChatGPT to draft and edit reports on smear campaigns against Chinese dissidents and Japanese Prime Minister Sanae Takaichi. OpenAI also blocked accounts used for scams, influence operations, and surveillance, including networks from Cambodia, Myanmar, and Nigeria, as well as individuals linked to Chinese government entities.
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
Information Snippets
- A Russian threat actor used ChatGPT to develop and refine a remote access trojan (RAT) and a credential stealer.
  First reported: 08.10.2025 10:16 · 2 sources, 2 articles
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- A North Korean threat actor used ChatGPT for malware and C2 development, including macOS Finder extensions and Windows Server VPNs.
  First reported: 08.10.2025 10:16 · 2 sources, 2 articles
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- Chinese threat actor UNK_DropPitch used ChatGPT for phishing campaigns and tooling to accelerate routine tasks.
  First reported: 08.10.2025 10:16 · 2 sources, 2 articles
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- OpenAI blocked accounts from Cambodia, Myanmar, Nigeria, and China for scams, influence operations, and surveillance.
  First reported: 08.10.2025 10:16 · 2 sources, 2 articles
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- Threat actors used ChatGPT to remove telltale indicators of AI-generated content, such as em dashes.
  First reported: 08.10.2025 10:16 · 2 sources, 2 articles
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- Anthropic released an open-source auditing tool called Petri to accelerate AI safety research.
  First reported: 08.10.2025 10:16 · 1 source, 1 article
- OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks — thehackernews.com — 08.10.2025 10:16
- Chinese law enforcement used ChatGPT to draft and edit reports on smear campaigns against Chinese dissidents and Japanese Prime Minister Sanae Takaichi.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- The Chinese campaign involved posting negative online comments, impersonating Japanese citizens, and recruiting internet users to generate political pressure.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- The campaign used fake social media accounts and spread positive sentiments about conditions in Inner Mongolia.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- The same individual used ChatGPT for campaigns against Chinese dissidents and a human rights organization.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- A Russian threat actor used ChatGPT to generate and edit social media content and longform articles about geopolitical issues in sub-Saharan Africa.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- The Russian campaign involved 53 articles published on various African news sites under a fake byline.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
- The threat actor removed em dashes from the final generated text to reduce suspicion of AI generation.
  First reported: 26.02.2026 02:00 · 1 source, 1 article
- Chinese Police Use ChatGPT to Smear Japan PM Takaichi — www.darkreading.com — 26.02.2026 02:00
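The em-dash scrubbing described in these snippets amounts to a trivial text post-processing pass. A minimal sketch (the function name is illustrative, not from the reporting) of what such a pass looks like:

```python
import re

def strip_em_dashes(text: str) -> str:
    """Replace em dashes (U+2014), often cited as a tell of
    AI-generated prose, with a plain comma-separated clause."""
    # Collapse any spaces around the em dash into a single ", "
    return re.sub(r"\s*\u2014\s*", ", ", text)
```

For example, `strip_em_dashes("a claim\u2014stated plainly")` returns `"a claim, stated plainly"`. The point of the snippet is only to show how cheap this evasion is; it is one regex, not a sophisticated capability.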
Similar Happenings
US sanctions North Korean entities and individuals for cybercrime and IT worker fraud
The U.S. Treasury Department has imposed sanctions on ten North Korean individuals and entities involved in laundering $12.7 million in cryptocurrency and in IT worker fraud. The sanctions target Ryujong Credit Bank and Korea Mangyongdae Computer Technology Company (KMCTC), along with their respective executives and financial representatives. The move aims to disrupt North Korea's ability to fund its weapons programs and other illicit activities through cybercrime and financial fraud. The Treasury Department has identified $12.7 million in transactions linked to North Korean financial institutions over the past two years. North Korean IT workers have been partnering with foreign freelance programmers to establish business relationships and split revenue. The Treasury Department has accused North Korea of leveraging its IT worker army to gain employment at companies by obfuscating the workers' nationalities and identities, funneling income back to the DPRK.
U.S. Seizes $15 Billion in Crypto from Prince Group's Pig Butchering Scam
The U.S. Department of Justice (DOJ), in coordination with the UK's Foreign, Commonwealth, and Development Office (FCDO) and the U.S. Department of the Treasury's Office of Foreign Assets Control (OFAC), has seized $15 billion in bitcoin from the Prince Group, a criminal organization involved in extensive cryptocurrency investment scams, also known as romance baiting or pig butchering. The group targets victims in the United States and globally, using shell companies, forced labor, and sophisticated money laundering techniques. The Prince Group operates over 100 shell companies across more than 30 countries, with operations dating back to approximately 2015. The group's leader, Chen Zhi, remains at large and is responsible for orchestrating the fraud scheme and bribing officials to avoid law enforcement. The Prince Group maintains links to its operations through corporate proxies, including Jin Bei Group, Golden Fortune Resorts World Ltd, and Byex Exchange. The DOJ, FCDO, and OFAC have sanctioned Chen Zhi and 146 other targets within the Prince Group. The sanctions freeze the assets and properties of the network's operators in the UK, including a mansion in North London, an office building in the City of London, and multiple flats in South London. The Prince Group's operations are incorporated in the British Virgin Islands.
China-aligned UTA0388 Targets Multiple Regions with GOVERSHELL Malware
A China-aligned threat actor, UTA0388, has conducted spear-phishing campaigns targeting North America, Asia, and Europe to deliver the GOVERSHELL backdoor. These campaigns use tailored lures and fictional identities in multiple languages. The malware, which has evolved through several variants, is designed to execute commands and gather system information. The actor has leveraged legitimate services such as Netlify, Sync, and OneDrive to stage archive files, and has used OpenAI's ChatGPT to generate phishing content and assist with malicious workflows. The campaigns have been highly tailored: UTA0388 has shifted from simple phishing links to 'rapport-building phishing,' engaging in extended conversations to build trust with recipients before delivering malicious links and files. The targeting profile indicates a focus on Asian geopolitical issues, particularly Taiwan. A separate campaign targeting European institutions has also been observed, involving the use of PlugX malware. GOVERSHELL has evolved to use encrypted WebSocket and HTTPS communication channels. The campaigns involved archive files containing a legitimate-looking executable and a hidden malicious dynamic link library (DLL), delivered via cloud hosting services such as Netlify and OneDrive and via domain names impersonating major firms such as Microsoft and Apple. The rapid campaign tempo, with up to 26 phishing emails sent within three days, indicates a high level of activity.
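The pairing of a legitimate-looking executable with a hidden malicious DLL in one archive is the classic DLL side-loading lure layout. A minimal triage sketch, assuming ZIP archives (the helper name and heuristic are illustrative, not a detection rule from the reporting), that flags this pattern:

```python
import zipfile

def flag_sideload_candidates(archive_path: str) -> list[str]:
    """Heuristic triage: flag ZIP archives that bundle an .exe together
    with one or more .dll files, a layout consistent with DLL
    side-loading lures. Returns one finding string per executable."""
    with zipfile.ZipFile(archive_path) as zf:
        names = zf.namelist()
    exes = [n for n in names if n.lower().endswith(".exe")]
    dlls = [n for n in names if n.lower().endswith(".dll")]
    if not (exes and dlls):
        return []  # no exe+dll pairing; nothing to flag
    return [f"{exe} shipped alongside DLLs: {', '.join(dlls)}" for exe in exes]
```

Such a co-occurrence check produces false positives on legitimate software bundles, so in practice it would only feed a queue for further analysis (signature checks, DLL export inspection), not block delivery outright.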
AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic
In mid-September 2025, state-sponsored threat actors from China used artificial intelligence (AI) technology developed by Anthropic to orchestrate automated cyber attacks as part of a "highly sophisticated espionage campaign." The attackers used the AI's 'agentic' capabilities to an unprecedented degree, with the model executing attack steps itself. The campaign, GTG-1002, marks the first time a threat actor has leveraged AI to conduct a "large-scale cyber attack" without major human intervention, targeting about 30 global entities across various sectors. Earlier, in July 2025, Anthropic disrupted a sophisticated AI-powered cyberattack operation codenamed GTG-2002. The actor targeted 17 organizations across critical sectors, using Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle. The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs. The actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in their CLAUDE.md file. The operation also included the creation of obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and the development of entirely new TCP proxy code that does not use Chisel libraries at all. When initial evasion attempts failed, Claude Code provided new techniques including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process.
The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually. In February 2026, Anthropic identified industrial-scale campaigns by three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) to illegally extract Claude's capabilities. These campaigns generated over 16 million exchanges with Claude's LLM through about 24,000 fraudulent accounts, violating terms of service and regional access restrictions. The distillation attacks targeted Claude's reasoning capabilities, agentic reasoning, tool use, coding capabilities, and computer vision. Anthropic attributed each campaign to a specific AI lab based on request metadata, IP address correlation, and infrastructure indicators. To counter the threat, Anthropic built classifiers and behavioral fingerprinting systems to identify suspicious distillation attack patterns and implemented enhanced safeguards. Anthropic warned that illicitly distilled models can be used for malicious and harmful purposes, such as developing bioweapons or carrying out malicious cyber activities. Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance. Anthropic does not currently offer commercial access to Claude in China or to subsidiaries of Chinese companies located outside of the country for security reasons.