CyberHappenings logo
☰

Zero-click exploit in enterprise AI agents

First reported
Last updated
📰 1 unique sources, 1 articles

Summary

Hide ▲

A zero-click exploit in enterprise AI agents allows attackers to take over AI assistants using only a user's email address. This exploit enables unauthorized access to sensitive data and manipulation of users through trusted AI advisers. The vulnerability affects major AI assistants integrated with enterprise environments, including Microsoft, Google Workspace, and Salesforce. Organizations are urged to adopt dedicated security programs to manage ongoing risks associated with AI agents. The exploit leverages the extensive access granted to AI assistants within enterprise environments. These assistants can access emails, documents, calendars, and perform actions on behalf of users, making them attractive targets for attackers. Current security measures focusing on prompt injection have proven ineffective, and a defense-in-depth strategy is recommended to mitigate risks.

Timeline

  1. 19.08.2025 22:02 📰 1 articles

    Zero-click exploit in enterprise AI agents disclosed

    A zero-click exploit in enterprise AI agents has been identified, allowing attackers to take over AI assistants using only a user's email address. This exploit enables unauthorized access to sensitive data and manipulation of users through trusted AI advisers. The vulnerability affects major AI assistants integrated with enterprise environments, including Microsoft, Google Workspace, and Salesforce. Organizations are urged to adopt dedicated security programs to manage ongoing risks associated with AI agents.

    Show sources

Information Snippets

  • AI assistants have extensive access to enterprise environments, including emails, documents, and calendars.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources
  • A zero-click exploit allows attackers to take over AI assistants using only a user's email address.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources
  • The exploit affects major AI assistants integrated with enterprise environments, such as Microsoft, Google Workspace, and Salesforce.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources
  • Current security measures focusing on prompt injection have been ineffective.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources
  • A defense-in-depth strategy is recommended to mitigate risks associated with AI agents.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources
  • Organizations are advised to create dedicated security programs to manage ongoing risks associated with AI agents.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources
  • The exploit can be used to manipulate users through trusted AI advisers.

    First reported: 19.08.2025 22:02
    📰 1 source, 1 article
    Show sources

Similar Happenings

Cursor AI editor autoruns malicious code in repositories

A flaw in the Cursor AI editor allows malicious code in repositories to autorun on developer devices. This vulnerability can lead to malware execution, environment hijacking, and credential theft. The issue arises from Cursor disabling the Workspace Trust feature from VS Code, which prevents automatic task execution without explicit user consent. The flaw affects one million users who generate over a billion lines of code daily. The Cursor team has decided not to fix the issue, citing the need to maintain AI and other features. They recommend users enable Workspace Trust manually or use basic text editors for unknown projects. The flaw is part of a broader trend of prompt injections and jailbreaks affecting AI-powered coding tools.

HexStrike AI Exploits Citrix Vulnerabilities Disclosed in August 2025

Threat actors have begun using HexStrike AI to exploit Citrix vulnerabilities disclosed in August 2025. HexStrike AI, an AI-driven security platform, was designed to automate reconnaissance and vulnerability discovery for authorized red teaming operations, but it has been repurposed for malicious activities. The exploitation attempts target three Citrix vulnerabilities, with some threat actors offering access to vulnerable NetScaler instances for sale on darknet forums. The use of HexStrike AI by threat actors significantly reduces the time between vulnerability disclosure and exploitation, increasing the risk of widespread attacks. The tool's automation capabilities allow for continuous exploitation attempts, enhancing the likelihood of successful breaches. Security experts emphasize the urgency of patching and hardening affected systems to mitigate the risks posed by this AI-driven threat. HexStrike AI's client features a retry logic and recovery handling to mitigate the effects of failures in any individual step on its complex operations. HexStrike AI has been open-source and available on GitHub for the last month, where it has already garnered 1,800 stars and over 400 forks. Hackers started discussing HexStrike AI on hacking forums within hours of the Citrix vulnerabilities disclosure. HexStrike AI has been used to automate the exploitation chain, including scanning for vulnerable instances, crafting exploits, delivering payloads, and maintaining persistence. Check Point recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.

AI-Powered Cyberattacks Targeting Critical Sectors Disrupted

Anthropic disrupted an AI-powered operation in July 2025 that used its Claude AI chatbot to conduct large-scale theft and extortion across 17 organizations in healthcare, emergency services, government, and religious sectors. The actor used Claude Code on Kali Linux to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The operation, codenamed GTG-2002, employed AI to make tactical and strategic decisions, exfiltrating sensitive data and demanding ransoms ranging from $75,000 to $500,000 in Bitcoin. The actor used AI to craft bespoke versions of the Chisel tunneling utility to evade detection and disguise malicious executables as legitimate Microsoft tools. The operation highlights the increasing use of AI in cyberattacks, making defense and enforcement more challenging. Anthropic developed new detection methods to prevent future abuse of its AI models.

AI systems vulnerable to data-theft via hidden prompts in downscaled images

Researchers at Trail of Bits have demonstrated a new attack method that exploits image downscaling in AI systems to steal user data. The attack injects hidden prompts in full-resolution images that become visible when the images are resampled to lower quality. These prompts are interpreted by AI models as user instructions, potentially leading to data leakage or unauthorized actions. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The attack works by embedding instructions in images that are only revealed when the images are downscaled using specific resampling algorithms. The AI model then interprets these hidden instructions as part of the user's input, executing them without the user's knowledge. The researchers have developed an open-source tool, Anamorpher, to create images for testing this vulnerability. To mitigate the risk, Trail of Bits recommends implementing dimension restrictions on image uploads, providing users with previews of downscaled images, and requiring explicit user confirmation for sensitive tool calls.

Murky Panda, Genesis Panda, and Glacial Panda Target Cloud and Telecom Sectors

Chinese cyber espionage groups Murky Panda, Genesis Panda, and Glacial Panda have escalated their activities targeting cloud and telecom sectors. Murky Panda exploits trusted cloud relationships and zero-day vulnerabilities to breach enterprise networks. They also compromise cloud service providers to gain access to downstream customer environments. Genesis Panda targets cloud services for lateral movement and persistence. Glacial Panda focuses on telecom organizations to exfiltrate call detail records and related telemetry. Murky Panda, also known as Silk Typhoon, has been active since at least 2021, targeting government, technology, academic, legal, and professional services entities in North America. They exploit internet-facing appliances, SOHO devices, and known vulnerabilities in Citrix and Commvault to gain initial access. They deploy web shells and custom malware like CloudedHope to maintain persistence. Genesis Panda, active since January 2024, targets financial services, media, telecommunications, and technology sectors across 11 countries. They exploit cloud-hosted systems for lateral movement and persistence, using compromised credentials to burrow deeper into cloud accounts. Glacial Panda has seen a 130% increase in activity targeting the telecom sector, focusing on Linux systems and legacy operating systems. They exploit known vulnerabilities and weak passwords to gain access and deploy trojanized OpenSSH components for credential harvesting.