Growing AI Agent and Non-Human Identity Management Challenges

📰 1 unique source, 1 article

Summary


The rapid proliferation of AI agents and non-human identities (NHIs) presents significant security challenges. NHIs already outnumber human identities 82:1, and the integration of agentic AI is widening that gap. The growth is driven by automation projects and modern architectures such as microservices and serverless cloud computing. AI agents pose distinct challenges, including unintended harmful actions, adversarial attacks, and the risk of attackers breaching the logic of defensive systems. The window of opportunity to secure these tools and their data is closing rapidly. Recent demonstrations and research highlight various attack vectors, including data poisoning, jailbreaking, prompt injection, and abandoned agents. The Noma Security team and Aim Labs have identified specific vulnerabilities and attack methods, underscoring the urgent need for robust security measures.

Timeline

  1. 22.08.2025 17:00 📰 1 article

    AI Agents and NHIs Present Growing Security Challenges

    The rapid proliferation of AI agents and non-human identities (NHIs) is driven by automation and modern architectures. NHIs outnumber human identities 82:1, and the integration of agentic AI exacerbates the issue. Recent demonstrations and research highlight various attack vectors, including data poisoning, jailbreaking, prompt injection, and abandoned agents. The Noma Security team and Aim Labs have identified specific vulnerabilities and attack methods, underscoring the urgent need for robust security measures.


Information Snippets

  • AI agents and NHIs are proliferating rapidly, with NHIs outnumbering human identities 82:1.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • AI agents present unique security challenges due to their autonomy and access levels.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • Adversarial attacks on AI agents are anticipated, including jailbreaking and logic breaches.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • Recent demonstrations showed data poisoning, jailbreaking, and prompt injection attacks.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • The Cloud Security Alliance and OWASP have identified multiple attack vectors against AI agents.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • Abandoned agents, including orphan and zombie agents, pose significant security risks.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • Noma Security identified a vulnerability allowing exfiltration of sensitive data via malicious proxy settings.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article
  • Aim Labs discovered the EchoLeak zero-click vulnerability in Microsoft 365 Copilot.

    First reported: 22.08.2025 17:00
    📰 1 source, 1 article

Similar Happenings

AI-Powered Cyberattacks Targeting Critical Sectors Disrupted

Anthropic disrupted an AI-powered operation in July 2025 that used its Claude AI chatbot to conduct large-scale theft and extortion across 17 organizations in healthcare, emergency services, government, and religious sectors. The actor used Claude Code on Kali Linux to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The operation, codenamed GTG-2002, employed AI to make tactical and strategic decisions, exfiltrating sensitive data and demanding ransoms ranging from $75,000 to $500,000 in Bitcoin. The actor used AI to craft bespoke versions of the Chisel tunneling utility to evade detection and disguise malicious executables as legitimate Microsoft tools. The operation highlights the increasing use of AI in cyberattacks, making defense and enforcement more challenging. Anthropic developed new detection methods to prevent future abuse of its AI models.

AI systems vulnerable to data theft via hidden prompts in downscaled images

Researchers at Trail of Bits have demonstrated a new attack method that exploits image downscaling in AI systems to steal user data. The attack embeds hidden prompts in full-resolution images that only become visible when the images are resampled to a lower resolution by specific downscaling algorithms. The AI model then interprets the revealed instructions as part of the user's input and executes them without the user's knowledge, potentially leading to data leakage or unauthorized actions. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The researchers have developed an open-source tool, Anamorpher, to create images for testing this vulnerability. To mitigate the risk, Trail of Bits recommends implementing dimension restrictions on image uploads, providing users with previews of downscaled images, and requiring explicit user confirmation for sensitive tool calls.
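The core risk is the gap between the full-resolution image a user sees and the downscaled version the model actually receives. A minimal sketch of the first two mitigations Trail of Bits suggests, a dimension limit on uploads plus a preview of the exact downscaled image, might look like the following; the size constants and the resampling choice are illustrative assumptions, not part of Anamorpher or any specific product.

```python
# Minimal mitigation sketch (assumptions: Pillow is available, and the serving
# stack downscales uploads with bicubic resampling before the model sees them).
from PIL import Image

MAX_DIM = 1024            # illustrative dimension restriction on uploads
MODEL_INPUT = (512, 512)  # illustrative resolution actually fed to the model


def prepare_image(path: str) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError(f"image exceeds {MAX_DIM}px limit: {img.size}")

    # Downscale with the same filter the pipeline uses, so the preview shown to
    # the user matches exactly what the model will receive.
    downscaled = img.resize(MODEL_INPUT, Image.Resampling.BICUBIC)
    downscaled.show()  # preview: a prompt revealed by resampling becomes
                       # visible to the user instead of only to the model
    return downscaled
```

The third recommendation, explicit confirmation for sensitive tool calls, sits outside this snippet in the agent's tool-invocation layer.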

UNC5518 deploys CORNFLAKE.V3 backdoor via ClickFix and fake CAPTCHA pages

UNC5518, an access-as-a-service threat actor, deploys the CORNFLAKE.V3 backdoor using the ClickFix social engineering tactic and fake CAPTCHA pages. This backdoor is used by at least two other groups, UNC5774 and UNC4108, to initiate multi-stage infections and drop additional payloads. The attack begins with users being tricked into running a malicious PowerShell script via a fake CAPTCHA page. The script executes a dropper payload that ultimately launches CORNFLAKE.V3, which supports various payload types and collects system information. The backdoor has been observed in both JavaScript and PHP versions and uses Cloudflare tunnels to avoid detection. A new ClickFix variant manipulates AI-generated text summaries to deliver malicious commands, turning AI tools into active participants in social engineering attacks.
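Because the chain begins with lure text, either a fake CAPTCHA page or, in the newer variant, an AI-generated summary, instructing the victim to run a PowerShell command, one defensive layer is to screen such text for ClickFix-style indicators. The sketch below is a rough heuristic under that assumption; the patterns and the two-hit threshold are illustrative, not a production detector.

```python
import re

# Illustrative ClickFix-style lure indicators; a real detector would combine
# far richer signals (URL reputation, page structure, process telemetry).
LURE_PATTERNS = [
    r"win\s*\+\s*r",                       # "press Win+R" run-dialog instructions
    r"powershell(\.exe)?\s+-",             # PowerShell invocations with flags
    r"-enc(odedcommand)?\b",               # base64-encoded command payloads
    r"\biex\s*\(",                         # Invoke-Expression shorthand
    r"verify (that )?you are (a )?human",  # fake CAPTCHA framing
]


def looks_like_clickfix(text: str) -> bool:
    """Flag page text or an AI-generated summary that resembles a ClickFix lure."""
    hits = [p for p in LURE_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return len(hits) >= 2  # require multiple indicators to limit false positives
```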

PromptFix exploit enables AI browser deception

A new prompt injection technique, PromptFix, tricks AI-driven browsers into executing malicious actions by embedding hidden instructions in web pages. The exploit targets AI browsers such as Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks such as online shopping and email management. PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft. The technique exploits the AI's design goal of assisting users quickly and without hesitation, creating a new scam landscape the researchers call Scamlexity. Researchers from Guardio Labs demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites; similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages. PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement. The findings underscore the need for AI systems to anticipate and neutralize such attacks with defenses such as phishing detection, URL reputation checks, and domain spoofing protections. AI browser agents from major AI firms have failed to reliably detect the signs of a phishing site, and because agents are gullible and servile by design, they remain vulnerable in adversarial settings. Until security matures, users should avoid assigning sensitive tasks to AI browsers and should enter sensitive data manually. Companies should move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can reliably complete a workflow, and should hold off on putting AI agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. AI companies are not expected to pause feature development to improve security, so securing AI use falls to organizations: gain visibility into all AI use by employees, create an AI usage policy, and maintain a list of approved tools.
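Since PromptFix relies on instructions hidden in page content that a human never sees but the agent ingests, one complementary defense to the phishing and URL checks above is to strip invisible elements before page text reaches the model. The sketch below is a minimal illustration using BeautifulSoup; the hidden-style markers it looks for are assumptions and will not catch every hiding technique (for example, off-screen positioning or text matching the background color).

```python
# Minimal sketch: drop content a human cannot see before handing page text to
# an AI browser agent. Assumes BeautifulSoup (bs4) is available; the list of
# hidden-style markers is illustrative and far from exhaustive.
from bs4 import BeautifulSoup

HIDDEN_STYLES = ("display:none", "visibility:hidden", "opacity:0", "font-size:0")


def visible_text_only(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")

    # Script/style/noscript bodies are common hiding spots for injected prompts.
    for tag in soup(["script", "style", "noscript", "template"]):
        tag.decompose()

    for tag in soup.find_all(True):
        if tag.decomposed:  # skip descendants of an element already removed
            continue
        style = (tag.get("style") or "").replace(" ", "").lower()
        if (tag.get("hidden") is not None
                or tag.get("aria-hidden") == "true"
                or any(marker in style for marker in HIDDEN_STYLES)):
            tag.decompose()

    return soup.get_text(" ", strip=True)
```

Filtering like this only narrows the attack surface; as the guidance above notes, sensitive steps such as payment or credential entry should still fall back to the human user until agent security matures.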