
AI-driven cyber threats targeting identity systems

📰 1 unique source, 1 article

Summary


AI-driven cyber threats are evolving rapidly, targeting identity systems with deepfakes, autonomous agents, and synthetic identities. These advanced techniques bypass traditional security measures, making identity the last line of defense. The increasing sophistication of AI tools allows attackers to scale operations quickly, exploit APIs, and create convincing fake identities. This shift necessitates a focus on identity verification to protect against AI-powered threats. The webinar 'AI's New Attack Surface: Why Identity Is the Last Line of Defense' will discuss these emerging vulnerabilities and provide strategies for securing identity systems against AI-driven attacks.

Timeline

  1. 13.08.2025 12:30 📰 1 article

    Webinar on AI-driven identity threats and defenses announced

    A webinar titled 'AI's New Attack Surface: Why Identity Is the Last Line of Defense' will be held to discuss the evolving landscape of AI-driven cyber threats. The webinar will focus on the vulnerabilities created by AI, the mechanics of synthetic identities, and strategies for securing identity systems. It will also provide a blueprint for building secure AI applications and protecting against AI-powered threats.



Similar Happenings

AI-Powered Cyberattacks Targeting Critical Sectors Disrupted

Anthropic disrupted an AI-powered operation in July 2025 that used its Claude AI chatbot to conduct large-scale theft and extortion across 17 organizations in healthcare, emergency services, government, and religious sectors. The actor used Claude Code on Kali Linux to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The operation, codenamed GTG-2002, employed AI to make tactical and strategic decisions, exfiltrating sensitive data and demanding ransoms ranging from $75,000 to $500,000 in Bitcoin. The actor used AI to craft bespoke versions of the Chisel tunneling utility to evade detection and disguise malicious executables as legitimate Microsoft tools. The operation highlights the increasing use of AI in cyberattacks, making defense and enforcement more challenging. Anthropic developed new detection methods to prevent future abuse of its AI models.

AI systems vulnerable to data-theft via hidden prompts in downscaled images

Researchers at Trail of Bits have demonstrated a new attack method that exploits image downscaling in AI systems to steal user data. The attack embeds hidden prompts in full-resolution images that only become visible when the images are resampled to a lower resolution by specific downscaling algorithms in the AI pipeline. The model then interprets the revealed text as part of the user's instructions and executes it without the user's knowledge, potentially leading to data leakage or unauthorized actions. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The researchers have released an open-source tool, Anamorpher, to craft images for testing this vulnerability. To mitigate the risk, Trail of Bits recommends implementing dimension restrictions on image uploads, providing users with previews of downscaled images, and requiring explicit user confirmation for sensitive tool calls.
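
The recommended checks map to a few lines of code in an image-upload path. Below is a minimal sketch, assuming a Python pipeline built on Pillow; the dimension cap, the model input size, the bicubic resampling choice, and the function names are illustrative assumptions rather than anything published by Trail of Bits or the affected vendors.

    # Sketch of the mitigations described above: cap upload dimensions and show
    # the user the exact downscaled image the model will receive before sending.
    from PIL import Image

    MAX_DIM = 1024            # assumed upload limit; tune to the real pipeline
    MODEL_INPUT = (512, 512)  # assumed size the AI backend resamples images to

    def check_upload(path: str) -> Image.Image:
        """Enforce a dimension restriction on uploaded images."""
        img = Image.open(path)
        if img.width > MAX_DIM or img.height > MAX_DIM:
            raise ValueError(f"image {img.size} exceeds the {MAX_DIM}px limit")
        return img

    def preview_model_view(img: Image.Image) -> Image.Image:
        """Downscale with the same algorithm the backend is assumed to use
        (bicubic here) so the user previews exactly what the model will see."""
        return img.resize(MODEL_INPUT, resample=Image.BICUBIC)

    def submit_with_confirmation(path: str, send_to_model) -> None:
        """Require explicit confirmation before the downscaled image is sent."""
        img = check_upload(path)
        preview = preview_model_view(img)
        preview.show()  # inspect for text revealed only at the lower resolution
        if input("Send this image to the AI assistant? [y/N] ").strip().lower() == "y":
            send_to_model(preview)

Previewing the downscaled copy rather than the original is the point: the injected text only appears at the model's input resolution, so a full-resolution preview would show nothing suspicious.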

UNC5518 deploys CORNFLAKE.V3 backdoor via ClickFix and fake CAPTCHA pages

UNC5518, an access-as-a-service threat actor, deploys the CORNFLAKE.V3 backdoor using the ClickFix social engineering tactic and fake CAPTCHA pages. This backdoor is used by at least two other groups, UNC5774 and UNC4108, to initiate multi-stage infections and drop additional payloads. The attack begins with users being tricked into running a malicious PowerShell script via a fake CAPTCHA page. The script executes a dropper payload that ultimately launches CORNFLAKE.V3, which supports various payload types and collects system information. The backdoor has been observed in both JavaScript and PHP versions and uses Cloudflare tunnels to avoid detection. A new ClickFix variant manipulates AI-generated text summaries to deliver malicious commands, turning AI tools into active participants in social engineering attacks.

Cybercriminals exploit Lovable vibe coding service for malicious site creation

Cybercriminals have been exploiting the Lovable vibe coding service to create malicious websites for phishing attacks, crypto scams, and other threats. Lovable, a Stockholm-based startup, launched its AI-powered platform in late 2024 to help users build applications and websites. Since then, tens of thousands of Lovable URLs have been detected in malicious activities, including phishing kits, malware distribution, and credential harvesting. The abuse of Lovable highlights the growing trend of threat actors leveraging AI tools to enhance their attacks. Lovable has implemented new security protections, including Security Checker 2.0, an AI-powered platform safety program, and real-time detection of malicious site creation. Despite these measures, cybercriminals continue to find ways to abuse the platform.

PromptFix exploit enables AI browser deception

A new prompt injection technique, PromptFix, tricks AI-driven browsers into executing malicious actions by embedding hidden instructions in web pages. The exploit targets AI browsers such as Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks like online shopping and email management. Because these assistants are designed to help users quickly and without hesitation, PromptFix can steer them onto phishing sites or fraudulent storefronts, leading to unauthorized purchases or credential theft, a scam landscape Guardio Labs calls Scamlexity. The researchers demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites; similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages, and PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement.

The exploit highlights the need for defenses built into AI systems, including phishing detection, URL reputation checks, and domain spoofing protections. In testing, AI browser agents from major AI firms failed to reliably detect the signs of a phishing site; the agents are gullible and servile, which makes them easy targets in an adversarial setting. Until security matures, users should avoid assigning sensitive tasks to AI browsers and enter sensitive data manually. Companies should move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can reliably complete a workflow, and should hold off on putting agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. AI companies are not expected to pause feature development to improve security, so securing AI falls to organizations: gain visibility into all AI use by employees, create an AI usage policy, and maintain a list of approved tools.
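
As one way to make "doubt, and double verify" concrete, the sketch below gates an AI agent's proposed browser action behind a domain allowlist, a simple look-alike (spoofed-domain) check, and mandatory human confirmation for sensitive steps such as payments or credential entry. The domain list, similarity threshold, action names, and functions are assumptions for illustration only, not the interface of any actual AI browser.

    # Sketch of a verification gate placed in front of an AI browser agent.
    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    TRUSTED_DOMAINS = {"example-shop.com", "mail.example.com"}  # assumed allowlist
    SENSITIVE_ACTIONS = {"submit_payment", "enter_credentials", "download_file"}

    def looks_like_spoof(domain: str, threshold: float = 0.8) -> bool:
        """Flag domains that closely resemble, but do not match, a trusted one."""
        return any(
            domain != trusted
            and SequenceMatcher(None, domain, trusted).ratio() >= threshold
            for trusted in TRUSTED_DOMAINS
        )

    def allow_agent_action(action: str, url: str) -> bool:
        """Block unknown or look-alike domains; keep a human in the loop for
        any sensitive step the agent proposes."""
        domain = urlparse(url).netloc.lower()
        if domain not in TRUSTED_DOMAINS:
            reason = ("resembles a trusted domain (possible spoof)"
                      if looks_like_spoof(domain) else "is not on the allowlist")
            print(f"Blocked: {domain} {reason}")
            return False
        if action in SENSITIVE_ACTIONS:
            prompt = f"Agent wants to {action} on {domain}. Allow? [y/N] "
            return input(prompt).strip().lower() == "y"
        return True

    # Example: an agent lured to a look-alike storefront is stopped before paying.
    allow_agent_action("submit_payment", "https://example-sh0p.com/checkout")

A gate like this does not detect the injected prompt itself; it limits what a deceived agent can do, consistent with the advice above to restrict agents' sensitive actions until their security matures.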