PromptFix exploit enables AI browser deception
Summary
Hide ▲
Show ▼
A new prompt injection technique, PromptFix, tricks AI-driven browsers into executing malicious actions by embedding hidden instructions in web pages. The exploit targets AI browsers like Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks such as online shopping and email management. PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft. The technique exploits the AI's design goal to assist users quickly and without hesitation, creating a new scam landscape called Scamlexity. Researchers from Guardio Labs demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites. Similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages. PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement. The exploit highlights the need for robust defenses in AI systems to anticipate and neutralize such attacks, including phishing detection, URL reputation checks, and domain spoofing protections. Until security matures, users should avoid assigning sensitive tasks to AI browsers and manually input sensitive data when needed. AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. AI agents are gullible and servile, making them vulnerable to attacks in an adversarial setting. Companies should move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can always complete a workflow properly. AI companies are not expected to pause developing more functionality to improve security. Companies should hold off on putting AI agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. Securing AI requires gaining visibility into all AI use by company workers and creating an AI usage policy and a list of approved tools.
Timeline
-
20.08.2025 16:01 📰 3 articles
PromptFix exploit demonstrated on AI browsers
A new prompt injection technique, PromptFix, has been demonstrated to trick AI-driven browsers into executing malicious actions. The exploit targets AI browsers like Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks such as online shopping and email management. PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft. The technique exploits the AI's design goal to assist users quickly and without hesitation, creating a new scam landscape called Scamlexity. Researchers from Guardio Labs demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites. Similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages. PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement. The exploit highlights the need for robust defenses in AI systems to anticipate and neutralize such attacks, including phishing detection, URL reputation checks, and domain spoofing protections. Comet was tricked into buying an Apple Watch from a fake Walmart site, manipulated into interacting with a phishing email, and deceived by a fake CAPTCHA page to download a malicious file. Guardio Labs recommends avoiding assigning sensitive tasks to AI browsers until security matures, and manually inputting sensitive data when needed. AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. AI agents are gullible and servile, making them vulnerable to attacks in an adversarial setting. Companies should move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can always complete a workflow properly. AI companies are not expected to pause developing more functionality to improve security. Companies should hold off on putting AI agents into any business process that requires reliability until AI-agent makers offer better visibility, control, and security. Securing AI requires gaining visibility into all AI use by company workers and creating an AI usage policy and a list of approved tools.
Show sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
Information Snippets
-
PromptFix exploits AI browsers by embedding malicious instructions in web pages.
First reported: 20.08.2025 16:01📰 3 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
The exploit targets AI-driven browsers like Perplexity's Comet, which automate tasks such as online shopping and email management.
First reported: 20.08.2025 16:01📰 3 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft.
First reported: 20.08.2025 16:01📰 3 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
The technique exploits the AI's design goal to assist users quickly and without hesitation, creating a new scam landscape called Scamlexity.
First reported: 20.08.2025 16:01📰 1 source, 1 articleShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
-
Researchers demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites.
First reported: 20.08.2025 16:01📰 3 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
Similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages.
First reported: 20.08.2025 16:01📰 3 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
PromptFix can bypass CAPTCHA checks to download malicious payloads without user involvement.
First reported: 20.08.2025 16:01📰 3 sources, 3 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- Perplexity’s Comet AI browser tricked into buying fake items online — www.bleepingcomputer.com — 20.08.2025 19:31
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
-
The exploit highlights the need for robust defenses in AI systems to anticipate and neutralize such attacks.
First reported: 20.08.2025 16:01📰 2 sources, 2 articlesShow sources
- Experts Find AI Browsers Can Be Tricked by PromptFix Exploit to Run Malicious Hidden Prompts — thehackernews.com — 20.08.2025 16:01
- AI Agents in Browsers Light on Cybersecurity, Bypass Controls — www.darkreading.com — 26.08.2025 23:53
Similar Happenings
Cursor AI editor autoruns malicious code in repositories
A flaw in the Cursor AI editor allows malicious code in repositories to autorun on developer devices. This vulnerability can lead to malware execution, environment hijacking, and credential theft. The issue arises from Cursor disabling the Workspace Trust feature from VS Code, which prevents automatic task execution without explicit user consent. The flaw affects one million users who generate over a billion lines of code daily. The Cursor team has decided not to fix the issue, citing the need to maintain AI and other features. They recommend users enable Workspace Trust manually or use basic text editors for unknown projects. The flaw is part of a broader trend of prompt injections and jailbreaks affecting AI-powered coding tools.
GhostRedirector Compromises 65 Windows Servers Using Rungan Backdoor and Gamshen IIS Module
GhostRedirector, a previously undocumented threat cluster, has compromised at least 65 Windows servers primarily in Brazil, Thailand, and Vietnam. The attacks, active since at least August 2024, deployed the Rungan backdoor and Gamshen IIS module. Rungan executes commands on compromised servers, while Gamshen manipulates search engine results for SEO fraud. The threat actor targets various sectors, including education, healthcare, technology, transportation, insurance, and retail, using SQL injection vulnerabilities for initial access. The group is assessed with medium confidence to be China-aligned. The operation involves using PowerShell to download malware tools and exploits like EfsPotato and BadPotato for privilege escalation.
Malicious link spreading via X's Grok AI
Threat actors exploit X's Grok AI to bypass link posting restrictions and spread malicious links. They embed links in the 'From:' metadata field of video ads, prompting Grok to reveal the links in replies. This technique, dubbed 'Grokking,' boosts the credibility and reach of malicious content, leading users to scams and malware. The abuse affects millions of users, with Grok's trusted status amplifying the spread of malicious ads. Potential solutions include scanning all fields, blocking hidden links, and sanitizing Grok's responses to prevent it from echoing malicious links. The malicious links are part of a Traffic Distribution System (TDS) used by malicious ad tech vendors, and the operation involves hundreds of organized accounts. The Grok 4 model's security is fundamentally weaker than its competitors, relying heavily on system prompts that can be easily bypassed.
HexStrike AI Exploits Citrix Vulnerabilities Disclosed in August 2025
Threat actors have begun using HexStrike AI to exploit Citrix vulnerabilities disclosed in August 2025. HexStrike AI, an AI-driven security platform, was designed to automate reconnaissance and vulnerability discovery for authorized red teaming operations, but it has been repurposed for malicious activities. The exploitation attempts target three Citrix vulnerabilities, with some threat actors offering access to vulnerable NetScaler instances for sale on darknet forums. The use of HexStrike AI by threat actors significantly reduces the time between vulnerability disclosure and exploitation, increasing the risk of widespread attacks. The tool's automation capabilities allow for continuous exploitation attempts, enhancing the likelihood of successful breaches. Security experts emphasize the urgency of patching and hardening affected systems to mitigate the risks posed by this AI-driven threat. HexStrike AI's client features a retry logic and recovery handling to mitigate the effects of failures in any individual step on its complex operations. HexStrike AI has been open-source and available on GitHub for the last month, where it has already garnered 1,800 stars and over 400 forks. Hackers started discussing HexStrike AI on hacking forums within hours of the Citrix vulnerabilities disclosure. HexStrike AI has been used to automate the exploitation chain, including scanning for vulnerable instances, crafting exploits, delivering payloads, and maintaining persistence. Check Point recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.
APT29 Watering Hole Campaign Targeting Microsoft Device Code Authentication
Amazon disrupted an APT29 watering hole campaign targeting Microsoft device code authentication. The campaign compromised websites to redirect visitors to malicious infrastructure, aiming to trick users into authorizing attacker-controlled devices. The operation leveraged various phishing methods and evasion techniques to harvest credentials and gather intelligence. APT29, a Russia-linked state-sponsored hacking group, used compromised websites to inject JavaScript that redirected visitors to actor-controlled domains mimicking Cloudflare verification pages. The campaign aimed to entice victims into entering a legitimate device code into a sign-in page, granting attackers access to Microsoft accounts and data. The activity involved Base64 encoding to conceal malicious code, setting cookies to prevent repeated redirects, and shifting to new infrastructure when blocked. Amazon's intervention led to the registration of additional domains by the actor, continuing the campaign's objectives. The campaign reflects an evolution in APT29's technical approach, no longer relying on domains that impersonate AWS or social engineering attempts to bypass multi-factor authentication (MFA).