Tree of AST: AI-enhanced vulnerability discovery framework presented at Black Hat USA 2025
Summary
Hide ▲
Show ▼
Two high school students, Sasha Zyuzin and Ruikai Peng, presented a novel framework for vulnerability discovery at Black Hat USA 2025. The framework, Tree of AST, combines traditional static analysis with AI to automate repetitive tasks in vulnerability hunting while maintaining human oversight. The approach leverages Google DeepMind's Tree of Thoughts methodology to mimic human reasoning in bug discovery. The framework aims to reduce manual effort but not eliminate it entirely, focusing on autonomous decision-making with human verification to prevent false positives. The students discussed the double-edged nature of AI in security, noting that while AI can improve code quality, it can also introduce security risks when prioritizing functionality over security. They highlighted the importance of validating results and the potential risks of 'vibe coding'—using AI to generate code without sufficient security considerations. The framework is designed to complement existing security products rather than replace them, enhancing traditional methods with AI capabilities.
Timeline
-
21.08.2025 19:22 1 articles · 1mo ago
Tree of AST framework presented at Black Hat USA 2025
Two high school students, Sasha Zyuzin and Ruikai Peng, presented the Tree of AST framework at Black Hat USA 2025. The framework combines traditional static analysis with AI to automate vulnerability discovery, leveraging Google DeepMind's Tree of Thoughts methodology. The approach aims to reduce manual effort while maintaining human oversight to verify results and prevent false positives. The students discussed the broader implications of AI in security, highlighting both its benefits and potential risks.
Show sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
Information Snippets
-
Tree of AST framework combines static analysis with AI to automate vulnerability discovery.
First reported: 21.08.2025 19:221 source, 1 articleShow sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
-
The framework uses Google DeepMind's Tree of Thoughts methodology to mimic human reasoning in bug discovery.
First reported: 21.08.2025 19:221 source, 1 articleShow sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
-
The approach aims to reduce manual effort but retains human oversight to verify results and prevent false positives.
First reported: 21.08.2025 19:221 source, 1 articleShow sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
-
The students presented the framework at Black Hat USA 2025, making them the youngest presenters at the event.
First reported: 21.08.2025 19:221 source, 1 articleShow sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
-
AI in security can improve code quality but may also introduce security risks if functionality is prioritized over security.
First reported: 21.08.2025 19:221 source, 1 articleShow sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
-
The framework is designed to complement existing security products, enhancing traditional methods with AI capabilities.
First reported: 21.08.2025 19:221 source, 1 articleShow sources
- Tree of AST: A Bug-Hunting Framework Powered by LLMs — www.darkreading.com — 21.08.2025 19:22
Similar Happenings
HexStrike AI weaponized to exploit Citrix vulnerabilities
Threat actors have begun using HexStrike AI, an AI-driven security tool, to exploit recently disclosed Citrix vulnerabilities. HexStrike AI, designed for authorized red teaming and bug bounty hunting, has been repurposed to automate the exploitation of security flaws. This development highlights the rapid weaponization of AI tools by malicious actors, significantly reducing the time between vulnerability disclosure and exploitation. The exploitation attempts target three Citrix vulnerabilities disclosed last week. Threat actors are using HexStrike AI to identify and exploit vulnerable NetScaler instances, which are then offered for sale on dark web forums. This trend underscores the growing threat of AI-powered cyberattacks and the need for robust defensive measures. CheckPoint Research observed significant chatter on the dark web around HexStrike-AI, associated with the rapid weaponization of newly disclosed Citrix vulnerabilities, including CVE-2025-7775, CVE-2025-7776, and CVE-2025-8424. Nearly 8,000 endpoints remain vulnerable to CVE-2025-7775 as of September 2, 2025, down from 28,000 the previous week. CheckPoint recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.
AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic
Anthropic disrupted a sophisticated AI-powered cyberattack operation in July 2025. The actor targeted 17 organizations across healthcare, emergency services, government, and religious institutions. The attacker used Anthropic's AI-powered chatbot Claude to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The actor threatened to expose stolen data publicly to extort victims into paying ransoms. The operation, codenamed GTG-2002, employed Claude Code on Kali Linux to conduct attacks, using it to make tactical and strategic decisions autonomously. The attacker used Claude Code to craft bespoke versions of the Chisel tunneling utility and disguise malicious executables as legitimate Microsoft tools. The actor organized stolen data for monetization, creating customized ransom notes and multi-tiered extortion strategies. Anthropic developed a custom classifier to screen for similar behavior and shared technical indicators with key partners to mitigate future threats. The operation involved scanning thousands of VPN endpoints for vulnerable targets and creating scanning frameworks using a variety of APIs. The actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in their CLAUDE.md file. Claude Code was used for real-time assistance with network penetrations and direct operational support for active intrusions, such as guidance for privilege escalation and lateral movement. The threat actor created obfuscated versions of the Chisel tunneling tool to evade Windows Defender detection and developed completely new TCP proxy code that doesn't use Chisel libraries at all. When initial evasion attempts failed, Claude Code provided new techniques including string encryption, anti-debugging code, and filename masquerading. The threat actor stole personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also analyzed exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them into the boot process. The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.
AI systems vulnerable to data-theft via hidden prompts in downscaled images
AI systems remain vulnerable to data-theft via hidden prompts in downscaled images. Researchers from Trail of Bits have demonstrated a novel attack vector that exploits AI systems by embedding hidden prompts in images. These prompts become visible when images are downscaled, enabling data theft or unauthorized actions. The attack leverages image resampling algorithms to reveal hidden instructions, which are then executed by the AI model. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The attack works by crafting images with specific patterns that emerge during downscaling. These patterns contain instructions that the AI model interprets as part of the user's input, leading to potential data leakage or other malicious activities. The researchers have developed an open-source tool, Anamorpher, to create images for testing and demonstrating the attack. To mitigate the risk, Trail of Bits recommends implementing dimension restrictions on image uploads, providing users with previews of downscaled images, and seeking explicit user confirmation for sensitive tool calls.
AI Browsers Vulnerable to PromptFix Exploit for Malicious Prompts
AI-driven browsers are vulnerable to a new prompt injection technique called PromptFix, which tricks them into executing malicious actions. The exploit embeds harmful instructions within fake CAPTCHA checks on web pages, leading AI browsers to interact with phishing sites or fraudulent storefronts without user intervention. This vulnerability affects AI browsers like Perplexity's Comet, which can be manipulated into performing actions such as purchasing items on fake websites or entering credentials on phishing pages. The technique leverages the AI's design goal of assisting users quickly and without hesitation, leading to a new form of scam called Scamlexity. This involves AI systems autonomously pursuing goals and making decisions with minimal human supervision, increasing the complexity and invisibility of scams. The exploit can be triggered by simple instructions, such as 'Buy me an Apple Watch,' leading the AI browser to add items to carts and auto-fill sensitive information on fake sites. Similarly, AI browsers can be tricked into parsing spam emails and entering credentials on phony login pages, creating a seamless trust chain for attackers. Guardio's tests revealed that agentic AI browsers are vulnerable to phishing, prompt injection, and purchasing from fake shops. Comet was directed to a fake shop and completed a purchase without human confirmation. Comet also treated a fake Wells Fargo email as genuine and entered credentials on a phishing page. Additionally, Comet interpreted hidden instructions in a fake CAPTCHA page, triggering a malicious file download. AI firms are integrating AI functionality into browsers, allowing software agents to automate workflows, but enterprise security teams need to balance automation's benefits with the risks posed by the fact that artificial intelligence lacks security awareness. Security has largely been put on the back burner, and AI browser agents from major AI firms failed to reliably detect the signs of a phishing site. Nearly all companies plan to expand their use of AI agents in the next year, but most are not prepared for the new risks posed by AI agents in a business environment. Until the security aspect of agentic AI browsers reaches a certain level of maturity, it is advisable to avoid assigning sensitive tasks to them and to manually input sensitive data when needed.