
Tree of AST: AI-Assisted Bug Hunting Framework

📰 1 unique source, 1 article

Summary


High school students Sasha Zyuzin and Ruikai Peng presented a novel vulnerability discovery framework, Tree of AST, at Black Hat USA 2025. The framework combines traditional static analysis with AI to automate the repetitive manual steps of vulnerability hunting while maintaining human oversight, and it leverages Google DeepMind's Tree of Thoughts methodology to mimic human reasoning during bug discovery. The creators emphasize that human verification remains necessary to prevent false positives and to address the risks introduced by AI-generated code.
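
The article describes the approach only at a high level, but the core loop is easy to sketch. The toy below is a minimal illustration, not the actual Tree of AST implementation: it parses Python source into an AST, scores every node as a candidate finding (the score() stub stands in for the LLM reasoning step that a Tree of Thoughts-style search would drive), explores the most promising branches first, and hands anything above a threshold to a human for verification rather than reporting it automatically. All names, the sink list, and the heuristic are assumptions made for illustration.

import ast
import heapq
from dataclasses import dataclass, field

# Example "sink" calls worth hypothesizing about (illustrative only).
SINKS = {"eval", "exec", "system", "popen"}

@dataclass(order=True)
class Hypothesis:
    priority: float                      # negated score: heapq is a min-heap
    node: ast.AST = field(compare=False)
    note: str = field(compare=False, default="")

def score(node: ast.AST) -> float:
    """Stand-in for the LLM step: rate how plausible a finding is.
    A real system would prompt a model with the node's surrounding
    context; a trivial heuristic keeps this sketch self-contained."""
    if isinstance(node, ast.Call):
        name = getattr(node.func, "id", getattr(node.func, "attr", ""))
        if name in SINKS:
            return 1.0
    return 0.0

def hunt(source: str, expand: int = 3) -> list[tuple[str, float]]:
    tree = ast.parse(source)
    frontier = [Hypothesis(-score(n), n) for n in ast.walk(tree)]
    heapq.heapify(frontier)
    findings = []
    for _ in range(expand):              # explore best branches first
        if not frontier:
            break
        h = heapq.heappop(frontier)
        if -h.priority >= 0.5:
            findings.append((ast.unparse(h.node), -h.priority))
    # Candidates go to a human analyst; nothing is reported automatically.
    return findings

if __name__ == "__main__":
    snippet = "import os\nos.system(cmd)\nprint('done')"
    print(hunt(snippet))

Run on the two-line snippet, this flags os.system(cmd) with the top score while print() falls below the threshold, mirroring the stated division of labor: the machine ranks hypotheses, the human confirms them.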

Timeline

  1. 21.08.2025 19:22 📰 1 article · ⏱ 26d ago

    Tree of AST: AI-Assisted Bug Hunting Framework Presented at Black Hat USA 2025

    High school students Sasha Zyuzin and Ruikai Peng presented the Tree of AST framework at Black Hat USA 2025. The framework combines traditional static analysis with AI to automate vulnerability discovery while maintaining human oversight. The creators discussed the risks of AI-generated code and the need for human verification to prevent false positives. Tree of AST is designed to complement existing static analysis frameworks and enhance traditional cybersecurity tasks with AI capabilities.


Information Snippets

  • Tree of AST framework integrates Google DeepMind's Tree of Thoughts methodology to automate vulnerability discovery.

    First reported: 21.08.2025 19:22
    📰 1 source, 1 article
  • The framework aims to reduce manual oversight in vulnerability hunting but retains human verification to prevent false positives.

    First reported: 21.08.2025 19:22
    📰 1 source, 1 article
  • The creators highlight the risks of AI-generated code, noting that practices such as 'vibe coding' can introduce security vulnerabilities.

    First reported: 21.08.2025 19:22
    📰 1 source, 1 article
  • Tree of AST is designed to complement existing static analysis frameworks and methodologies.

    First reported: 21.08.2025 19:22
    📰 1 source, 1 article
  • The framework is intended to enhance traditional cybersecurity tasks, such as fuzzing and static and dynamic analysis, with AI capabilities.

    First reported: 21.08.2025 19:22
    📰 1 source, 1 article

Similar Happenings

Citrix NetScaler ADC and Gateway vulnerabilities patched and actively exploited in the wild

Citrix has released patches for three vulnerabilities in NetScaler ADC and NetScaler Gateway. The flaws include memory overflow vulnerabilities and improper access control issues, and they affect specific configurations of NetScaler ADC and NetScaler Gateway, including unsupported, end-of-life versions. Citrix has confirmed that one of the flaws, CVE-2025-7775, is actively exploited in the wild and can lead to remote code execution or denial of service. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added CVE-2025-7775 to its Known Exploited Vulnerabilities (KEV) catalog, requiring federal agencies to remediate within 48 hours; the catalog now lists 10 NetScaler flaws, six of them discovered in the last two years. Nearly 20% of identified NetScaler assets run unsupported, end-of-life versions, with a significant concentration in North America and the APAC region. Threat actors are using HexStrike AI, an AI-driven security platform, to exploit the Citrix vulnerabilities, significantly reducing the time between disclosure and mass exploitation. HexStrike AI was created by cybersecurity researcher Muhammad Osama and has been open source on GitHub for the past month, where it has already garnered 1,800 stars and over 400 forks.

AI systems vulnerable to data-theft prompts in downscaled images

Researchers have demonstrated a new attack method that steals user data by embedding malicious prompts in images. The prompts are invisible at full resolution but become legible once an AI system downscales the image: the attack exploits aliasing artifacts introduced by resampling algorithms, so hidden text emerges and is interpreted by the model as user instructions, which can lead to data leakage or unauthorized actions. The method has been successfully tested against several AI systems, including Google Gemini CLI, Vertex AI Studio, Gemini's web interface, Gemini's API, Google Assistant on Android, and Genspark. The attack was developed by Kikimora Morozova and Suha Sabi Hussain from Trail of Bits, building on a theory presented in a 2020 USENIX paper. The researchers have also released an open-source tool, Anamorpher, to create images for testing the attack, and they recommend dimension restrictions and user confirmation for sensitive tool calls as mitigations.
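
The recommended mitigations map naturally to a pre-processing guard. The sketch below is a minimal illustration of that idea, not code from the Trail of Bits research or from Anamorpher: it rejects oversized inputs and previews the exact downscaled image the model would receive, so any text that emerges from resampling is visible to the user before the model acts on it. The size limits, resampling filter, and function name are illustrative assumptions.

from PIL import Image  # pip install Pillow

MAX_DIM = 512      # example policy: reject inputs larger than this
MODEL_SIZE = 336   # assumed model input resolution

def prepare_image(path: str) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        # Dimension restriction: very large inputs leave room for patterns
        # that only become legible after downscaling.
        raise ValueError(f"image {img.size} exceeds the {MAX_DIM}px policy")
    # Downscale with the same resampling filter the model pipeline uses,
    # then show the user exactly what the model will see; prompts hidden
    # via aliasing become visible here, before any tool call runs.
    preview = img.resize((MODEL_SIZE, MODEL_SIZE), Image.Resampling.BICUBIC)
    preview.show()  # user-confirmation step
    return preview

Previewing the downscaled image rather than the original is the key design choice: the attack depends on the model and the user seeing different content, and the guard removes that gap.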