
Black Hat NOC Enhances AI in Security Operations

📰 1 unique source, 2 articles

Summary


The Black Hat USA 2025 Network Operations Center (NOC) team expanded its use of AI and machine learning (ML) to manage and secure the conference network. The team monitored and mitigated malicious activity while distinguishing real attacks from legitimate training exercises, and it identified vulnerable applications and misconfigured security tools among attendees. It also observed trends that pose risks to organizations, including increased self-hosting and insecure data transmission. AI was used for risk scoring and categorization so that legitimate training activity was not flagged as malicious, and attendees were alerted to issues, such as misconfigured security tools and vulnerable applications, that could expose sensitive data. Separately, high school students Sasha Zyuzin and Ruikai Peng presented a new vulnerability discovery framework, "Tree of AST," at Black Hat USA 2025. The framework combines static analysis with AI, using Google DeepMind's Tree of Thoughts methodology to automate vulnerability hunting while maintaining human oversight. The presenters discussed the double-edged nature of AI in security: LLMs can improve code quality, but they can also introduce security risks.
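
How risk scoring and allowlisting of training traffic might look in practice can be illustrated with a minimal sketch. Everything below, the indicator weights, thresholds, and classroom subnets, is hypothetical and is not taken from the Black Hat NOC's actual tooling.

```python
# Minimal sketch of risk scoring and categorization for NOC alerts.
# Indicator weights, thresholds, and the training allowlist are hypothetical.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Hypothetical subnets assigned to official training classrooms.
TRAINING_NETWORKS = [ip_network("10.10.0.0/16")]

RISK_WEIGHTS = {  # hypothetical indicator weights
    "cleartext_credentials": 0.5,
    "known_malware_c2": 0.9,
    "port_scan": 0.3,
    "exploit_signature": 0.7,
}

@dataclass
class Event:
    src_ip: str
    indicators: list[str]

def categorize(event: Event) -> str:
    """Return 'training', 'benign', 'suspicious', or 'malicious'."""
    src = ip_address(event.src_ip)
    score = min(1.0, sum(RISK_WEIGHTS.get(i, 0.1) for i in event.indicators))
    # Expected attack-like traffic from classroom subnets is categorized
    # separately instead of being flagged as malicious.
    if any(src in net for net in TRAINING_NETWORKS) and score < 0.9:
        return "training"
    if score >= 0.7:
        return "malicious"
    if score >= 0.3:
        return "suspicious"
    return "benign"

if __name__ == "__main__":
    print(categorize(Event("10.10.4.2", ["port_scan"])))        # -> training
    print(categorize(Event("192.0.2.7", ["known_malware_c2"]))) # -> malicious
```

In a real deployment the score would come from a trained model and the allowlist from the conference's network plan; the point is only that expected classroom traffic is routed to its own category rather than suppressed outright.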

Timeline

  1. 21.08.2025 19:22 📰 1 article

    Innovative AI-Based Vulnerability Discovery Framework Presented

    High school students Sasha Zyuzin and Ruikai Peng presented a new vulnerability discovery framework, "Tree of AST," at Black Hat USA 2025. The framework combines static analysis with AI to automate vulnerability hunting while maintaining human oversight: it aims to reduce manual effort in vulnerability discovery but does not eliminate the need for human verification (a rough sketch of this kind of pipeline appears after the timeline below). The presenters discussed the double-edged nature of AI in security, noting that while LLMs can improve code quality, they can also introduce security risks. They acknowledged the dangers of "vibe coding," the potential for LLMs to introduce vulnerabilities, and the importance of validating results to avoid false positives.

  2. 11.08.2025 21:48 📰 1 article

    Black Hat NOC Expands AI Implementation Across Security Operations

    The Black Hat USA 2025 NOC team expanded its use of AI and ML to manage and secure the conference network. The team monitored and mitigated malicious activity while distinguishing real attacks from legitimate training traffic, and it identified vulnerable applications and misconfigured security tools. It also observed trends that pose risks to organizations, such as increased self-hosting and insecure data transmission.

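The "Tree of AST" framework itself has not been published in the coverage above, but the general approach it describes, static analysis to enumerate candidate code paths plus an LLM to score and expand only the most promising branches in a Tree of Thoughts style, can be sketched roughly as follows. The sink list, the scoring stub, and all function names are invented for illustration; a real pipeline would call an actual model and still route results to a human reviewer.

```python
# Rough sketch of combining static analysis (AST traversal) with an LLM-guided,
# tree-of-thoughts-style search for suspect code paths. "Tree of AST" is not
# public, so the heuristics and names here are invented.
import ast
import heapq

SUSPECT_CALLS = {"eval", "exec", "system", "popen", "loads"}  # hypothetical sink list

def candidate_nodes(source: str):
    """Static-analysis pass: yield call sites that touch potentially dangerous sinks."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in SUSPECT_CALLS:
                yield node

def llm_score(snippet: str) -> float:
    """Placeholder for an LLM judging exploitability of a snippet (0.0 - 1.0).
    A real implementation would query a model; a human still reviews the output."""
    return 0.9 if "eval" in snippet else 0.4

def rank_candidates(source: str, top_k: int = 3):
    """Tree-of-thoughts-style pruning: keep only the highest-scored branches."""
    scored = []
    for node in candidate_nodes(source):
        snippet = ast.get_source_segment(source, node) or ""
        heapq.heappush(scored, (-llm_score(snippet), node.lineno, snippet))
    return [heapq.heappop(scored) for _ in range(min(top_k, len(scored)))]

if __name__ == "__main__":
    code = "import os\nuser = input()\neval(user)\nos.system('ls')\n"
    for neg_score, line, snippet in rank_candidates(code):
        print(f"line {line}: {snippet!r} (score {-neg_score:.1f}) -> needs human verification")
```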


Similar Happenings

Axios Abuse and Salty 2FA Kits in Microsoft 365 Phishing Campaigns

Threat actors are leveraging HTTP client tools like Axios and Microsoft's Direct Send feature to execute advanced phishing campaigns targeting Microsoft 365 environments. These campaigns have demonstrated a 70% success rate, bypassing traditional security defenses and exploiting authentication workflows. The attacks began in July 2025 and have targeted executives and managers in sectors including finance, healthcare, and manufacturing. The campaigns use compensation-themed lures to trick recipients into opening malicious PDFs containing QR codes that direct users to fake login pages. In parallel, a phishing-as-a-service (PhaaS) offering called Salty2FA is being used to steal Microsoft login credentials and bypass multi-factor authentication (MFA). The Salty2FA kit includes subdomain rotation, dynamic corporate branding, and sophisticated evasion tactics to avoid detection. Salty2FA activity began gaining momentum in June 2025, with early traces possibly dating back to March–April 2025; the campaigns have been active since late July 2025 and continue to generate dozens of fresh analysis sessions daily. Targets span finance, energy, telecom, healthcare, government, logistics, IT consulting, education, construction, chemicals, industrial manufacturing, real estate, consulting, metallurgy, and more.
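
One defensive counterpart to the QR-code lures described above is to decode QR codes found in inbound attachments before users scan them and compare the embedded URL's host with the login domains the organization actually uses. The sketch below relies on the third-party pyzbar and Pillow packages; the allowlist and file path are placeholders, not indicators from the campaign.

```python
# Sketch: extract URLs from a QR code image (e.g., rendered from a PDF lure) and
# flag anything that does not point at an expected login domain.
# Requires the third-party pyzbar and Pillow packages; values below are placeholders.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

EXPECTED_LOGIN_DOMAINS = {"login.microsoftonline.com"}  # hypothetical allowlist

def check_qr_image(path: str) -> list[str]:
    """Return a list of suspicious URLs decoded from QR codes in the image."""
    suspicious = []
    for symbol in decode(Image.open(path)):
        url = symbol.data.decode("utf-8", errors="replace")
        host = urlparse(url).hostname or ""
        if host not in EXPECTED_LOGIN_DOMAINS:
            suspicious.append(url)
    return suspicious

if __name__ == "__main__":
    for url in check_qr_image("attachment_page1.png"):  # placeholder path
        print("QR code points at unexpected host:", url)
```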

HexStrike AI Exploits Citrix Vulnerabilities Disclosed in August 2025

Threat actors have begun using HexStrike AI to exploit Citrix vulnerabilities disclosed in August 2025. HexStrike AI, an AI-driven security platform, was designed to automate reconnaissance and vulnerability discovery for authorized red-team operations, but it has been repurposed for malicious activity. The exploitation attempts target three Citrix vulnerabilities, with some threat actors offering access to vulnerable NetScaler instances for sale on darknet forums. Use of HexStrike AI significantly reduces the time between vulnerability disclosure and exploitation, and its automation enables continuous exploitation attempts, raising the likelihood of successful breaches and widespread attacks. Security experts stress the urgency of patching and hardening affected systems. The HexStrike AI client includes retry logic and recovery handling so that a failure in any individual step does not derail its multi-stage operations. The tool has been open source on GitHub for the past month, where it has already garnered 1,800 stars and over 400 forks, and hackers began discussing it on hacking forums within hours of the Citrix disclosure. HexStrike AI has been used to automate the full exploitation chain: scanning for vulnerable instances, crafting exploits, delivering payloads, and maintaining persistence. Check Point recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.
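
The retry logic and recovery handling attributed to the HexStrike AI client is a generic resilience pattern rather than anything exotic. A minimal, hypothetical version of such a wrapper (not HexStrike's actual code) looks like the following, which is also why AI-driven chains can keep working against a target through transient failures.

```python
# Generic retry-with-backoff wrapper of the kind described for the HexStrike AI client.
# This is an illustrative pattern, not HexStrike's actual implementation.
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def run_with_retries(step: Callable[[], T], attempts: int = 3, base_delay: float = 1.0) -> T:
    """Run one step of a multi-stage workflow, retrying on failure so a single
    transient error does not abort the whole chain."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return step()
        except Exception as exc:  # recovery handling: log, back off, try again
            last_exc = exc
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            print(f"step failed ({exc!r}); retrying in {delay:.1f}s")
            time.sleep(delay)
    raise RuntimeError("step failed after all retries") from last_exc

if __name__ == "__main__":
    import urllib.request
    # Benign example: fetch a page, tolerating transient network errors (placeholder URL).
    body = run_with_retries(lambda: urllib.request.urlopen("https://example.com", timeout=5).read())
    print(len(body), "bytes fetched")
```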

APT29 Watering Hole Campaign Targeting Microsoft Device Code Authentication

Amazon disrupted an APT29 watering-hole campaign targeting Microsoft device code authentication. The campaign compromised websites to redirect visitors to malicious infrastructure, aiming to trick users into authorizing attacker-controlled devices. APT29, a Russia-linked state-sponsored hacking group, injected JavaScript into compromised websites that redirected visitors to actor-controlled domains mimicking Cloudflare verification pages, enticing victims to enter a legitimate device code into a sign-in page and thereby granting the attackers access to Microsoft accounts and data. The activity used Base64 encoding to conceal malicious code, set cookies to prevent repeated redirects, and shifted to new infrastructure when blocked. After Amazon's intervention, the actor registered additional domains to continue the campaign. The operation leveraged various phishing methods and evasion techniques to harvest credentials and gather intelligence, and it reflects an evolution in APT29's technical approach: the group no longer relies on domains that impersonate AWS or on social engineering to bypass multi-factor authentication (MFA).
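
Defenders looking for the kind of injected, Base64-concealed redirect scripts described above can start with a simple heuristic scan of page content. The thresholds and keywords in this sketch are illustrative only and are not signatures taken from the actual APT29 campaign.

```python
# Heuristic sketch: flag <script> content that hides redirect logic behind Base64,
# as described for the watering-hole activity. Thresholds and keywords are illustrative.
import base64
import re

B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{80,}")  # long Base64-looking strings
REDIRECT_HINTS = ("window.location", "location.href", "document.cookie")

def suspicious_scripts(html: str) -> list[str]:
    """Return decoded snippets from script blocks whose Base64 payload looks like a redirect."""
    findings = []
    for script in re.findall(r"<script[^>]*>(.*?)</script>", html, re.S | re.I):
        for blob in B64_BLOB.findall(script):
            try:
                decoded = base64.b64decode(blob, validate=True).decode("utf-8", "ignore")
            except Exception:
                continue  # not valid Base64 after all
            if any(hint in decoded for hint in REDIRECT_HINTS):
                findings.append(decoded[:120])
    return findings

if __name__ == "__main__":
    payload = base64.b64encode(b"window.location='https://attacker.example/verify';" + b" " * 40).decode()
    print(suspicious_scripts(f"<script>eval(atob('{payload}'))</script>"))
```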

Malicious nx Packages Exfiltrate 2,349 GitHub, Cloud, and AI Credentials in Supply Chain Attack

A supply chain attack on the nx build system compromised multiple npm packages, leading to the exfiltration of 2,349 GitHub, cloud, and AI credentials. The attack unfolded in three distinct phases, impacting 2,180 accounts and 7,200 repositories, and affected Linux and macOS systems. The root cause was a vulnerable workflow added to the nx repository on August 21, 2025, that allowed executable code to be injected via a pull request title; the attackers leveraged the GITHUB_TOKEN to trigger the publish workflow, exfiltrate the npm token, and publish malicious versions of the nx package and supporting plugins on August 26, 2025. The malicious postinstall script scanned file systems for text files and credentials, used AI-powered CLI tools to dynamically hunt for high-value secrets, sent the stolen data to publicly accessible, attacker-controlled GitHub repositories, and modified .zshrc and .bashrc files to shut the machine down as soon as a terminal was opened. The initial wave impacted over 1,346 repositories and more than 1,000 developers, exfiltrating around 20,000 sensitive files, and took roughly four hours from start to finish. Wiz researchers identified a second wave affecting over 190 users and organizations and more than 3,000 repositories, in which private repositories were made public and forks were created to preserve data. GitGuardian's analysis found that 33% of compromised systems had at least one LLM client installed and 85% were running Apple macOS. The attack was detected by multiple cybersecurity vendors; the malicious packages were removed from npm at 2:44 a.m. UTC on August 27, 2025, and GitHub disabled all singularity-repository instances by 9 a.m. UTC the same day. The nx maintainers rotated npm and GitHub tokens, audited activity, and updated publish access to require two-factor authentication, but around 90% of leaked GitHub tokens remained active as of August 28, 2025.
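
Because the compromise hinged on a malicious postinstall hook, one low-effort defensive check is to enumerate which installed packages declare npm lifecycle scripts at all, so they can be reviewed before scripts are allowed to run. The sketch below walks a node_modules tree using only the Python standard library; the path is a placeholder.

```python
# Sketch: list npm packages under node_modules that declare lifecycle scripts
# (preinstall/install/postinstall), the hook abused in the nx compromise.
import json
from pathlib import Path

LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_scripts(node_modules: str = "node_modules"):
    """Yield (package name, declared lifecycle hooks) for review."""
    for manifest in Path(node_modules).rglob("package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, ValueError):  # unreadable file or invalid JSON
            continue
        if not isinstance(data, dict):
            continue
        scripts = data.get("scripts") or {}
        hooks = {h: scripts[h] for h in LIFECYCLE_HOOKS if h in scripts}
        if hooks:
            yield data.get("name", manifest.parent.name), hooks

if __name__ == "__main__":
    for name, hooks in packages_with_install_scripts():
        print(name, hooks)
```

Installing with `npm install --ignore-scripts` and reviewing the output of a check like this before re-enabling scripts is a common hardening step against this class of attack.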

AI-Powered Cyberattacks Targeting Critical Sectors Disrupted

Anthropic disrupted an AI-powered operation in July 2025 that used its Claude AI chatbot to conduct large-scale theft and extortion across 17 organizations in healthcare, emergency services, government, and religious sectors. The actor used Claude Code on Kali Linux to automate various phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration. The operation, codenamed GTG-2002, employed AI to make tactical and strategic decisions, exfiltrating sensitive data and demanding ransoms ranging from $75,000 to $500,000 in Bitcoin. The actor used AI to craft bespoke versions of the Chisel tunneling utility to evade detection and disguise malicious executables as legitimate Microsoft tools. The operation highlights the increasing use of AI in cyberattacks, making defense and enforcement more challenging. Anthropic developed new detection methods to prevent future abuse of its AI models.