
Claude Code Security Reviews Introduced for AI-Assisted Development

📰 1 unique source, 1 article

Summary


Anthropic has introduced security review features in its Claude Code platform. These features aim to integrate security checks directly into AI-assisted development workflows, addressing vulnerabilities in AI-generated code. The new capabilities include ad hoc vulnerability checks and automated reviews of code changes, focusing on common issues such as SQL injection and cross-site scripting. The reviews are designed to complement existing security tools and human reviews rather than replace them. Security experts emphasize the need for a multi-layered approach to secure software development, especially as AI introduces new attack surfaces and vulnerabilities.
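
To make those issue classes concrete, the sketch below shows the canonical SQL injection pattern that an automated reviewer of this kind would flag, next to the parameterized fix it would typically suggest. This is an illustration, not code from Anthropic's announcement; the function and table names are invented.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # VULNERABLE: untrusted input is spliced into the SQL text, so a
        # value like "' OR '1'='1" rewrites the query itself.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # FIXED: a parameterized query keeps the input as data, never as SQL.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()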

Timeline

  1. 22.08.2025 16:05 📰 1 article

    Claude Code Introduces Security Review Features for AI-Assisted Development

    Anthropic has launched security review capabilities in its Claude Code platform. These features enable automated vulnerability checks and reviews in AI-assisted development workflows. The reviews focus on common issues such as SQL injection and cross-site scripting, and can be integrated with GitHub Actions to run automated reviews on code changes. Security experts emphasize the need for a multi-layered approach to secure software development, using AI-assisted reviews as a complement to existing tools and human reviews.


Information Snippets

  • Claude Code's new security review feature enables ad hoc vulnerability checks and automated reviews of code changes.

    First reported: 22.08.2025 16:05
    📰 1 source, 1 article
  • The reviews focus on common vulnerabilities such as SQL injection, cross-site scripting, and insecure data handling (an illustrative example follows this list).

    First reported: 22.08.2025 16:05
    📰 1 source, 1 article
  • The feature can be integrated with GitHub Actions to trigger reviews on every pull request.

    First reported: 22.08.2025 16:05
    📰 1 source, 1 article
  • Experts recommend using AI-assisted reviews as a complement to human reviews and other security tools.

    First reported: 22.08.2025 16:05
    📰 1 source, 1 article
  • The new capabilities are part of a broader trend towards platform engineering, embedding security directly into development tools and pipelines.

    First reported: 22.08.2025 16:05
    📰 1 source, 1 article
  • AI-assisted development introduces new attack surfaces and vulnerabilities, such as those related to the Model Context Protocol (MCP).

    First reported: 22.08.2025 16:05
    📰 1 source, 1 article
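
As an illustration of the cross-site scripting and insecure data handling issues listed above, the sketch below contrasts the vulnerable pattern with the escaping fix. This is a hypothetical example of the kind of finding such reviews produce, not code from the announcement.

    from html import escape

    def render_comment_unsafe(comment: str) -> str:
        # VULNERABLE: untrusted text is placed directly into HTML, so a
        # comment like "<script>steal()</script>" runs in the victim's browser.
        return f"<div class='comment'>{comment}</div>"

    def render_comment_safe(comment: str) -> str:
        # FIXED: escaping turns <, >, &, and quotes into inert entities
        # before the text reaches the page.
        return f"<div class='comment'>{escape(comment)}</div>"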

Similar Happenings

Cursor AI editor autoruns malicious code in repositories

A flaw in the Cursor AI editor allows malicious code in repositories to autorun on developer devices, which can lead to malware execution, environment hijacking, and credential theft. The issue arises because Cursor ships with VS Code's Workspace Trust feature disabled; Workspace Trust is the mechanism that blocks automatic task execution until the user explicitly trusts a folder. The flaw affects Cursor's roughly one million users, who generate over a billion lines of code daily. The Cursor team has decided not to fix the issue, citing the need to keep AI and other features working, and recommends that users enable Workspace Trust manually or open unknown projects in a basic text editor. The flaw is part of a broader trend of prompt injections and jailbreaks affecting AI-powered coding tools.
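
The autorun path described above exists because VS Code tasks can declare "runOptions": {"runOn": "folderOpen"} in .vscode/tasks.json, which Workspace Trust would normally block in untrusted folders. As a purely illustrative precaution (a hypothetical sketch, not a tool from the report), a script like the following could flag such tasks in a cloned repository before it is opened in an editor:

    import json
    import re
    import sys
    from pathlib import Path

    def autorun_tasks(repo: Path) -> list[str]:
        """Report VS Code tasks configured to run as soon as the folder opens."""
        findings = []
        for tasks_file in repo.rglob(".vscode/tasks.json"):
            text = tasks_file.read_text(errors="replace")
            # tasks.json is JSONC; crudely strip full-line // comments so
            # json.loads can parse it (trailing comments will still fail).
            text = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
            try:
                config = json.loads(text)
            except json.JSONDecodeError:
                findings.append(f"{tasks_file}: unparseable, inspect manually")
                continue
            for task in config.get("tasks", []):
                if task.get("runOptions", {}).get("runOn") == "folderOpen":
                    label = task.get("label", "?")
                    findings.append(f"{tasks_file}: task {label!r} autoruns on folder open")
        return findings

    if __name__ == "__main__":
        for finding in autorun_tasks(Path(sys.argv[1])):
            print(finding)

An empty result is not proof of safety; the point is only that the dangerous configuration is plain text and can be checked before trusting a folder.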

HexStrike AI Exploits Citrix Vulnerabilities Disclosed in August 2025

Threat actors have begun using HexStrike AI to exploit Citrix vulnerabilities disclosed in August 2025. HexStrike AI, an AI-driven security platform designed to automate reconnaissance and vulnerability discovery for authorized red-teaming operations, has been repurposed for malicious activity. The exploitation attempts target three Citrix vulnerabilities, and some threat actors are offering access to vulnerable NetScaler instances for sale on darknet forums. By automating the work, HexStrike AI sharply reduces the time between vulnerability disclosure and exploitation, and its ability to run exploitation attempts continuously raises the likelihood of successful breaches; security experts stress the urgency of patching and hardening affected systems. HexStrike AI's client includes retry logic and recovery handling, so a failure in any single step does not derail its multi-stage operations. The tool has been open source on GitHub for about a month, where it has already garnered 1,800 stars and over 400 forks, and hackers began discussing it on hacking forums within hours of the Citrix disclosures. It has been used to automate the full exploitation chain: scanning for vulnerable instances, crafting exploits, delivering payloads, and maintaining persistence. Check Point recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.
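
The retry and recovery behavior matters because it makes a multi-step exploit chain resilient: a transient failure is reattempted instead of aborting the run. The generic sketch below shows the standard exponential-backoff pattern being described; it is an illustration of the technique, not HexStrike AI's actual code.

    import random
    import time

    def run_with_retry(step, max_attempts: int = 5, base_delay: float = 1.0):
        """Run one pipeline step, retrying with exponential backoff and jitter."""
        for attempt in range(1, max_attempts + 1):
            try:
                return step()
            except Exception as exc:
                if attempt == max_attempts:
                    raise  # recovery exhausted; surface the failure
                delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 1)
                print(f"attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
                time.sleep(delay)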

AI systems vulnerable to data-theft via hidden prompts in downscaled images

Researchers at Trail of Bits have demonstrated a new attack method that exploits image downscaling in AI systems to steal user data. The attack injects hidden prompts in full-resolution images that become visible when the images are resampled to lower quality. These prompts are interpreted by AI models as user instructions, potentially leading to data leakage or unauthorized actions. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The attack works by embedding instructions in images that are only revealed when the images are downscaled using specific resampling algorithms. The AI model then interprets these hidden instructions as part of the user's input, executing them without the user's knowledge. The researchers have developed an open-source tool, Anamorpher, to create images for testing this vulnerability. To mitigate the risk, Trail of Bits recommends implementing dimension restrictions on image uploads, providing users with previews of downscaled images, and requiring explicit user confirmation for sensitive tool calls.