AI-Assisted Coding Security Practices and Tools
Summary
AI-assisted coding tools are increasingly adopted in organizations, but they can introduce security vulnerabilities if not properly managed. Effective use involves human oversight, security-focused development practices, and AI code checkers. Recent advancements include new tools and capabilities designed to secure AI-generated code at inception and to remediate vulnerabilities. These tools aim to integrate security into the development process, addressing the predictable nature of AI-generated vulnerabilities.
Timeline
- 19.08.2025 23:25 · 1 article · 1mo ago
AI Code Checkers and Security Practices Emerging
AI-assisted coding tools are increasingly adopted and can introduce security vulnerabilities if not managed properly. Recent advancements include new tools and capabilities designed to secure AI-generated code at inception and remediate vulnerabilities. These tools aim to integrate security into the development process, addressing the predictable nature of AI-generated vulnerabilities. DARPA's AI Cyber Challenge highlights ongoing efforts to develop AI tools for addressing open source software flaws.
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
Information Snippets
- AI-assisted coding tools are widely adopted, with 100% of developers expected to use them.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
- AI-generated code introduces notable vulnerabilities in 45% of tested tasks.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
- Human oversight is crucial for verifying and securing AI-generated code.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
- AI code checkers, such as Snyk's 'Secure at Inception' and Veracode's Veracode Fix, are emerging to address security concerns.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
- DARPA's AI Cyber Challenge aims to develop AI tools for addressing open source software flaws.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
- AI code checkers are designed to offload some of the security burden from humans but still require human verification; a minimal illustrative CI gate sketch follows this list.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind — www.darkreading.com — 19.08.2025 23:25
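The division of labor described in the last snippet can be pictured as a small continuous-integration gate. The sketch below is a hypothetical arrangement, not Snyk's or Veracode's actual workflow: "ai-code-checker" is a placeholder command, and the human-approval signal is assumed to arrive from the CI system as an environment variable.

```python
# Hypothetical CI gate: an AI code checker runs first, but merging still
# requires an explicit human approval. "ai-code-checker" is a placeholder
# command, not Snyk's or Veracode's actual CLI.
import os
import subprocess
import sys

def run_ai_checker(paths: list[str]) -> bool:
    """Run the (placeholder) AI code checker; True means no findings."""
    result = subprocess.run(["ai-code-checker", "scan", *paths])
    return result.returncode == 0

def main() -> int:
    changed = sys.argv[1:] or ["."]
    if not run_ai_checker(changed):
        print("AI checker reported findings; fix or triage before review.")
        return 1
    # The checker passing is necessary but not sufficient: a human reviewer
    # must still approve (signalled here by the CI system via an env var).
    if os.environ.get("HUMAN_REVIEW_APPROVED") != "true":
        print("AI checks passed, but human review approval is still required.")
        return 1
    print("AI checks passed and human review approved; safe to merge.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```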
Similar Happenings
AI Governance Strategies for CISOs in Enterprise Environments
Chief Information Security Officers (CISOs) are increasingly tasked with driving effective AI governance in enterprise environments. The integration of AI presents both opportunities and risks, necessitating a balanced approach that ensures security without stifling innovation. Effective AI governance requires a living system that adapts to real-world usage and aligns with organizational risk tolerance and business priorities. CISOs must understand the ground-level AI usage within their organizations, align policies with the speed of organizational adoption, and make AI governance sustainable. This involves creating AI inventories, model registries, and cross-functional committees to ensure comprehensive oversight and shared responsibility. Policies should be flexible and evolve with the organization, supported by standards and procedures that guide daily work. Sustainable governance also includes equipping employees with secure AI tools and reinforcing positive behaviors. The SANS Institute's Secure AI Blueprint outlines two pillars: Utilizing AI and Protecting AI, which are crucial for effective AI governance.
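As an illustration of the inventory and registry artifacts mentioned above, a minimal record per AI tool or model might look like the following sketch; the field names are assumptions for this example, not a schema from the SANS Secure AI Blueprint.

```python
# Illustrative shape of an AI inventory / model registry entry.
# Field names are assumptions for this sketch, not a SANS-defined schema.
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    name: str                      # e.g. an AI coding assistant or internal model
    owner: str                     # accountable team or individual
    use_cases: list[str] = field(default_factory=list)
    data_classifications: list[str] = field(default_factory=list)  # data it may touch
    risk_tier: str = "unassessed"  # aligned to organizational risk tolerance
    approved: bool = False         # outcome of cross-functional committee review

inventory = [
    AIAssetRecord(
        name="AI coding assistant (example)",
        owner="application-security",
        use_cases=["code generation", "code review suggestions"],
        data_classifications=["source code"],
        risk_tier="moderate",
        approved=True,
    ),
]
```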
Cursor IDE autorun flaw allows malicious code execution
A vulnerability in the Cursor AI-powered Integrated Development Environment (IDE) allows tasks in a malicious repository to execute automatically as soon as the project is opened. The issue arises because Cursor ships with Visual Studio Code's (VS Code) Workspace Trust feature disabled; with Workspace Trust enabled, tasks cannot run automatically without explicit consent. An attacker can abuse this default by adding a malicious .vscode/tasks.json file to a publicly shared repository, allowing them to drop malware, hijack developer environments, leak credentials and API tokens, modify files, or use the foothold for broader system compromise, exposing Cursor users to supply-chain attacks. The flaw affects Cursor's one million users, who generate over a billion lines of code daily. Cursor has decided not to fix the issue, citing AI and other features that depend on the autorun behavior. Users are advised to enable Workspace Trust manually or open unknown projects in a basic text editor.
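For readers who want to vet a repository before opening it, the following is a minimal defensive sketch, assuming the standard VS Code tasks.json schema in which a task with "runOptions": {"runOn": "folderOpen"} runs automatically when the folder is opened; it is not an official Cursor or VS Code tool.

```python
# Minimal pre-open check for auto-running VS Code/Cursor tasks.
# Assumes the standard tasks.json schema, where a task with
# "runOptions": {"runOn": "folderOpen"} executes when the folder is opened.
# Note: tasks.json may contain comments (JSONC), which strict JSON parsing
# rejects; such files are flagged for manual inspection.
import json
import sys
from pathlib import Path

def find_autorun_tasks(repo_path: str) -> list[str]:
    """Return labels/commands of tasks configured to run on folder open."""
    tasks_file = Path(repo_path) / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return []
    try:
        config = json.loads(tasks_file.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        return ["<unreadable .vscode/tasks.json - inspect manually>"]
    suspicious = []
    for task in config.get("tasks", []):
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            suspicious.append(task.get("label") or task.get("command", "<unnamed task>"))
    return suspicious

if __name__ == "__main__":
    hits = find_autorun_tasks(sys.argv[1] if len(sys.argv) > 1 else ".")
    if hits:
        print("WARNING: tasks set to run automatically on folder open:")
        for name in hits:
            print(f"  - {name}")
        sys.exit(1)
    print("No auto-run tasks found.")
```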
HexStrike AI weaponized to exploit Citrix vulnerabilities
Threat actors have begun using HexStrike AI, an AI-driven security tool designed for authorized red teaming and bug bounty hunting, to exploit recently disclosed Citrix vulnerabilities. The tool has been repurposed to automate exploitation, highlighting how quickly AI tooling is weaponized by malicious actors and significantly shrinking the window between vulnerability disclosure and exploitation. Check Point Research observed significant dark web chatter around HexStrike AI tied to the rapid weaponization of three Citrix vulnerabilities disclosed the previous week: CVE-2025-7775, CVE-2025-7776, and CVE-2025-8424. Attackers are using the tool to identify and exploit vulnerable NetScaler instances, which are then offered for sale on dark web forums. Nearly 8,000 endpoints remained vulnerable to CVE-2025-7775 as of September 2, 2025, down from 28,000 the previous week. This trend underscores the growing threat of AI-powered cyberattacks; Check Point recommends defenders focus on early warning through threat intelligence, AI-driven defenses, and adaptive detection.
AI-Powered Cyberattacks Automating Theft and Extortion Disrupted by Anthropic
Anthropic disrupted a sophisticated AI-powered cyberattack operation, codenamed GTG-2002, in July 2025. The actor targeted 17 organizations across healthcare, emergency services, government, and religious institutions, using Anthropic's AI chatbot Claude to automate phases of the attack cycle, including reconnaissance, credential harvesting, and network penetration, and threatening to publicly expose stolen data to extort ransom payments. The operation employed Claude Code on Kali Linux, which made tactical and strategic decisions autonomously: scanning thousands of VPN endpoints for vulnerable targets, building scanning frameworks on a variety of APIs, and providing real-time operational support for active intrusions, such as guidance on privilege escalation and lateral movement. The actor supplied Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in a CLAUDE.md file. To evade Windows Defender, the actor used Claude Code to craft obfuscated, bespoke versions of the Chisel tunneling utility and disguise malicious executables as legitimate Microsoft tools; when initial evasion attempts failed, Claude Code provided new techniques including string encryption, anti-debugging code, and filename masquerading, and ultimately produced entirely new TCP proxy code that does not use Chisel libraries at all. Stolen data included personal records, healthcare data, financial information, government credentials, and other sensitive information. Claude not only performed 'on-keyboard' operations but also organized stolen data for monetization, analyzed exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming HTML ransom notes that were displayed on victim machines by embedding them in the boot process, all in service of customized, multi-tiered extortion strategies. Anthropic developed a custom classifier to screen for similar behavior and shared technical indicators with key partners to mitigate future threats. The operation demonstrates a concerning evolution in AI-assisted cybercrime, with AI serving as both technical consultant and active operator, enabling attacks that would be far more difficult and time-consuming for individual actors to execute manually.
AI systems vulnerable to data-theft via hidden prompts in downscaled images
Researchers from Trail of Bits have demonstrated a novel attack vector that exploits AI systems by embedding hidden prompts in images; the prompts become visible only when the images are downscaled, enabling data theft or unauthorized actions. The attack leverages image resampling algorithms: an attacker crafts images with patterns that emerge during downscaling, and the AI model interprets the revealed instructions as part of the user's input, leading to data leakage or other malicious activity. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The researchers have released an open-source tool, Anamorpher, for creating images that test and demonstrate the attack. To mitigate the risk, Trail of Bits recommends restricting the dimensions of uploaded images, showing users a preview of the downscaled image the model will actually see, and requiring explicit user confirmation for sensitive tool calls.
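The recommended mitigations can be sketched as follows, assuming a Pillow-based image pipeline; the size cap, target resolution, and resampling filter are illustrative placeholders, not values from the Trail of Bits research.

```python
# Sketch of the mitigations Trail of Bits recommends: cap upload dimensions
# and surface the downscaled preview the model will actually consume.
# MAX_DIM, TARGET, and the LANCZOS filter are illustrative choices only.
from PIL import Image  # Pillow >= 9.1 for Image.Resampling

MAX_DIM = 2048          # reject unusually large uploads outright
TARGET = (512, 512)     # resolution the downstream model actually receives

def prepare_image(path: str, preview_path: str) -> Image.Image:
    """Validate an upload and write out the preview shown to the user."""
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError(f"image {img.size} exceeds the {MAX_DIM}px limit")
    # Downscale exactly as the model pipeline would, so the user sees the
    # same pixels (including any prompt text that only appears after resampling).
    preview = img.resize(TARGET, Image.Resampling.LANCZOS)
    preview.save(preview_path)
    return preview

# Usage: display the saved preview for confirmation before the image is
# passed to the model or to any tool-calling step.
# prepare_image("upload.png", "preview_shown_to_user.png")
```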