AI-Assisted Code Development Security Practices
Summary
AI-assisted coding tools are increasingly adopted by developers, but they can introduce significant security vulnerabilities if left unchecked. Effective use requires human oversight, security-focused development practices, and AI code checkers that scan and remediate vulnerabilities in real time. Human verification remains crucial to ensuring the security of AI-generated code, and security experts emphasize the need for a robust security program and continuous testing to mitigate the risks of AI-assisted coding.
Timeline
- 19.08.2025 23:25 · 1 article
AI-Assisted Coding Tools Adoption and Security Practices
AI-assisted coding tools are widely adopted and can introduce security vulnerabilities if left unchecked. Organizations are integrating AI code checkers to scan and remediate vulnerabilities in real time, but human oversight and continuous testing remain essential to ensure the security of AI-generated code. Security experts recommend a robust security program and continuous remediation to address the predictable vulnerabilities in AI-generated code.
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
Information Snippets
- AI-assisted coding tools are widely used by developers, with 100% adoption expected.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
- AI-generated code introduces notable vulnerabilities in 45% of tested tasks.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
- Human oversight and verification are essential to ensure the security of AI-generated code.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
- AI code checkers, such as Snyk's 'Secure at Inception' and Veracode's Veracode Fix, scan and remediate vulnerabilities in real time.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
- DARPA's AI Cyber Challenge aims to develop AI tools to address open source software flaws.
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
- Continuous testing and remediation are necessary to address predictable vulnerabilities in AI-generated code (a minimal CI-style sketch follows this list).
First reported: 19.08.2025 23:25 · 1 source, 1 article
- How to Vibe Code With Security in Mind · www.darkreading.com · 19.08.2025 23:25
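The article describes this continuous-scanning workflow only at a high level. As a hedged illustration of what scanning AI-generated changes on every commit might look like, the sketch below runs a static analyzer over the Python files touched in the working tree and fails the check when findings are reported. Semgrep stands in here purely for concreteness; the Snyk and Veracode tools named above would occupy the same slot.

```python
# Minimal sketch: scan files changed in the working tree with a static analyzer.
# Assumes git and semgrep are installed; Snyk or Veracode tooling could be
# substituted for the semgrep call below.
import subprocess
import sys

def changed_files() -> list[str]:
    # Files modified relative to HEAD; adjust the diff range for CI as needed.
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def scan(files: list[str]) -> int:
    if not files:
        print("No changed Python files to scan.")
        return 0
    # '--config auto' pulls registry rules; '--error' makes semgrep exit
    # non-zero when findings are reported, which fails the CI step.
    result = subprocess.run(["semgrep", "--config", "auto", "--error", *files])
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan(changed_files()))
```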
Similar Happenings
Cursor AI editor autoruns malicious code in repositories
A flaw in the Cursor AI editor allows malicious code in repositories to run automatically on developer devices, which can lead to malware execution, environment hijacking, and credential theft. The issue arises because Cursor ships with the Workspace Trust feature inherited from VS Code disabled; that feature would otherwise block tasks from executing automatically without explicit user consent. The flaw affects one million users who generate over a billion lines of code daily. The Cursor team has decided not to fix the issue, citing the need to keep AI and other features working, and recommends that users enable Workspace Trust manually or open unknown projects in a basic text editor. The flaw is part of a broader trend of prompt injections and jailbreaks affecting AI-powered coding tools.
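The article's recommendation is to enable Workspace Trust manually or to open unknown projects in a plain text editor. As an additional, hedged precaution not described in the article, a pre-open check could look for VS Code-style tasks configured to run automatically when a folder opens, which is the autorun mechanism Workspace Trust normally gates. The tasks.json layout assumed below is the standard VS Code schema.

```python
# Hedged sketch: flag VS Code/Cursor tasks configured to auto-run on folder open.
# Assumes the standard .vscode/tasks.json layout with "runOptions": {"runOn": "folderOpen"}.
import json
import sys
from pathlib import Path

def autorun_tasks(repo: Path) -> list[str]:
    tasks_file = repo / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return []
    try:
        config = json.loads(tasks_file.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        # tasks.json may contain comments (JSONC) or be unreadable; review by hand.
        return ["<unreadable tasks.json - inspect manually>"]
    suspicious = []
    for task in config.get("tasks", []):
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            suspicious.append(task.get("label", "<unnamed task>"))
    return suspicious

if __name__ == "__main__":
    repo = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    hits = autorun_tasks(repo)
    if hits:
        print("Tasks set to run automatically on folder open:", ", ".join(hits))
        sys.exit(1)
    print("No folder-open autorun tasks found.")
```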
AI systems vulnerable to data-theft via hidden prompts in downscaled images
Researchers at Trail of Bits have demonstrated an attack that exploits image downscaling in AI systems to steal user data. Hidden prompts are embedded in full-resolution images and only become legible when the images are resampled to a lower resolution with specific algorithms; the AI model then interprets the revealed text as part of the user's instructions, which can lead to data leakage or unauthorized actions without the user's knowledge. The vulnerability affects multiple AI systems, including Google Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Genspark. The researchers have released an open-source tool, Anamorpher, to create images for testing the technique. To mitigate the risk, Trail of Bits recommends restricting the dimensions of image uploads, showing users a preview of the downscaled image the model will actually see, and requiring explicit user confirmation for sensitive tool calls.
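Trail of Bits' mitigation advice centers on showing users exactly what the model will ingest. The sketch below is a hedged illustration of that idea: reproduce the downscaling step locally, using an assumed 512-pixel limit and bicubic resampling (a real service's parameters may differ), and save the preview for review before the image is sent, since the hidden prompt only becomes legible at the reduced size.

```python
# Hedged sketch: reproduce the backend's downscaling step locally so the user can
# preview what the model will actually receive. The 512px limit and bicubic filter
# are assumptions for illustration. Requires Pillow >= 9.1.
from pathlib import Path
from PIL import Image

MAX_SIDE = 512  # assumed backend limit

def make_preview(src: Path, dst: Path) -> Path:
    img = Image.open(src)
    if max(img.size) > MAX_SIDE:
        scale = MAX_SIDE / max(img.size)
        new_size = (int(img.width * scale), int(img.height * scale))
        img = img.resize(new_size, Image.Resampling.BICUBIC)
    img.save(dst)
    return dst

if __name__ == "__main__":
    preview = make_preview(Path("upload.png"), Path("preview.png"))
    print(f"Review {preview} before sending; hidden prompts surface at this scale.")
```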
PromptFix exploit enables AI browser deception
A new prompt injection technique, PromptFix, tricks AI-driven browsers into executing malicious actions by embedding hidden instructions in web pages. The exploit targets AI browsers such as Perplexity's Comet, Microsoft Edge with Copilot, and OpenAI's upcoming 'Aura', which automate tasks like online shopping and email management. PromptFix can deceive AI models into interacting with phishing sites or fraudulent storefronts, potentially leading to unauthorized purchases or credential theft. The technique exploits the AI's design goal of assisting users quickly and without hesitation, creating a new scam landscape the researchers call Scamlexity. Researchers from Guardio Labs demonstrated the exploit by tricking Comet into adding items to a cart and auto-filling payment details on fake shopping sites; similar attacks can manipulate AI browsers into parsing spam emails and entering credentials on phishing pages, and PromptFix can also bypass CAPTCHA checks to download malicious payloads without user involvement.

The exploit highlights the need for robust defenses in AI systems, including phishing detection, URL reputation checks, and domain-spoofing protections. In testing, AI browser agents from major AI firms failed to reliably detect the signs of a phishing site; the agents proved gullible and servile, making them vulnerable in adversarial settings. Until security matures, researchers advise users to avoid assigning sensitive tasks to AI browsers and to enter sensitive data manually, and advise companies to move from "trust, but verify" to "doubt, and double verify" until an AI agent has shown it can reliably complete a workflow. AI companies are not expected to pause feature development to improve security, so companies should hold off on putting AI agents into business processes that require reliability until agent makers offer better visibility, control, and security. Securing AI also requires visibility into all AI use by company workers, an AI usage policy, and a list of approved tools.
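The recommended defenses (phishing detection, URL reputation checks, confirmation for sensitive actions) are only named, not specified, in the source. As a hedged sketch of one such guardrail, the snippet below gates an agent's sensitive actions behind a domain allowlist and an explicit human confirmation instead of letting the model act on whatever a page instructs; the action names and allowlist are illustrative.

```python
# Hedged sketch: gate sensitive agent actions behind a domain allowlist and an
# explicit human confirmation. The allowlist and action names are illustrative.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-store.com", "mail.example.com"}  # illustrative allowlist
SENSITIVE_ACTIONS = {"submit_payment", "enter_credentials", "send_email"}

def allow_action(action: str, page_url: str, confirm) -> bool:
    """Return True only if the action may proceed."""
    domain = urlparse(page_url).hostname or ""
    if action in SENSITIVE_ACTIONS:
        if domain not in TRUSTED_DOMAINS:
            return False  # unknown domain: never act autonomously
        # Even on trusted domains, keep a human in the loop.
        return confirm(f"Allow '{action}' on {domain}?")
    return True

if __name__ == "__main__":
    # A real agent would pass its own confirmation callback; input() stands in here.
    ok = allow_action("submit_payment", "https://fake-shop.test/checkout",
                      confirm=lambda msg: input(msg + " [y/N] ").lower() == "y")
    print("action permitted" if ok else "action blocked")
```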
Black Hat NOC Enhances AI in Security Operations
The Black Hat USA 2025 Network Operations Center (NOC) team expanded its use of AI and machine learning (ML) to manage and secure the conference network. The team monitored and mitigated malicious activities, distinguishing between legitimate and illegal activities, and identified vulnerabilities in applications and misconfigurations in security tools. The NOC team also observed trends such as increased self-hosting and insecure data transmission, which pose risks to organizations. The NOC team leveraged AI for risk scoring and categorization, ensuring that legitimate training activities were not flagged as malicious. They also identified and alerted attendees to security issues, such as misconfigured security tools and vulnerable applications, which could expose sensitive data. Additionally, high school students Sasha Zyuzin and Ruikai Peng presented a new vulnerability discovery framework at Black Hat USA 2025, combining static analysis with AI. Their framework, "Tree of AST," uses Google DeepMind's Tree of Thoughts methodology to automate vulnerability hunting while maintaining human oversight. The presenters discussed the double-edged nature of AI in security, noting that while LLMs can improve code quality, they can also introduce security risks.