Lies-in-the-Loop Attack Exploits AI Coding Agents
Summary
Hide β²
Show βΌ
A new attack vector, 'Lies-in-the-Loop' (LITL), exploits AI coding agents by manipulating human-in-the-loop (HITL) interactions. Researchers from Checkmarx Zero demonstrated the attack on Anthropic's Claude Code, showing how it can be tricked into executing arbitrary commands. The attack highlights the risks of prompt injection and the potential for software supply chain attacks. The LITL attack persuades AI agents to execute dangerous commands by providing seemingly safe context. It abuses the trust between humans and AI agents, making it difficult for users to detect malicious activities. The attack was successfully tested on developers, demonstrating its potential impact on software development workflows.
Timeline
-
15.09.2025 12:11 π° 1 articles Β· β± 10h ago
Lies-in-the-Loop Attack Demonstrated on Claude Code
Researchers from Checkmarx Zero demonstrated the LITL attack on Anthropic's Claude Code, an AI coding assistant. The attack involves tricking the AI agent into executing arbitrary commands by providing fake and seemingly safe context. The researchers successfully executed commands on a Windows machine and tested the attack on developers, showing its potential impact on software development workflows.
Show sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
Information Snippets
-
The Lies-in-the-Loop (LITL) attack exploits the intersection of agentic tooling and human fallibility.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
The attack was demonstrated on Anthropic's Claude Code, an AI coding assistant known for its safety considerations.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
The LITL attack involves tricking the AI agent into providing fake and seemingly safe context via commanding and explicit language.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
Researchers successfully executed arbitrary commands on a Windows machine using the LITL attack.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
The attack can hide malicious commands by pushing them off the top of the terminal window, making them less visible to users.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
The LITL attack can be used to submit malicious npm packages to GitHub repositories, posing a risk to the software supply chain.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
Organizations are increasingly adopting AI agents, with 79% already integrating them into workflows.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11
-
The LITL attack highlights the need for suspicion and careful management of AI agents in organizational workflows.
First reported: 15.09.2025 12:11π° 1 source, 1 articleShow sources
- 'Lies-in-the-Loop' Attack Defeats AI Coding Agents β www.darkreading.com β 15.09.2025 12:11