CyberHappenings logo
☰

Track cybersecurity events as they unfold. Sourced timelines, daily updates. Fast, privacy‑respecting. No ads, no tracking.

Lies-in-the-Loop Attack Exploits AI Coding Agents

First reported
Last updated
πŸ“° 1 unique sources, 1 articles

Summary

Hide β–²

A new attack vector, 'Lies-in-the-Loop' (LITL), exploits AI coding agents by manipulating human-in-the-loop (HITL) interactions. Researchers from Checkmarx Zero demonstrated the attack on Anthropic's Claude Code, showing how it can be tricked into executing arbitrary commands. The attack highlights the risks of prompt injection and the potential for software supply chain attacks. The LITL attack persuades AI agents to execute dangerous commands by providing seemingly safe context. It abuses the trust between humans and AI agents, making it difficult for users to detect malicious activities. The attack was successfully tested on developers, demonstrating its potential impact on software development workflows.

Timeline

  1. 15.09.2025 12:11 πŸ“° 1 articles Β· ⏱ 10h ago

    Lies-in-the-Loop Attack Demonstrated on Claude Code

    Researchers from Checkmarx Zero demonstrated the LITL attack on Anthropic's Claude Code, an AI coding assistant. The attack involves tricking the AI agent into executing arbitrary commands by providing fake and seemingly safe context. The researchers successfully executed commands on a Windows machine and tested the attack on developers, showing its potential impact on software development workflows.

    Show sources

Information Snippets

  • The Lies-in-the-Loop (LITL) attack exploits the intersection of agentic tooling and human fallibility.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • The attack was demonstrated on Anthropic's Claude Code, an AI coding assistant known for its safety considerations.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • The LITL attack involves tricking the AI agent into providing fake and seemingly safe context via commanding and explicit language.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • Researchers successfully executed arbitrary commands on a Windows machine using the LITL attack.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • The attack can hide malicious commands by pushing them off the top of the terminal window, making them less visible to users.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • The LITL attack can be used to submit malicious npm packages to GitHub repositories, posing a risk to the software supply chain.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • Organizations are increasingly adopting AI agents, with 79% already integrating them into workflows.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources
  • The LITL attack highlights the need for suspicion and careful management of AI agents in organizational workflows.

    First reported: 15.09.2025 12:11
    πŸ“° 1 source, 1 article
    Show sources