CyberHappenings logo

Track cybersecurity events as they unfold. Sourced timelines. Filter, sort, and browse. Fast, privacy‑respecting. No invasive ads, no tracking.

Indirect prompt injection in Grafana AI rendering enables silent data exfiltration

First reported
Last updated
2 unique sources, 2 articles

Summary

Hide ▲

A novel attack chain dubbed GrafanaGhost exploits indirect prompt injection via Grafana's AI rendering components to silently extract sensitive enterprise data. The exploit is triggered when manipulated inputs—crafted via image tags, protocol-relative URLs, and the "INTENT" keyword—bypass domain validation and AI guardrails, enabling stealthy real-time transmission of financial metrics, infrastructure health data, and customer records. Grafana Labs has patched the vulnerability in its Markdown component’s image renderer, though the company disputes claims of zero-click operation, asserting exploitation required substantial user interaction and AI warnings.

Timeline

  1. 07.04.2026 17:00 2 articles · 6h ago

    GrafanaGhost exploit in AI-enabled Grafana environments used for silent data exfiltration

    Further technical details of the GrafanaGhost attack chain are confirmed, including the use of image tags via protocol-relative URLs to bypass domain validation in Grafana’s Markdown component, and the "INTENT" keyword to disable AI model guardrails. Grafana Labs has patched the specific vulnerability in the Markdown image renderer following responsible disclosure by Noma Security. Grafana disputes the "zero-click" characterization, asserting that exploitation required significant user interaction and repeated instructions to the AI assistant, even after warnings, though no evidence of in-the-wild exploitation or data leakage from Grafana Cloud exists.

    Show sources

Information Snippets

Similar Happenings

Indirect Prompt Injection Vulnerabilities in ChatGPT Models

Researchers from Tenable discovered seven vulnerabilities in OpenAI's ChatGPT models (GPT-4o and GPT-5) that enable attackers to extract personal information from users' memories and chat histories. These vulnerabilities allow for indirect prompt injection attacks, which manipulate the AI's behavior to execute unintended or malicious actions. OpenAI has addressed some of these issues, but several vulnerabilities persist. The vulnerabilities include indirect prompt injection via trusted sites, zero-click indirect prompt injection in search contexts, and prompt injection via crafted links. Other techniques involve bypassing safety mechanisms, injecting malicious content into conversations, hiding malicious prompts, and poisoning user memories. The vulnerabilities affect the 'bio' feature, which allows ChatGPT to remember user details and preferences across chat sessions, and the 'open_url' command-line function, which leverages SearchGPT to access and render website content. Attackers can exploit the 'url_safe' endpoint by using Bing click-tracking URLs to lure users to phishing sites or exfiltrate user data. These findings highlight the risks associated with exposing AI chatbots to external tools and systems, which expand the attack surface for threat actors. The vulnerabilities stem from how ChatGPT ingests and processes instructions from external sources, allowing attackers to exploit these flaws through various methods. The most concerning issue is a zero-click vulnerability, where simply asking ChatGPT a benign question can trigger an attack if the search results include a poisoned website.

Google Gemini AI Vulnerabilities Allowing Prompt Injection and Data Exfiltration

Researchers disclosed multiple vulnerabilities in Google's Gemini AI assistant that could have exposed users to privacy risks and data theft. The flaws, collectively named the Gemini Trifecta, affected Gemini Cloud Assist, the Search Personalization Model, and the Browsing Tool. These vulnerabilities allowed for prompt injection attacks, search-injection attacks, and data exfiltration. Google has since patched the issues and implemented additional security measures. Additionally, a zero-click vulnerability in Gemini Enterprise, dubbed 'GeminiJack', was discovered in June 2025, allowing attackers to exfiltrate corporate data via indirect prompt injection. Google addressed this flaw by separating Vertex AI Search from Gemini Enterprise and updating their interaction with retrieval and indexing systems. A new prompt injection flaw in Google Gemini allowed attackers to bypass authorization guardrails and use Google Calendar as a data extraction mechanism. The flaw enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction. The attack involved a malicious payload hidden within a standard calendar invite, which was activated when a user asked Gemini about their schedule. The flaw allowed Gemini to create a new calendar event and write a full summary of the target user's private meetings in the event's description. The issue was addressed following responsible disclosure, highlighting the need for evaluating large language models across key safety and security dimensions. Additionally, a high-severity flaw in Google's implementation of Gemini AI in the Chrome browser, tracked as CVE-2026-0628, could allow attackers to escalate privileges, violate user privacy, and access sensitive system resources. The flaw was discovered by researchers from Palo Alto Networks' Unit 42 and was patched by Google in early January. The vulnerabilities highlight the potential risks of AI tools being used as attack vectors rather than just targets.

ShadowLeak: Undetectable Email Theft via AI Agents

A new attack vector, dubbed ShadowLeak, allows hackers to invisibly steal emails from users who integrate AI agents like ChatGPT with their email inboxes. The attack exploits the lack of visibility into AI processing on cloud infrastructure, making it undetectable to the user. The vulnerability was discovered by Radware and reported to OpenAI, which addressed it in August 2025. The attack involves embedding malicious code in emails, which the AI agent processes and acts upon without user awareness. The attack leverages an indirect prompt injection hidden in email HTML, using techniques like tiny fonts, white-on-white text, and layout tricks to remain undetected by the user. The attack can be extended to any connector that ChatGPT supports, including Box, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint. A new variant of this attack, dubbed ZombieAgent, was discovered by Zvika Babo at Radware. This technique exploits a weakness in OpenAI's URL-modification defenses by leveraging pre-constructed, static URLs to exfiltrate sensitive data from ChatGPT one character at a time. The attack flow involves extracting sensitive data, normalizing it, and exfiltrating it character by character by opening pre-defined URLs in sequence. The vulnerability was reported to OpenAI via BugCrowd in September 2025 and fixed in mid-December 2025.