Widespread Security Misconfigurations in Self-Hosted AI Infrastructure Expose Millions of Hosts
Summary
A security analysis of over 1 million exposed AI services across 2 million hosts revealed systemic security failures in self-hosted large language model (LLM) infrastructure. Default deployments lacked authentication, exposing sensitive user data, chatbot conversations, and internal business logic. Instances of credential leaks, plaintext API keys, and unsecured agent management platforms were identified across government, marketing, and finance sectors. Unauthenticated Ollama APIs (31% of 5,200+ tested) enabled potential abuse of frontier models without accountability. Analysis confirmed insecure defaults, arbitrary code execution vulnerabilities, and inadequate sandboxing practices. The findings indicate that rapid AI adoption has outpaced security controls, with self-hosted AI tools exhibiting higher exposure risks than other software categories analyzed.
Timeline
- 05.05.2026 13:30 · 1 article
Systemic Security Flaws in Self-Hosted AI Infrastructure Exposed During Large-Scale Scan
Security researchers identified over 1 million exposed AI services across 2 million hosts, highlighting default-insecure deployments, lack of authentication, and critical misconfigurations in self-hosted LLM infrastructure. Key findings include unauthenticated Ollama APIs (31% of 5,200+ tested), exposed chatbot conversation histories, plaintext API keys, and unsecured agent management platforms (n8n, Flowise). Analysis confirmed insecure defaults, arbitrary code execution vulnerabilities, and inadequate sandboxing, with over 90 instances observed across government, marketing, and finance sectors.
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
Information Snippets
- Over 1 million exposed AI services were identified across 2 million hosts using certificate transparency logs, with a focus on self-hosted LLM infrastructure.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
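The host-discovery step described above leans on certificate transparency (CT) logs: every TLS certificate issued for a hostname lands in a public, searchable log. A minimal sketch of that recon step, assuming crt.sh's public JSON endpoint and a hypothetical search pattern:

```python
import json
import urllib.parse
import urllib.request


def extract_hostnames(entries: list[dict]) -> set[str]:
    """Collect hostnames from crt.sh-style CT entries; the name_value
    field can hold several newline-separated subject alt names."""
    names: set[str] = set()
    for entry in entries:
        names.update(n for n in entry.get("name_value", "").split("\n") if n)
    return names


def ct_search(pattern: str) -> set[str]:
    """Query crt.sh's JSON endpoint for certificates matching a pattern
    (e.g. '%.ai.example.com' -- a hypothetical label to hunt for)."""
    url = "https://crt.sh/?q=" + urllib.parse.quote(pattern) + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return extract_hostnames(json.load(resp))
```

Researchers then resolve and probe the harvested hostnames; the article does not publish the exact tooling, so this is only an illustration of the CT-log angle.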
- No authentication was enabled by default in many AI projects, leaving user conversations, API keys, and internal workflows exposed to unauthenticated access.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
- Exposed chatbot instances based on OpenUI and similar frameworks revealed full conversation histories, including enterprise user interactions and NSFW content.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
- Unauthenticated Ollama APIs were found on 31% of 5,200+ queried servers, with 518 instances wrapping paid frontier models from Anthropic, DeepSeek, Moonshot, Google, and OpenAI.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
- Exposed agent management platforms (e.g., n8n, Flowise) revealed entire business logic, credential lists, and integrated third-party tool access without access controls.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
- Analysis identified insecure deployment practices including hardcoded credentials, Docker misconfigurations, applications running as root, and arbitrary code execution in popular AI projects.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30
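The root-user and Docker misconfiguration findings map to well-known image-hardening steps. A hypothetical Dockerfile sketch (`app.py` and the base image are placeholders, not from the report) illustrating the non-root pattern:

```dockerfile
# Pin a slim base image instead of a mutable "latest" tag
FROM python:3.12-slim

# Create an unprivileged account -- many of the flagged apps ran as root
RUN useradd --create-home --shell /usr/sbin/nologin appuser

WORKDIR /app
COPY --chown=appuser:appuser . .

# Drop privileges before the app starts
USER appuser

# app.py stands in for the AI service's entry point
CMD ["python", "app.py"]
```

Publishing the container port only on the host's loopback (`docker run -p 127.0.0.1:8080:8080 ...`) and fronting it with an authenticating reverse proxy also addresses the no-auth-by-default exposure described above.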
- Over 90 exposed instances were observed across sectors including government, marketing, and finance, enabling attackers to modify workflows, redirect traffic, or poison responses.
  First reported: 05.05.2026 13:30 · 1 source, 1 article
- We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is — thehackernews.com — 05.05.2026 13:30