Logo
FrontierNews.ai

175,000 Exposed AI Servers Found Online: Why Your AI Agent Framework Might Be a Security Disaster

A new security scan has revealed a staggering vulnerability in how organizations are deploying AI agent frameworks: nearly 175,000 Ollama instances and more than 8,000 Model Context Protocol (MCP) servers are exposed to the public internet with little to no authentication protecting them. This discovery underscores a critical gap between the rapid adoption of agentic AI systems and the security practices needed to protect them. As companies rush to deploy AI agents using frameworks like OpenClaw, LangChain, and other orchestration tools, many are leaving their inference endpoints wide open to potential attackers.

What Are These Exposed AI Endpoints, and Why Should You Care?

An AI endpoint is essentially a server running an AI model or agent framework that accepts requests from the internet. When these endpoints lack authentication, anyone can access them, query the models, extract sensitive information, or abuse the tools the agent has been given. According to research from Bishop Fox, a cybersecurity firm, nearly half of the exposed endpoints discovered offer code execution capabilities without any authentication barrier. This means an attacker could potentially run arbitrary code on systems that organizations believed were secure.

The problem extends across multiple popular frameworks and tools. AIMap, a free open-source security scanner released by Bishop Fox, now maps and tests these exposed endpoints at internet scale. The tool discovered vulnerabilities spanning:

  • Ollama instances: Over 175,000 exposed servers running Ollama, a framework for running large language models locally
  • MCP servers: More than 8,000 open Model Context Protocol endpoints designed for tool use and function calling
  • Framework deployments: Exposed instances of LangServe, LangChain, vLLM, LiteLLM, LocalAI, and OpenClaw systems
  • Web interfaces: Unprotected Open WebUI, LibreChat, Gradio, and Streamlit applications
  • Inference proxies: Exposed OpenAI-compatible endpoints and other generic inference APIs

The scale of the problem is staggering. In a demonstration of AIMap's capabilities, researchers scanned roughly 2,000 live AI endpoints across 50 countries and found that 91 percent lacked any form of authentication. This suggests that the problem is not isolated to a few careless deployments; it reflects a systemic gap in how organizations are approaching AI agent security.

How Do Attackers Exploit These Exposed Endpoints?

An exposed AI endpoint creates multiple attack vectors. The most dangerous involve prompt injection, tool abuse, and model extraction. When an agent framework like OpenClaw or LangChain is exposed without authentication, attackers can craft malicious prompts designed to override the agent's intended behavior. For instance, an attacker could send a prompt that tricks the agent into executing code, deleting files, or accessing sensitive data the agent was never meant to touch.

Tool exposure is particularly concerning. AI agents are powerful precisely because they can interact with external systems through tools and function calling. If an agent has access to a database query tool, a file system tool, or a code execution tool, and that agent is exposed to the internet without authentication, an attacker gains access to those same capabilities. According to the AIMap research, many exposed endpoints revealed not just the models available but also the specific tools and capabilities the agent could invoke.

Another vulnerability is system prompt leakage. The system prompt is the hidden instruction that tells an AI agent how to behave. If an attacker can extract this prompt, they gain insight into the agent's capabilities, constraints, and intended use cases. This information can be used to craft more sophisticated attacks. AIMap's fingerprinting module specifically checks for system prompt leakage as part of its risk scoring.

Why Are Organizations Leaving AI Endpoints Exposed?

The root cause is a mismatch between development speed and security maturity. Organizations are deploying AI agents rapidly because the business value is clear: agents can automate complex workflows, interact with external systems, and handle multi-step reasoning tasks that traditional chatbots cannot. However, the infrastructure and security practices needed to protect these agents have not kept pace.

OpenClaw, for example, is designed with a modular architecture that makes it easy to deploy agents across multiple channels, including Slack, Discord, WhatsApp, Telegram, and Signal. This flexibility is powerful for productivity, but it also means that developers must carefully manage authentication and access controls across multiple integration points. Many teams skip these steps in the rush to get agents into production.

Additionally, many organizations lack AI-specific security controls. According to Bishop Fox's research, only 13 percent of organizations surveyed had dedicated security controls for AI endpoints. This suggests that most companies are treating AI agent deployments like traditional software deployments, without accounting for the unique risks that come with giving AI systems the ability to execute code and interact with external systems.

How Can Organizations Secure Their AI Agent Frameworks?

Securing AI agent deployments requires a multi-layered approach that addresses discovery, authentication, monitoring, and testing. Here are the key steps organizations should take:

  • Implement authentication: Every AI endpoint should require authentication before allowing access. This can be API keys, OAuth, Bearer tokens, or other standard authentication mechanisms. AIMap's fingerprinting module checks for authentication status by examining HTTP response codes; a 200 response on paths like /v1/models indicates no authentication, while 401 or 403 signals that authentication is configured.
  • Scan for exposed endpoints: Organizations should use tools like AIMap to discover their own exposed AI endpoints before attackers do. The tool queries Shodan and uses 32 tuned signatures to find Ollama instances, MCP servers, and other AI frameworks on the public internet.
  • Isolate agents in sandboxed environments: OpenClaw and similar frameworks support sandboxed environments, which are isolated digital containers where agents can work without accidentally accessing production systems or sensitive data.
  • Monitor and log all agent actions: Every action an agent takes should be logged and auditable. This provides a clear trail for human supervisors to review and helps detect suspicious behavior.
  • Test for common vulnerabilities: AIMap includes active attack modules that test for prompt injection, tool abuse, and model extraction. Organizations should run these tests on their own endpoints to identify weaknesses before attackers do.

The security researcher Aashiq Ramachandran from Bishop Fox explained the importance of distinguishing between endpoints that are merely reachable and those that are truly open. He stated, "AIMap distinguishes endpoints that are merely reachable from those that are truly open by interpreting HTTP responses," noting that this distinction is critical for proper risk assessment and triage.

"AIMap distinguishes endpoints that are merely reachable from those that are truly open by interpreting HTTP responses," explained Aashiq Ramachandran, a Bishop Fox security researcher.

Aashiq Ramachandran, Security Researcher at Bishop Fox

What Does This Mean for the Future of AI Agent Adoption?

The discovery of 175,000 exposed Ollama instances and 8,000 open MCP servers signals a critical moment for the AI agent industry. Organizations are moving rapidly from experimentation to production deployment, but security practices have not caught up. The emergence of tools like AIMap suggests that the industry is beginning to recognize this gap and is taking steps to address it.

For developers building with frameworks like OpenClaw, LangChain, and other agentic AI tools, the lesson is clear: security cannot be an afterthought. The ability to give an AI agent access to tools, code execution, and external systems is powerful, but it also creates significant risk. Organizations must treat AI agent deployments with the same rigor they apply to other critical infrastructure, implementing authentication, monitoring, and regular security testing as standard practice.

As the industry matures, we can expect to see more sophisticated security controls built directly into agent frameworks, better tooling for discovering and assessing exposed endpoints, and clearer best practices for secure agent deployment. Until then, organizations deploying AI agents should assume that their endpoints are discoverable and act accordingly.

" }