McpVanguard

An open-source security proxy and active firewall for the Model Context Protocol (MCP).

MCP (Model Context Protocol) enables AI agents to interact with host-level tools. McpVanguard interposes between the agent and the system, providing real-time inspection and enforcement before every tool call reaches the server.

Transparent integration. Zero configuration required for existing servers.

Tests PyPI version License: Apache 2.0 Python 3.11+

Part of the Provnai Open Research Initiative — Building the Immune System for AI.


⚡ Quickstart

pip install mcp-vanguard

Local stdio wrap (no network):

vanguard start --server "npx @modelcontextprotocol/server-filesystem ."

Cloud Security Gateway (SSE, deploy on Railway):

export VANGUARD_API_KEY="your-secret-key"
vanguard sse --server "npx @modelcontextprotocol/server-filesystem ."

Deploy on Railway

📖 Full Railway Deployment Guide


πŸ›‘οΈ Getting Started (New Users)

Bootstrap your security workspace with a single command:

# 1. Initialize safe zones and .env template
vanguard init

# 2. (Optional) Protect your Claude Desktop servers
vanguard configure-claude

# 3. Launch the visual security dashboard
vanguard ui --port 4040

🧠 How it works

Every time an AI agent calls a tool (e.g. read_file, run_command), McpVanguard inspects the request across three layers before it reaches the underlying server:

| Layer | What it checks | Latency |
|-------|----------------|---------|
| L1 — Safe Zones & Rules | Kernel-level isolation (`openat2` / Windows canonicalization) and 50+ deterministic signatures | ~16ms |
| L2 — Semantic | LLM-based intent scoring via OpenAI, DeepSeek, Groq or Ollama | Async |
| L3 — Behavioral | Shannon entropy ($H(X)$) scorer and sliding-window anomaly detection | Stateful |

Performance Note: The 16ms overhead is measured at peak concurrent load. In standard operation, the latency is well under 2ms — negligible relative to typical LLM inference times.

If a request is blocked, the agent receives a standard JSON-RPC error response. The underlying server never sees it.
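The shape of that rejection can be sketched as follows. This is a minimal illustration of a JSON-RPC 2.0 error envelope, not McpVanguard's documented wire format; the error code and message text are assumptions:

```python
import json

def make_block_response(request_id: int, reason: str) -> str:
    """Build a JSON-RPC 2.0 error envelope like the one a blocked
    agent call might receive (field values here are illustrative)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,           # echoes the agent's original request id
        "error": {
            "code": -32000,         # implementation-defined server-error range
            "message": f"Blocked by security policy: {reason}",
        },
    })

# The agent receives this error; the MCP server process never sees the call.
print(make_block_response(7, "path traversal detected"))
```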

Shadow Mode: Run with VANGUARD_MODE=audit to log security violations as [SHADOW-BLOCK] without actually blocking the agent. Perfect for assessing risk in existing production workflows.


πŸ›‘οΈ What gets blocked

  • Sandbox Escapes: TOCTOU symlink attacks, Windows 8.3 shortnames (PROGRA~1), DOS device namespaces
  • Data Exfiltration: High-entropy payloads (H > 7.5, e.g. cryptographic keys) and velocity-based secret scraping
  • Filesystem attacks: Path traversal (../../etc/passwd), null bytes, restricted paths (~/.ssh), Unicode homograph evasion
  • Command injection: Pipe-to-shell, reverse shells, command chaining via ; && \n, expansion bypasses
  • SSRF & Metadata Protection: Blocks access to cloud metadata endpoints (AWS/GCP/Azure) and hex/octal encoded IPs.
  • Jailbreak Detection: Actively identifies prompt injection patterns and instruction-ignore sequences.
  • Continuous Monitoring: Visualize all of the above in real-time with the built-in Security Dashboard.
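The entropy heuristic behind the exfiltration rule can be sketched in a few lines. The function names and threshold handling below are illustrative, not McpVanguard's internal API; only the H > 7.5 threshold comes from the rule above:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy H(X) = -sum p(x) * log2 p(x), in bits per byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_exfiltration(payload: bytes, threshold: float = 7.5) -> bool:
    # Random key material approaches the 8 bits/byte maximum;
    # natural-language text typically sits around 4-5 bits/byte.
    return shannon_entropy(payload) > threshold
```

A payload that uses the full byte range scores the maximum 8.0 bits/byte, while repetitive text scores near zero, so the 7.5 cutoff separates key-like blobs from ordinary tool output.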

📊 Security Dashboard

Launch the visual monitor to see your agent's activity and security status in real-time.

vanguard ui --port 4040

The dashboard provides a low-latency, HTMX-powered feed of:

  • Real-time Blocks: Instantly see which rule or layer triggered a rejection.
  • Entropy Scores: Pulse-check the $H(X)$ levels of your agent's data streams.
  • Audit History: Contextual log fragments for rapid incident response.

VEX Protocol — Deterministic Audit Log

When McpVanguard blocks an attack, it creates an OPA/Cerbos-compatible Secure Tool Manifest detailing the Principal, Action, Resource, and environmental snapshot.

This manifest is then sent as a cryptographically-signed report to the VEX Protocol. VEX anchors that report to the Bitcoin blockchain via the CHORA Gate.

This means an auditor can independently verify exactly what was blocked, the entropy score, and why — without relying on your local logs.
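A minimal sketch of such a signed manifest, assuming illustrative field names and HMAC-SHA256 standing in for whatever signature scheme VEX actually uses:

```python
import hashlib
import hmac
import json

def build_manifest(principal: str, action: str, resource: str, entropy: float) -> dict:
    """Assemble a Principal/Action/Resource record plus an environment
    snapshot. Field names are illustrative, not the VEX wire format."""
    return {
        "principal": principal,
        "action": action,
        "resource": resource,
        "entropy_score": entropy,
        "environment": {"proxy": "mcp-vanguard", "decision": "BLOCK"},
    }

def sign_manifest(manifest: dict, key: bytes) -> str:
    """Sign the canonical (sorted-key) JSON encoding so any byte-for-byte
    identical manifest verifies later, independent of local logs."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, key: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_manifest(manifest, key), signature)
```

Canonicalizing before signing is the key design point: an auditor re-serializes the same fields, recomputes the digest, and compares.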

export VANGUARD_VEX_URL="https://api.vexprotocol.com"
export VANGUARD_VEX_KEY="your-agent-jwt"
export VANGUARD_AUDIT_FORMAT="json" # Optional: Route JSON logs directly into SIEM (ELK, Splunk)
vanguard sse --server "..." --behavioral

Architecture

                      ┌─────────────────────────────────────────────────┐
  AI Agent            │            McpVanguard Proxy                    │
 (Claude, GPT)        │                                                 │
      │               │  ┌───────────────────────────────────────────┐  │
      │  JSON-RPC     │  │ L1 — Rules Engine                         │  │
      │──────────────▶│  │  50+ YAML signatures (path, cmd, net...)  │  │
      │  (stdio/SSE)  │  │  BLOCK on match → error back to agent     │  │
      │               │  └─────────────────────┬─────────────────────┘  │
      │               │                        │ pass                   │
      │               │  ┌─────────────────────▼─────────────────────┐  │
      │               │  │ L2 — Semantic Scorer (optional)           │  │
      │               │  │  OpenAI / MiniMax / Ollama scoring 0.0→1.0│  │
      │               │  │  Async — never blocks the proxy loop      │  │
      │               │  └─────────────────────┬─────────────────────┘  │
      │               │                        │ pass                   │
      │               │  ┌─────────────────────▼─────────────────────┐  │
      │               │  │ L3 — Behavioral Analysis (optional)       │  │
      │               │  │  Sliding window: scraping, enumeration    │  │
      │               │  │  In-memory or Redis (multi-instance)      │  │
      │               │  └─────────────────────┬─────────────────────┘  │
      │               │                        │                        │
      │◀── BLOCK ─────│────────────────────────┤ (any layer)            │
      │  (JSON-RPC    │                        │ ALLOW                  │
      │   error)      │                        ▼                        │
      │               │           MCP Server Process                    │
      │               │        (filesystem, shell, APIs...)             │
      │               └────────────────────────┬────────────────────────┘
      │                                        │
      │◀────────────── response ───────────────┘
      │
      │   (on BLOCK)
      └──────────────▶ VEX API ──▶ CHORA Gate ──▶ Bitcoin Anchor
                       (async, fire-and-forget audit receipt)
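The layered decision flow above can be sketched as a chain of checks where any layer may short-circuit with a block. The layer bodies here are simplified placeholders, not the real rule engine:

```python
from typing import Callable, Optional

# Each layer inspects a tool call and returns a block reason, or None to pass.
Layer = Callable[[dict], Optional[str]]

def l1_rules(call: dict) -> Optional[str]:
    # Placeholder for the deterministic signature engine.
    if ".." in str(call.get("arguments", {}).get("path", "")):
        return "L1: path traversal signature"
    return None

def l3_behavioral(call: dict) -> Optional[str]:
    # Placeholder for sliding-window anomaly detection.
    return None

def dispatch(call: dict, layers: list[Layer]) -> dict:
    """Run the call through each layer; forward only if all pass."""
    for layer in layers:
        reason = layer(call)
        if reason is not None:
            # Short-circuit: the MCP server never sees the request.
            return {"error": {"code": -32000, "message": f"Blocked: {reason}"}}
    return {"result": "forwarded to MCP server"}
```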

L2 Semantic Backend Options

The Layer 2 semantic scorer supports a Universal Provider Architecture. Set the corresponding API keys to activate a backend — the first available key wins (priority: Custom > OpenAI > MiniMax > Ollama):

| Backend | Env Vars | Notes |
|---------|----------|-------|
| Universal Custom (DeepSeek, Groq, Mistral, vLLM) | `VANGUARD_SEMANTIC_CUSTOM_KEY`, `VANGUARD_SEMANTIC_CUSTOM_MODEL`, `VANGUARD_SEMANTIC_CUSTOM_URL` | Fast, cheap inference. Examples — Groq: `https://api.groq.com/openai/v1`, DeepSeek: `https://api.deepseek.com/v1` |
| OpenAI | `VANGUARD_OPENAI_API_KEY`, `VANGUARD_OPENAI_MODEL` | Default model: `gpt-4o-mini` |
| MiniMax | `VANGUARD_MINIMAX_API_KEY`, `VANGUARD_MINIMAX_MODEL`, `VANGUARD_MINIMAX_BASE_URL` | Default model: `MiniMax-M2.5` |
| Ollama (local) | `VANGUARD_OLLAMA_URL`, `VANGUARD_OLLAMA_MODEL` | Default model: `phi4-mini`. No API key required |

# Example: Use Groq for ultra-fast L2 scoring
export VANGUARD_SEMANTIC_ENABLED=true
export VANGUARD_SEMANTIC_CUSTOM_KEY="your-groq-key"
export VANGUARD_SEMANTIC_CUSTOM_MODEL="llama3-8b-8192"
export VANGUARD_SEMANTIC_CUSTOM_URL="https://api.groq.com/openai/v1"
vanguard start --server "npx @modelcontextprotocol/server-filesystem ."

Project Status

| Phase | Goal | Status |
|-------|------|--------|
| Phase 1 | Foundation (Proxy, CLI, Defensive Rules) | [DONE] |
| Phase 2 | Intelligence (L2 Semantic, L3 Behavioral) | [DONE] |
| Phase 3 | Flight Recorder (VEX & CHORA Integration) | [DONE] |
| Phase 4 | Distribution (stable PyPI release) | [DONE] |
| Phase 5 | Production Hardening (v1.1.3 stability) | [DONE] |
| Phase 6 | Security Audit Remediation (v1.1.4 hardening) | [DONE] |
| Phase 7 | Titan-Grade L1 Perimeter (v1.5.0 Forensic Hardening) | [DONE] |
| Phase 8 | Production Hardening & Cloud Scaling (v1.6.0 Release) | [DONE] |
| Phase 9 | Agent Identity & VEX v0.2 Spec | [IN PROGRESS] |

Resources


License

Apache License 2.0 β€” see LICENSE.

Built by the Provnai Open Research Initiative. "Verifying the thoughts and actions of autonomous agents."
