Kitsune MCP

Shape-shifting MCP hub — shapeshift() into 10,000+ servers at runtime. One entry point, no restarts, 7 registries.


One MCP entry. 10,000+ servers on demand.
Load only the tools you need. Switch instantly. No restarts.



Why Kitsune?

In Japanese folklore, the Kitsune (狐) is a fox spirit of extraordinary intelligence and magical power. What makes it remarkable is how it grows: with age and wisdom, a Kitsune gains additional tails — each one representing a new ability it has mastered. It can shapeshift, take on any form it chooses, borrow the powers of others, and just as freely cast them off when the purpose is fulfilled. One fox. Many forms. Total fluidity.

This tool works the same way.

shapeshift("brave-search") — the fox takes on a new form, its tools appear natively. shiftback() — it returns to its true shape, ready to become something else.

Each server it shapeshifts into is a new tail. Each capability borrowed and released cleanly. One entry in your config. Every server in the MCP ecosystem, on demand.

I am not Japanese, and I use this name with the highest respect for the mythology and culture it comes from. The parallel felt too precise to ignore — a spirit that shapeshifts between forms, gains new powers, and releases them at will. That is exactly what this tool does.


The problem with static MCP setups

Every server you add to your config loads all its tools at startup — and keeps them there, all session long. Whether your agent uses them or not.

Five servers means 3,000–5,000 tokens of overhead on every request. Your agent sees 50+ tools and has to reason about all of them before it can act.

Kitsune MCP is one entry that replaces all of them.

shapeshift("brave-search", tools=["web_search"])  # only the tool you need
# task done โ€” switch instantly:
shiftback()
shapeshift("supabase")                            # different server, no restart
shiftback()
shapeshift("@modelcontextprotocol/server-github") # and again

One config entry. Any server across 7 registries. Load only the tools the current task needs — 2 out of 20 if that's all you need. Your agent stays focused and your costs stay low.

Base overhead: 7 tools, ~650 tokens (measured). Each mounted server adds only what you actually load.


Built for two audiences

Adaptive agents

An agent that loads everything upfront burns tokens on tools it never calls — and makes worse decisions because it sees too many options at once. An agent that mounts on demand is leaner, faster, and more focused:

  • Shapeshift into only what the current task needs — shiftback when done
  • shapeshift(server_id, tools=[...]) to cherry-pick — load 2 tools from a server that has 20
  • Chain across multiple servers in one session without touching config or restarting
  • Token overhead stays flat: ~650 base + only what you load

Kitsune MCP is designed around the real economics of an agent loop.

MCP developers

Beyond MCP Inspector's basic schema viewer, Kitsune MCP gives you a full development workflow inside your actual AI client:

Need                                        Tool
Explore a server's tools and schemas        inspect(server_id)
Quality-score your server end-to-end        test(server_id) → score 0–100
Benchmark tool latency                      bench(server_id, tool, args) → p50, p95, min, max
Prototype endpoint-backed tools live        craft(name, description, params, url)
Test inside real Claude/Cursor workflows    shapeshift() → call tools natively → shiftback()
Compare two servers side by side            shapeshift into one, test, shiftback, shapeshift into the other

No separate web UI. No isolated test environment. Test how your server actually behaves when an AI uses it.


Two modes

                 kitsune-mcp                                                    kitsune-forge
Purpose          Adaptive agents, everyday mounting                             MCP evaluation, benchmarking, crafting
Tools            7 (shapeshift, shiftback, search, inspect, call, key, status)  All 17
Token overhead   ~650 tokens                                                    ~1,700 tokens
Use when         Agents mounting per task, minimal token budget                 Discovering, testing, benchmarking, prototyping

Token numbers are measured from actual registered schemas — see examples/benchmark.py.

Both modes from the same package:

{ "command": "kitsune-mcp" }                        ← lean (default)
{ "command": "kitsune-forge" }                      ← full suite
{ "command": "kitsune-mcp",
  "env": { "KITSUNE_TOOLS": "shapeshift,shiftback,key" } }  ← custom

How It Fits Together

Kitsune MCP — lean profile

shapeshift() injects tools directly at runtime via FastMCP's live API. Token overhead stays flat regardless of how many servers you explore.
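To illustrate the flat-overhead mechanism, here is a minimal, hypothetical registry in plain Python. It is not Kitsune's actual internals and not the FastMCP API: tools are added and removed at runtime, and each change fires a callback standing in for the notifications/tools/list_changed message.

```python
from typing import Callable, Dict


class ToolRegistry:
    """Illustrative stand-in for runtime tool registration (hypothetical)."""

    def __init__(self, notify: Callable[[], None]):
        self._tools: Dict[str, Callable] = {}
        self._notify = notify  # stands in for notifications/tools/list_changed

    def mount(self, tools: Dict[str, Callable]) -> None:
        self._tools.update(tools)
        self._notify()

    def unmount_all(self) -> None:
        self._tools.clear()
        self._notify()

    def names(self) -> list:
        return sorted(self._tools)


events = []
reg = ToolRegistry(notify=lambda: events.append("list_changed"))
reg.mount({"web_search": lambda q: f"results for {q}"})
print(reg.names())      # ['web_search']
reg.unmount_all()
print(reg.names())      # []
print(events)           # ['list_changed', 'list_changed']
```

However many servers you mount and unmount this way, only what is currently registered occupies the tool list.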

Need the full evaluation suite? kitsune-forge adds execution, connection management, benchmarking, and tool crafting:

Kitsune Forge — extended suite

Quick Start

pip install kitsune-mcp

Add to your MCP client config — once, globally:

{
  "mcpServers": {
    "kitsune": {
      "command": "kitsune-mcp"
    }
  }
}

Works with Claude Desktop, Claude Code, Cursor, Cline, OpenClaw, Continue.dev, Zed, and any MCP-compatible client. No API keys needed.

Client                      Global config file
Claude Desktop (macOS)      ~/Library/Application Support/Claude/claude_desktop_config.json
Claude Desktop (Windows)    %APPDATA%\Claude\claude_desktop_config.json
Claude Code                 ~/.claude/mcp.json
Cursor / Windsurf           ~/.cursor/mcp.json
Cline / Continue.dev        VS Code settings / ~/.continue/config.json
OpenClaw                    MCP config in OpenClaw settings

Server Sources

Kitsune MCP searches across 7 registries in parallel — tens of thousands of servers, no single one required.

Registry                          Auth            registry= value
modelcontextprotocol/servers      None            official
registry.modelcontextprotocol.io  None            mcpregistry
Glama                             None            glama
npm                               None            npm
PyPI                              None            pypi
GitHub repos                      None            github:owner/repo
Smithery                          Free API key    smithery

Default search() fans out across all no-auth registries automatically. Add a SMITHERY_API_KEY to extend discovery with Smithery's hosted server catalog (HTTP servers, no local install required).


How It Works

The proxy model

Kitsune MCP is a dynamic MCP proxy. It sits between your AI client and any number of other MCP servers, connecting to them on demand:

Your AI client
    │
    ▼
Kitsune MCP          ← the one entry in your config
    │
    ├── (on shapeshift) ──► filesystem server   (spawned subprocess)
    ├── (on shapeshift) ──► brave-search server (spawned subprocess)
    └── (on shapeshift) ──► remote HTTP server  (HTTP+SSE connection)

Nothing is copied. When you call a mounted tool, Kitsune MCP forwards the call to the original server via JSON-RPC and returns the result. The server's logic always runs on the server — Kitsune MCP only relays the schema and the call.

What shapeshift() does, step by step

  1. Connects to the target server via the right transport (stdio subprocess, HTTP, WebSocket)
  2. Handshakes — sends MCP initialize / notifications/initialized
  3. Fetches tools/list, resources/list, prompts/list from the server
  4. Registers each tool as a native FastMCP tool — a proxy closure with the exact signature from the schema
  5. Notifies the AI client (notifications/tools/list_changed) so the new tools appear immediately

The AI sees read_file, write_file, list_directory as if they were always there. There's no wrapper or call_tool("filesystem", ...) indirection — the tools are first-class.

shiftback() reverses all of it: deregisters the proxy closures, clears resources and prompts, notifies the client.
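Step 4 above, the proxy closure, can be sketched like this. Everything here is illustrative: make_proxy and fake_transport are hypothetical names, and the stub simply echoes the JSON-RPC request that a real transport would send over stdio, HTTP, or WebSocket.

```python
def make_proxy(tool_name, send_request):
    """Build a native-looking callable that forwards to a backing server."""
    def proxy(**arguments):
        # The real call would travel as a JSON-RPC tools/call request.
        return send_request("tools/call", {"name": tool_name, "arguments": arguments})
    proxy.__name__ = tool_name  # so the tool appears under its own name
    return proxy


def fake_transport(method, params):
    # Stub: echoes the request instead of speaking JSON-RPC to a server.
    return {"method": method, "params": params}


read_file = make_proxy("read_file", fake_transport)
print(read_file(path="/tmp/data.csv"))
# → {'method': 'tools/call', 'params': {'name': 'read_file', 'arguments': {'path': '/tmp/data.csv'}}}
```

Deregistering such closures is what makes an atomic shiftback() possible: drop the callables, and nothing of the mounted server remains.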

Resources and prompts

shapeshift() proxies all three MCP primitives, not just tools:

Primitive    What gets proxied
Tools        Every tool from tools/list, registered with its exact parameter schema
Resources    Static resources from resources/list — readable via the MCP resources API
Prompts      Every prompt from prompts/list, with its argument signature

Template URIs (e.g. file:///{path}) are skipped — they require parameter binding that adds complexity with little practical gain. Everything else is proxied.
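The template check itself is simple. A plausible sketch, with assumed logic rather than the actual implementation: an RFC 6570 style placeholder like {path} marks a template, which is skipped; concrete URIs are proxied.

```python
import re


def is_template_uri(uri: str) -> bool:
    # A {placeholder} anywhere in the URI marks it as a template (assumption).
    return re.search(r"\{[^}]+\}", uri) is not None


uris = ["file:///{path}", "file:///etc/hosts", "resource://logs/{date}"]
proxied = [u for u in uris if not is_template_uri(u)]
print(proxied)   # ['file:///etc/hosts']
```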

Transport is automatic

Server source       How it runs
npm package         npx <package> — spawned locally
pip package         uvx <package> — spawned locally
GitHub repo         npx github:user/repo or uvx --from git+https://...
Docker image        docker run --rm -i --memory 512m <image>
Smithery hosted     HTTP+SSE (requires SMITHERY_API_KEY)
WebSocket server    ws:// / wss://
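The table above amounts to a dispatch on server source. A hedged sketch with illustrative rules, not Kitsune's exact resolver:

```python
def launch_command(source: str, ident: str) -> list:
    """Map a server source to the subprocess command that would run it (sketch)."""
    if source == "npm":
        return ["npx", ident]
    if source == "pip":
        return ["uvx", ident]
    if source == "github":
        return ["npx", f"github:{ident}"]
    if source == "docker":
        return ["docker", "run", "--rm", "-i", "--memory", "512m", ident]
    raise ValueError(f"unknown source: {source}")


print(launch_command("npm", "@modelcontextprotocol/server-filesystem"))
# → ['npx', '@modelcontextprotocol/server-filesystem']
```

Remote sources (Smithery hosted, ws://) would take a connection branch instead of a spawn branch.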

Why inspect() before shapeshift()

inspect() connects to the server and fetches its schemas — but does not register anything. Zero tools added to context, zero tokens consumed by the AI.

Use it to:

  • See exact parameter names and types before committing
  • Check credential requirements upfront (avoid a cryptic error mid-task)
  • Get the measured token cost of the mount so you can budget
  • Verify the server actually starts and responds before a live session

inspect("mcp-server-brave-search")
# → CREDENTIALS
# →   ✗ missing  BRAVE_API_KEY — Brave Search API key
# →   Add to .env:  BRAVE_API_KEY=your-value
# → Token cost: ~99 tokens (measured)

# Add the key to .env — picked up immediately, no restart needed
# Then mount and use in the same session:
shapeshift("mcp-server-brave-search")
call("brave_web_search", arguments={"query": "MCP protocol 2025"})

Security

Kitsune MCP introduces a trust model for servers you haven't personally audited.

Trust tiers

Every shapeshift(), call(), and connect() result shows where the server comes from:

Tier         Sources                                    Indicator
High         official (modelcontextprotocol/servers)    ✓ Source: official
Medium       mcpregistry, glama, smithery               ✓ Source: smithery
Community    npm, pypi, github                          ⚠️ Source: npm (community — not verified)

Install command validation

Before spawning any subprocess, Kitsune MCP validates the executable name:

  • Blocks shell metacharacters (&, ;, |, `, $) — prevents injection via a crafted server ID
  • Blocks path traversal (../) — prevents escaping to arbitrary binaries

Arguments are passed directly to asyncio.create_subprocess_exec (never a shell), so they are not subject to shell interpretation.
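A minimal sketch of those two checks, with assumed rules rather than the exact implementation:

```python
def validate_executable(name: str) -> None:
    """Reject executable names that could smuggle shell syntax or paths (sketch)."""
    if any(ch in name for ch in "&;|`$"):
        raise ValueError(f"shell metacharacter in executable name: {name!r}")
    if ".." in name:
        raise ValueError(f"path traversal in executable name: {name!r}")


validate_executable("npx")                      # ok
for bad in ("npx; rm -rf /", "../../bin/sh"):
    try:
        validate_executable(bad)
    except ValueError as e:
        print(e)
```

Because the validated name and its arguments then go straight to create_subprocess_exec, there is no shell layer left to exploit.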

Credential warnings

shapeshift() scans tool descriptions for environment-variable names. If a tool mentions BRAVE_API_KEY and that variable isn't set, you get a warning immediately — before you call anything:

⚠️  Credentials may be required — add to .env:
  BRAVE_API_KEY=your-value
  Or: key("BRAVE_API_KEY", "your-value")
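The probe could look something like this sketch; the regex heuristic is an assumption on my part, not the documented behavior:

```python
import os
import re

# UPPER_SNAKE_CASE with at least one underscore, e.g. BRAVE_API_KEY (assumed pattern).
ENV_VAR = re.compile(r"\b[A-Z][A-Z0-9]*(?:_[A-Z0-9]+)+\b")


def missing_credentials(descriptions: list) -> set:
    """Return env-var names mentioned in tool descriptions but not currently set."""
    mentioned = set()
    for text in descriptions:
        mentioned.update(ENV_VAR.findall(text))
    return {name for name in mentioned if name not in os.environ}


descs = ["Web search. Requires BRAVE_API_KEY in the environment."]
print(missing_credentials(descs))   # {'BRAVE_API_KEY'}, unless it is already set
```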

Process isolation and sandboxing

  • stdio servers run as separate OS processes — no shared memory with Kitsune MCP
  • Docker servers run with --rm -i --memory 512m --label kitsune-mcp=1
  • fetch() blocks private IPs, loopback, and non-HTTPS URLs (SSRF protection)
  • The process pool has a hard cap of 10 concurrent processes and evicts idle ones after 1 hour
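The fetch() guard described above might look like this sketch (assumed checks; real code would also resolve hostnames before testing the IP, since a DNS name can point at a private address):

```python
import ipaddress
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """SSRF guard sketch: require HTTPS, reject private/loopback IP literals."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False
    try:
        ip = ipaddress.ip_address(parts.hostname or "")
    except ValueError:
        return True   # hostname, not an IP literal; real code would resolve it
    return not (ip.is_private or ip.is_loopback)


print(is_safe_url("https://example.com/page"))   # True
print(is_safe_url("http://example.com/page"))    # False (not HTTPS)
print(is_safe_url("https://127.0.0.1/admin"))    # False (loopback)
```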

What You Can Access

One kitsune-mcp entry unlocks any of these on demand — no config changes, no restart:

Category          Servers                                   Key needed             Lean tokens
Web search        Brave Search, Exa, Linkup, Parallel       Free API keys          ~150–993
Web scraping      Firecrawl, ScrapeGraph AI                 Free tiers             ~400 (lean)
Code & repos      GitHub (official, 26 tools)               Free GitHub token      ~500 (lean)
Productivity      Notion, Linear, Slack                     Free workspace keys    ~400 (lean)
Google            Maps, Calendar, Gmail, Drive              Free GCP key / OAuth   varies
Memory            Mem0, knowledge graphs                    Free tiers             ~300
No key required   Filesystem, Git, weather, Yahoo Finance   —                      ~300–1,000

The same pattern works for all of them:

shapeshift("brave")                                    # web search in 2 tools
call("brave_web_search", arguments={"query": "…"})

shapeshift("firecrawl-mcp", tools=["scrape","search"]) # scraping, lean (2 of 9 tools)
call("scrape", arguments={"url": "https://…"})

shapeshift("@modelcontextprotocol/server-github", tools=["create_issue","search_repositories"])
call("create_issue", arguments={"owner": "…", "repo": "…", "title": "…"})

Token cost scales with what you load, not what exists. A 26-tool GitHub server costs ~500 tokens if you only mount 3 tools. See .env.example for the full key catalog with lean mount hints.

Security note on .env

Kitsune MCP re-reads .env on every call — which means adding a key instantly activates it. That convenience comes with a responsibility: .env is the single place all your API keys live. A few practices worth following:

  • Add .env to .gitignore — never commit real keys
  • Use project-level .env for project-specific keys; ~/.kitsune/.env for personal global keys
  • Prefer minimal OAuth scopes and fine-grained tokens (e.g. GitHub fine-grained tokens with per-repo permissions)
  • Rotate keys that get exposed; Kitsune MCP picks up the new value immediately without restart

Why Not Just X?

"Can't I just add more servers to mcp.json?" — Every configured server starts at launch and exposes all its tools constantly. You can't add or remove servers mid-session without a restart. With 5+ servers you're burning thousands of tokens on every request for tools that are rarely needed. Kitsune MCP keeps the tool list minimal — shapeshift into what you need, shiftback when done.

"What about MCP Inspector?" — MCP Inspector is a standalone web UI that connects to one server and lets you inspect schemas and call tools manually. It's useful for basic debugging but isolated from real AI workflows. Kitsune MCP tests servers inside actual Claude or Cursor sessions — how an AI really uses them. It adds test() scoring, bench() latency numbers, side-by-side server comparison, and craft() for live endpoint prototyping. It also discovers and installs servers on demand; Inspector requires you to already have one running.

"What about mcp-dynamic-proxy?" — It hides tools behind call_tool("brave", "web_search", {...}) — always a wrapper. After shapeshift("mcp-server-brave-search"), Kitsune MCP gives you a real native brave_web_search with the actual schema. mcp-dynamic-proxy also can't discover or install packages at runtime.

"Can FastMCP do this natively?"

                                                       FastMCP native    Kitsune MCP
Proxy a known HTTP/SSE server                          ✅                ✅
Load tools at runtime                                  ✅ (write code)   ✅ shapeshift()
Search registries to discover servers                  ❌                ✅ npm · official · Glama · Smithery
Install npm / PyPI / GitHub packages on demand         ❌                ✅
Atomic shift back — retract all shapeshifted tools     ❌                ✅ shiftback()
Persistent stdio process pool                          ❌                ✅
Zero boilerplate — works after pip install             ❌                ✅

Configuration

Minimal (no API keys)

{
  "mcpServers": {
    "kitsune": { "command": "kitsune-mcp" }
  }
}

Optional integrations

{
  "mcpServers": {
    "kitsune": {
      "command": "kitsune-mcp",
      "env": { "SMITHERY_API_KEY": "your-key" }
    }
  }
}

Get a free key at smithery.ai/account/api-keys. Without it, Kitsune MCP is fully functional via npm, PyPI, official registries, and GitHub.

Frictionless credentials — Kitsune MCP re-reads .env on every inspect(), shapeshift(), and call(). Add a key mid-session and it takes effect immediately — no restart:

# .env (CWD, ~/.env, or ~/.kitsune/.env — all checked, CWD wins)
BRAVE_API_KEY=your-key
GITHUB_TOKEN=ghp_...
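The layering noted in the comment can be sketched as a simple merge where later files win; load_env_layers is a hypothetical name, and the precedence order is as stated above (CWD wins):

```python
from pathlib import Path


def load_env_layers(paths: list) -> dict:
    """Merge KEY=value files; later paths override earlier ones (sketch)."""
    merged = {}
    for path in paths:            # lowest precedence first
        if not path.exists():
            continue
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                merged[key.strip()] = value.strip()
    return merged


# Precedence: ~/.kitsune/.env < ~/.env < ./.env
layers = [Path.home() / ".kitsune/.env", Path.home() / ".env", Path(".env")]
env = load_env_layers(layers)
```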

Or use key() to write to .env and activate in one step:

key("BRAVE_API_KEY", "your-key")   # writes to .env, active immediately

All Tools

kitsune-mcp — lean profile (7 tools, ~650 token overhead)

Tool                              Description
shapeshift(server_id, tools)      Load a server's tools live. tools=[...] for lean shapeshift.
shiftback(kill)                   Remove shapeshifted tools. kill=True terminates the process immediately.
search(query, registry)           Search MCP servers across registries.
inspect(server_id)                Show tools, schemas, and live credential status (✓/✗ per key).
call(tool_name, server_id, args)  Call a tool. server_id optional when shapeshifted — current form used.
key(env_var, value)               Save an API key to .env and load it immediately.
status()                          Show current form, active connections (PID + RAM), token stats.

kitsune-forge — full suite (all 17 tools, ~1,700 token overhead)

Everything above, plus:

Tool                                   Description
run(package, tool, args)               Run from npm/pip directly. uvx:pkg-name for Python.
auto(task, tool, args)                 Search → pick best server → call in one step.
fetch(url, intent)                     Fetch a URL, return compressed text (~17x smaller than raw HTML).
craft(name, description, params, url)  Register a custom tool backed by your HTTP endpoint. shiftback() removes it.
connect(command, name)                 Start a persistent server. Accepts server_id or shell command.
release(name)                          Kill a persistent connection by name.
setup(name)                            Step-by-step setup wizard for a connected server.
test(server_id, level)                 Quality-score a server 0–100.
bench(server_id, tool, args)           Benchmark tool latency — p50, p95, min, max.
skill(qualified_name)                  Load a skill into context. Persisted across sessions.

Usage Examples

Adaptive agent — multi-server session, zero config

# Task 1: read some files
shapeshift("@modelcontextprotocol/server-filesystem", tools=["read_file"])
read_file(path="/tmp/data.csv")
shiftback()

# Task 2: search the web
shapeshift("mcp-server-brave-search")
brave_web_search(query="latest MCP servers 2025")
shiftback()

# Task 3: run a git query
shapeshift("@modelcontextprotocol/server-git", tools=["git_log"])
git_log(repo_path=".", max_count=5)
shiftback()
# Three different servers. One session. Zero config edits.

MCP developer workflow — test your server

# Evaluate your server before publishing
inspect("my-server")               # review schemas and credentials
test("my-server")                  # quality score 0–100
bench("my-server", "my_tool", {})  # p50, p95 latency

# Prototype a tool backed by your local endpoint
craft(
    name="my_tool",
    description="Calls my ranking service",
    params={"query": {"type": "string"}},
    url="http://localhost:8080/rank"
)
my_tool(query="test")   # call it natively inside Claude
shiftback()

Same-session usage with call()

After shapeshift(), use call() immediately — no restart, no server_id needed:

shapeshift("@modelcontextprotocol/server-filesystem")
# → "In this session: call('tool_name', arguments={...})"

call("list_directory", arguments={"path": "/Users/me/project"})
call("read_file", arguments={"path": "/Users/me/project/README.md"})
shiftback()

Search, shapeshift, use, shiftback

search("web search")
shapeshift("mcp-server-brave-search")
key("BRAVE_API_KEY", "your-key")   # picked up immediately
call("brave_web_search", arguments={"query": "MCP protocol 2025"})
shiftback()

Persistent server with setup guidance

connect("uvx voice-mode", name="voice")
setup("voice")                      # shows missing env vars
key("DEEPGRAM_API_KEY", "your-key")
setup("voice")                      # confirms ready
shapeshift("voice-mode")
speak(text="Hello from Kitsune MCP!")
shiftback(kill=True)                    # terminates process, frees RAM

Installation

pip install kitsune-mcp        # from PyPI
# or
git clone https://github.com/kaiser-data/kitsune-mcp && pip install -e .

Requirements: Python 3.12+ · node/npx (for npm servers) · uvx from uv (for pip servers)


Contributing

make dev     # install with dev dependencies
make test    # pytest
make lint    # ruff

Issues and PRs: github.com/kaiser-data/kitsune-mcp


MIT License · Python 3.12+ · Built on FastMCP
