# mcp-shell

MCP server that runs shell commands. Your LLM gets a tool; you get control over what runs and how.

Built on mark3labs/mcp-go. Written in Go.
## Run it

Docker (easiest):

```sh
docker run -it --rm -v /tmp/mcp-workspace:/tmp/mcp-workspace sonirico/mcp-shell:latest
```

From source:

```sh
git clone https://github.com/sonirico/mcp-shell && cd mcp-shell
make install
mcp-shell
```
## Configure it

Security is off by default. To enable it, point to a YAML config:

```sh
export MCP_SHELL_SEC_CONFIG_FILE=/path/to/security.yaml
mcp-shell
```
Secure mode (recommended): no shell interpretation, executable allowlist only.

```yaml
security:
  enabled: true
  use_shell_execution: false
  allowed_executables:
    - ls
    - cat
    - grep
    - find
    - echo
    - /usr/bin/git
  blocked_patterns: # optional: restrict args on allowed commands
    - '(^|\s)remote\s+(-v|--verbose)(\s|$)'
  max_execution_time: 30s
  max_output_size: 1048576
  working_directory: /tmp/mcp-workspace
  audit_log: true
```
Legacy mode: shell execution, allowlist/blocklist by command string (vulnerable to injection if not careful).

```yaml
security:
  enabled: true
  use_shell_execution: true
  allowed_commands: [ls, cat, grep, echo]
  blocked_patterns: ['rm\s+-rf', 'sudo\s+']
  max_execution_time: 30s
  audit_log: true
```
## Wire it up

Claude Desktop: add to your MCP config:

```json
{
  "mcpServers": {
    "shell": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "sonirico/mcp-shell:latest"],
      "env": { "MCP_SHELL_LOG_LEVEL": "info" }
    }
  }
}
```
For custom config, mount the file and set the env:

```json
{
  "command": "docker",
  "args": [
    "run", "--rm", "-i",
    "-v", "/path/to/security.yaml:/etc/mcp-shell/security.yaml",
    "-e", "MCP_SHELL_SEC_CONFIG_FILE=/etc/mcp-shell/security.yaml",
    "sonirico/mcp-shell:latest"
  ]
}
```
## Tool API

| Parameter | Type | Description |
|---|---|---|
| `command` | string | Shell command to run (required) |
| `base64` | boolean | Encode stdout/stderr as base64 (default: false) |

The response includes `status`, `exit_code`, `stdout`, `stderr`, `command`, `execution_time`, and optional `security_info`.
## Environment variables

| Variable | Description |
|---|---|
| `MCP_SHELL_SEC_CONFIG_FILE` | Path to security YAML |
| `MCP_SHELL_SERVER_NAME` | Server name (default: "mcp-shell 🐚") |
| `MCP_SHELL_LOG_LEVEL` | `debug`, `info`, `warn`, `error`, `fatal` |
| `MCP_SHELL_LOG_FORMAT` | `json`, `console` |
| `MCP_SHELL_LOG_OUTPUT` | `stdout`, `stderr`, `file` |
## Development

```sh
make install dev-tools  # deps + goimports, golines
make fmt test lint
make docker-build       # build image locally
make release            # binary + docker image
```
## Security

- Default: no restrictions. Commands run with full access. Fine for local dev; dangerous otherwise.
- Secure mode (`use_shell_execution: false`): executable allowlist, no shell parsing. Blocks injection.
- Docker: runs as non-root, Alpine-based. Use it in production.
## Contributing

Fork, branch, `make fmt test`, open a PR.