Use command-line tools in a secure fashion as MCP tools.
MCPShell is a tool that allows LLMs to safely execute command-line tools through the Model Context Protocol (MCP). It provides a secure bridge between LLMs and operating-system commands.
Imagine you want Cursor (or some other MCP client) to help you with disk-space problems on your hard disk.
Create a configuration file /my/example.yaml defining your tools:
```yaml
mcp:
  description: |
    Tool for analyzing disk usage to help identify what's consuming space.
  run:
    shell: bash
  tools:
    - name: "disk_usage"
      description: "Check disk usage for a directory"
      params:
        directory:
          type: string
          description: "Directory to analyze"
          required: true
        max_depth:
          type: number
          description: "Maximum depth to analyze (1-3)"
          default: 2
      constraints:
        - "directory.startsWith('/')"               # Must be an absolute path
        - "!directory.contains('..')"               # Prevent directory traversal
        - "max_depth >= 1 && max_depth <= 3"        # Limit recursion depth
        - "directory.matches('^[\\w\\s./\\-_]+$')"  # Only allow safe path characters, prevent command injection
      run:
        command: |
          du -h --max-depth={{ .max_depth }} {{ .directory }} | sort -hr | head -20
      output:
        prefix: |
          Disk Usage Analysis (Top 20 largest directories):
```
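To see why these constraints matter, here is a minimal Python sketch of what they enforce before the command is ever run. This is an illustration only, not MCPShell's actual constraint engine (MCPShell evaluates the expressions itself); the `validate` function is a hypothetical stand-in:

```python
import re

# Mirrors the regex constraint from the config: only word chars,
# whitespace, dots, slashes, hyphens, and underscores are allowed.
SAFE_PATH = re.compile(r'^[\w\s./\-_]+$')

def validate(directory: str, max_depth: int = 2) -> bool:
    """Illustrative re-implementation of the four YAML constraints."""
    return (
        directory.startswith('/')             # must be an absolute path
        and '..' not in directory             # prevent directory traversal
        and 1 <= max_depth <= 3               # limit recursion depth
        and bool(SAFE_PATH.match(directory))  # reject shell metacharacters
    )

print(validate('/var/log'))        # True  - a normal absolute path
print(validate('/tmp/../etc'))     # False - traversal attempt
print(validate('/tmp; rm -rf /'))  # False - ';' fails the character whitelist
```

Note how the last case is exactly the command-injection attempt the regex constraint is there to block: without it, the `;` would be interpolated straight into the shell command.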
Take a look at the examples directory for more sophisticated and useful examples. Maybe you prefer to let the LLM know about your Kubernetes cluster with kubectl? Or let it run some AWS CLI commands?
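As a sketch of what such a tool could look like, the same schema applies; the tool name, parameter, and constraint below are illustrative, not taken from the examples directory:

```yaml
mcp:
  tools:
    - name: "k8s_pods"
      description: "List pods in a Kubernetes namespace"
      params:
        namespace:
          type: string
          description: "Namespace to inspect"
          required: true
      constraints:
        - "namespace.matches('^[a-z0-9-]+$')"  # valid namespace names only
      run:
        command: |
          kubectl get pods -n {{ .namespace }}
```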
Configure the MCP server in Cursor (or in any other LLM client with support for MCP). For example, for Cursor, create .cursor/mcp.json:
```jsonc
{
  // you need the "go" command available
  "mcpServers": {
    "mcp-cli-examples": {
      "command": "go",
      "args": [
        "run", "github.com/inercia/MCPShell@v0.1.5",
        "mcp", "--config", "/my/example.yaml",
        "--logfile", "/some/path/mcpshell/example.log"
      ]
    }
  }
}
```
See more details on how to configure Cursor or Visual Studio Code. Other LLM clients with support for MCP should be configured in a similar way.
Make sure your MCP client is refreshed (Cursor should recognize it automatically the first time, but any change in the config file will require a refresh).
Ask your LLM some questions it should be able to answer with the new tool. For example: "I'm running out of space on my hard disk. Could you help me find the problem?".
Take a look at all the commands in this document.
Configuration files use a YAML format defined here. See this directory for some examples.
MCPShell can also be run in agent mode, providing direct connectivity between Large Language Models (LLMs) and your command-line tools without requiring a separate MCP client. In this mode, MCPShell connects to an OpenAI-compatible API (including local LLMs like Ollama), makes your tools available to the model, executes requested tool operations, and manages the conversation flow. This enables the creation of specialized AI assistants that can autonomously perform system tasks using the tools you define in your configuration. The agent mode supports both interactive conversations and one-shot executions, and allows you to define system and user prompts directly in your configuration files.
For detailed information on using agent mode, see the Agent Mode documentation.
So you will probably think: "this AI has helped me find all those big files. What if I create another tool for removing files?". Don't do that!
Please read the Security Considerations document before using this software.
Contributions are welcome! Take a look at the development guide. Please open an issue or submit a pull request on GitHub.
This project is licensed under the MIT License - see the LICENSE file for details.