Laravel Forge MCP Server
Manage Laravel Forge servers and sites using the Forge API.
A minimal build that solves just one problem.
To build the MCP server, run:
npm install && npm run build
This compiles the TypeScript files into a build directory and prints the JSON config you can copy and paste into your MCP client (Claude Desktop, Windsurf, Cursor, etc.).
If all goes well, this will produce output similar to this:

{
  "mcpServers": {
    "laravel-forge-mcp-server": {
      "command": "npx",
      "args": [
        "path/to/your/laravel-forge-mcp-server/build/bin.js"
      ],
      "env": {
        "FORGE_API_KEY": "",
        "FORGE_SERVER_ID": "",
        "FORGE_SITE_ID": ""
      }
    }
  }
}
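If you would rather not paste your credentials into the config file by hand, a small script can generate the same JSON with the values pulled from environment variables. A hedged sketch in Python (the build path is a placeholder from the config above; point it at your actual build/bin.js):

```python
import json
import os

# Sketch: generate the MCP client config shown above, reading the Forge
# credentials from environment variables instead of hard-coding them.
# Unset variables fall back to empty strings, matching the template.
config = {
    "mcpServers": {
        "laravel-forge-mcp-server": {
            "command": "npx",
            "args": ["path/to/your/laravel-forge-mcp-server/build/bin.js"],
            "env": {
                "FORGE_API_KEY": os.environ.get("FORGE_API_KEY", ""),
                "FORGE_SERVER_ID": os.environ.get("FORGE_SERVER_ID", ""),
                "FORGE_SITE_ID": os.environ.get("FORGE_SITE_ID", ""),
            },
        }
    }
}

# Print the block ready to paste into your MCP client's config file.
print(json.dumps(config, indent=2))
```

Run it with FORGE_API_KEY (and optionally FORGE_SERVER_ID / FORGE_SITE_ID) exported, then paste the output into your client's MCP configuration.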
Register an account at Laravel Forge and generate an API key to use as the FORGE_API_KEY value above.
Sharing
If you have found value in this project, please share it on social media. You can tag me @jordandalton on X, or jdcarnivore on Reddit.