Interact with the Prefect API for workflow orchestration and management.
This repository provides a Prefect MCP server configuration using the prefect-mcp-server package with a reliable running mechanism via uvx. The configuration is tailored for use with the Cursor IDE.
Create and activate your virtual environment, then install Prefect MCP Server:

```bash
uv venv --python 3.12 && source .venv/bin/activate
uv pip install -U prefect-mcp-server
```
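As a quick sanity check (optional, and assuming the virtual environment is still active), you can confirm that Prefect and the MCP server package are available:

```bash
# Show the Prefect version available in the active environment
prefect version

# Confirm that prefect-mcp-server is installed and report its version
uv pip show prefect-mcp-server
```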
The server is configured via the .cursor/mcp.json file. The updated configuration is as follows:
```json
{
  "mcpServers": {
    "prefect": {
      "command": "uvx",
      "args": [
        "prefect-mcp-server"
      ],
      "env": {}
    }
  }
}
```
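To verify that uvx can resolve and launch the package outside of Cursor, you can run the same command manually. This is an optional check; because MCP servers typically communicate over stdio, the process will simply wait for a client until you interrupt it:

```bash
# Launch the server exactly as Cursor would; press Ctrl+C to stop it
uvx prefect-mcp-server
```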
This configuration ensures that the server uses the uvx command with the exact package version installed via uv pip install. This approach provides enhanced reliability and consistency in your development environment.
Set the following environment variables to configure your Prefect environment. You can create a file named .env in the project root with entries such as:

```
PREFECT_API_URL=http://localhost:4200/api
```

Additionally, if needed, set other environment variables like PREFECT_API_KEY to authenticate with your Prefect server or Prefect Cloud.
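Alternatively, since the Cursor configuration above includes an empty env block, the same variables can be supplied directly in .cursor/mcp.json so they are set whenever Cursor launches the server. The sketch below is illustrative; the PREFECT_API_KEY value is a placeholder, not a real key:

```json
{
  "mcpServers": {
    "prefect": {
      "command": "uvx",
      "args": ["prefect-mcp-server"],
      "env": {
        "PREFECT_API_URL": "http://localhost:4200/api",
        "PREFECT_API_KEY": "<your-prefect-api-key>"
      }
    }
  }
}
```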
To start the server, you can run the following command:

```bash
uv run <script>
```

Alternatively, if you are using the Cursor IDE with its configuration, the server will be automatically invoked with the command specified in .cursor/mcp.json.
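Whichever way you start it, it can help to confirm that the Prefect API configured in PREFECT_API_URL is reachable before expecting the MCP tools to work. The check below assumes a local Prefect server exposing the standard health endpoint; adjust the URL for your own deployment:

```bash
# Query the Prefect API health endpoint; a healthy server typically responds with `true`
curl http://localhost:4200/api/health
```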
Detailed documentation on the Prefect MCP Server functionality and usage is available in the docs/prefect_mcp_documentation.md file.
This repository includes Cursor Rules for working with the Prefect MCP Server, located in the .cursor/rules/ directory. These rules provide contextual help and guidance when working with Prefect MCP in the Cursor IDE.
Use uv run for running scripts within the configured environment, as recommended by Cursor. Happy coding!