A Model Context Protocol (MCP) server that retrieves relevant code snippets and documents to help generate PyMilvus code. It requires a running Milvus instance.
Before using this MCP server, ensure you have:

- A running Milvus instance (local or remote)
- An OpenAI API key exported as `OPENAI_API_KEY` (used for document processing and embedding generation)
- `uv` installed to run the server scripts
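If you are unsure whether your Milvus URI is well-formed before pointing the server at it, you can parse it first. A minimal sketch using only the standard library (the `parse_milvus_uri` helper is ours for illustration, not part of this project):

```python
from urllib.parse import urlparse

def parse_milvus_uri(uri: str, default_port: int = 19530):
    """Split a Milvus URI such as http://localhost:19530 into (host, port)."""
    parsed = urlparse(uri)
    if parsed.hostname is None:
        raise ValueError(f"Invalid Milvus URI: {uri!r}")
    return parsed.hostname, parsed.port or default_port

host, port = parse_milvus_uri("http://localhost:19530")
```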
The recommended way to use this MCP server is through FastMCP, which provides better performance and easier configuration.
When running the server for the first time, use the main FastMCP server, which will automatically update the document database:

```bash
uv run src/mcp_pymilvus_code_generate_helper/fastmcp_server.py
```
This will start the server and update the document database in your Milvus instance before serving requests.
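Conceptually, "updating the document database" means splitting the Milvus/PyMilvus documentation into chunks, embedding each chunk (which is why `OPENAI_API_KEY` is needed), and storing the vectors in Milvus for retrieval. The chunking stage can be sketched roughly as below; the function and its parameters are illustrative, not this project's actual implementation:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks, ready to be embedded."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

doc = "PyMilvus is the Python SDK for Milvus. " * 40
pieces = chunk_text(doc)
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from either side.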
```bash
# Connect to a remote Milvus server
uv run src/mcp_pymilvus_code_generate_helper/fastmcp_server.py --milvus_uri http://your-server:19530 --milvus_token your_token

# Change the server host and port
uv run src/mcp_pymilvus_code_generate_helper/fastmcp_server.py --host 0.0.0.0 --port 8080

# Use a different transport (default is http)
uv run src/mcp_pymilvus_code_generate_helper/fastmcp_server.py --transport sse
```
After the initial setup, you can use the lightweight FastMCP server for faster startup:
```bash
uv run examples/fastmcp_server.py
```
This lightweight version skips the document-database update step, so it starts faster.
```bash
# Custom configuration for the lightweight server
uv run examples/fastmcp_server.py --milvus_uri http://your-server:19530 --host 0.0.0.0 --port 8080 --transport http
```
In Cursor, go to **Settings** > **MCP** and click the **+ Add New Global MCP Server** button, then add:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```
Or, without the `/mcp` path:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "url": "http://localhost:8000"
    }
  }
}
```
Or launch the server over STDIO:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "command": "/PATH/TO/uv",
      "args": [
        "--directory",
        "/path/to/mcp-pymilvus-code-generate-helper",
        "run",
        "examples/fastmcp_server.py",
        "--transport",
        "stdio",
        "--milvus_uri",
        "http://localhost:19530"
      ],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY"
      }
    }
  }
}
```
For Claude Desktop, edit the configuration file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Then add:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```
Or, for STDIO transport:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "command": "/PATH/TO/uv",
      "args": [
        "--directory",
        "/path/to/mcp-pymilvus-code-generate-helper",
        "run",
        "examples/fastmcp_server.py",
        "--transport",
        "stdio",
        "--milvus_uri",
        "http://localhost:19530"
      ],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY"
      }
    }
  }
}
```
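A malformed config file is a common source of silent failures, so it can be worth validating the JSON before restarting the client. A quick check (the key names mirror the example above):

```python
import json

config_text = """
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
"""

# json.loads raises json.JSONDecodeError on any syntax error,
# e.g. a trailing comma or missing quote
config = json.loads(config_text)
server = config["mcpServers"]["pymilvus-code-generate-helper"]
```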
⚠️ Note: Remember to set the `OPENAI_API_KEY` environment variable when using STDIO transport.
The server provides three powerful tools for Milvus code generation and translation:

1. `milvus_code_generator`: Generate or provide sample PyMilvus/Milvus code based on natural language input.
   - `query`: Your natural language request for code generation
2. `orm_client_code_convertor`: Convert between ORM and PyMilvus client code formats.
   - `query`: List of Milvus API names to convert (e.g., `["create_collection", "insert"]`)
3. `milvus_code_translator`: Translate Milvus code between different programming languages.
   - `query`: List of Milvus API names in escaped double-quote format (e.g., `[\"create_collection\", \"insert\", \"search\"]`)
   - `source_language`: Source programming language (python, java, go, csharp, node, restful)
   - `target_language`: Target programming language (python, java, go, csharp, node, restful)

⚠️ Important: You don't need to specify tool names or parameters manually. Just describe your requirements naturally, and the MCP system will automatically select the appropriate tool and prepare the necessary parameters.
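For reference, an MCP client invokes a tool by sending a JSON-RPC `tools/call` request. The snippet below sketches what the wire payload for `milvus_code_translator` might look like; the argument values are illustrative, and your editor builds this for you automatically:

```python
import json

# Shape of an MCP tools/call request (argument values are examples only)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "milvus_code_translator",
        "arguments": {
            "query": ["create_collection", "insert", "search"],
            "source_language": "python",
            "target_language": "java",
        },
    },
}
payload = json.dumps(request)
```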
For backward compatibility, the server also supports SSE and STDIO transport modes:
```bash
# Start the SSE server
uv run src/mcp_pymilvus_code_generate_helper/sse_server.py --milvus_uri http://localhost:19530
```
Cursor configuration for SSE:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "url": "http://localhost:23333/milvus-code-helper/sse"
    }
  }
}
```
```bash
# Start the STDIO server
uv run src/mcp_pymilvus_code_generate_helper/stdio_server.py --milvus_uri http://localhost:19530
```
Cursor configuration for STDIO:

```json
{
  "mcpServers": {
    "pymilvus-code-generate-helper": {
      "command": "/PATH/TO/uv",
      "args": [
        "--directory",
        "/path/to/mcp-pymilvus-code-generate-helper",
        "run",
        "src/mcp_pymilvus_code_generate_helper/stdio_server.py",
        "--milvus_uri",
        "http://localhost:19530"
      ],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY"
      }
    }
  }
}
```
You can also run the server using Docker:
```bash
docker build -t milvus-code-helper .
```

```bash
# First-time run, with document update
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=your_openai_key \
  -e MILVUS_URI=http://your-milvus-host:19530 \
  -e MILVUS_TOKEN=your_milvus_token \
  milvus-code-helper

# Lightweight mode for subsequent runs
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=your_openai_key \
  -e MILVUS_URI=http://your-milvus-host:19530 \
  -e MILVUS_TOKEN=your_milvus_token \
  milvus-code-helper examples/fastmcp_server.py
```
| Parameter | Description | Default |
|---|---|---|
| `--milvus_uri` | Milvus server URI | `http://localhost:19530` |
| `--milvus_token` | Milvus authentication token | `""` |
| `--db_name` | Milvus database name | `default` |
| `--host` | Server host address | `0.0.0.0` |
| `--port` | Server port | `8000` |
| `--path` | HTTP endpoint path | `/mcp` |
| `--transport` | Transport protocol | `http` |
Transport options:

- `http`: RESTful HTTP transport (recommended)
- `sse`: Server-Sent Events transport
- `stdio`: Standard input/output transport

Environment variables:

- `OPENAI_API_KEY`: Required for document processing and embedding generation
- `MILVUS_URI`: Alternative way to specify the Milvus server URI
- `MILVUS_TOKEN`: Alternative way to specify the Milvus authentication token

If the default port is already in use, choose another one with the `--port` parameter.

Enable debug logging:

```bash
PYTHONPATH=src python -m logging --level DEBUG src/mcp_pymilvus_code_generate_helper/fastmcp_server.py
```
Contributions are welcome! If you have ideas for improving the retrieval results or adding new features, please submit a pull request or open an issue.
This project is licensed under the MIT License.