SFCore TH Dev: CumulusCI Integration for AI Agents
An implementation of a Model Context Protocol (MCP) server integrated with CumulusCI (CCI), giving AI agents Salesforce development capabilities.
This server enables AI agents to interact with CumulusCI commands without developers needing to remember complex CLI syntax.
Overview
This project demonstrates how to build an MCP server that enables AI agents to execute CumulusCI operations for Salesforce development workflows. It serves as a foundation for creating more comprehensive CCI integrations.
The implementation follows the best practices laid out by Anthropic for building MCP servers, allowing seamless integration with any MCP-compatible client.
Features
The server provides tools for common CCI operations, for example:
- `create_dev_scratch_org`: Create a new development scratch org using the CCI `dev_org` flow
See the Available Tools section below for the complete list.
Prerequisites
- Python 3.12+
- Access to a Salesforce org (Dev Hub for scratch orgs)
- Docker if running the MCP server as a container (recommended)
Environment Setup
The MCP server provides CCI installation checking and setup instructions. When you encounter a `cci: command not found` error, use the `check_cci_installation` tool, which will guide you through:
```bash
# Check if CCI is installed
cci version

# Install CCI if not present
pipx install cumulusci-plus-azure-devops

# Upgrade CCI if needed
pipx install cumulusci-plus-azure-devops --force
```
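As an illustration, the installation check could be implemented by shelling out to the `cci` binary. This is a hypothetical sketch, not the server's actual `check_cci_installation` implementation:

```python
import shutil
import subprocess

def check_cci_installation() -> str:
    """Return CCI status plus install/upgrade guidance.

    Hypothetical sketch of the check described above; the server's
    real check_cci_installation tool may behave differently.
    """
    if shutil.which("cci") is None:
        return (
            "CCI is not installed. Install it with:\n"
            "  pipx install cumulusci-plus-azure-devops"
        )
    version = subprocess.run(
        ["cci", "version"], capture_output=True, text=True
    ).stdout.strip()
    return (
        f"CCI is installed ({version}). Upgrade with:\n"
        "  pipx install cumulusci-plus-azure-devops --force"
    )
```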
Installation
Using uv
- Install uv if you don't have it:

  ```bash
  pip install uv
  ```

- Clone this repository:

  ```bash
  git clone <your-repo-url>
  cd sfcore-th-dev
  ```

- Install dependencies:

  ```bash
  uv pip install -e .
  ```

- Ensure CumulusCI is installed and configured:

  ```bash
  pip install cumulusci
  cci org connect <your-dev-hub>
  ```
Using Docker (Recommended)
- Build the Docker image:

  ```bash
  docker build -t ghcr.io/jorgesolebur/mcp-sfcore-th-dev:latest --build-arg PORT=8050 .
  ```

- Push to GitHub Container Registry:

  ```bash
  docker push ghcr.io/jorgesolebur/mcp-sfcore-th-dev:latest
  ```

  Note: You'll need to authenticate with GitHub Container Registry first:

  ```bash
  echo $GITHUB_TOKEN | docker login ghcr.io -u jorgesolebur --password-stdin
  ```
Configuration
The following environment variables can be configured in your .env file:
| Variable | Description | Default |
|---|---|---|
| `TRANSPORT` | Transport protocol (`sse` or `stdio`) | `sse` |
| `HOST` | Host to bind to when using SSE transport | `0.0.0.0` |
| `PORT` | Port to listen on when using SSE transport | `8050` |
The server relies on CumulusCI being properly configured on the system where it runs.
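For illustration, the server presumably resolves these variables with the defaults shown in the table; a minimal sketch:

```python
import os

# Read transport settings from the environment (.env can populate these);
# the fallback values mirror the defaults in the table above.
TRANSPORT = os.getenv("TRANSPORT", "sse")
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8050"))
```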
Running the Server
Using uv
SSE Transport
```bash
# Set TRANSPORT=sse in .env then:
uv run src/main.py
```
The MCP server will run as an API endpoint that you can connect to with the config shown below.
Stdio Transport
With stdio, the MCP client itself spins up the MCP server, so there is nothing to run at this point.
Using Docker
SSE Transport
```bash
docker run -d -p 8050:8050 ghcr.io/jorgesolebur/mcp-sfcore-th-dev:latest
```
The MCP server will run as an API endpoint within the container.
Stdio Transport
With stdio, the MCP client itself spins up the MCP server container, so there is nothing to run at this point.
Integration with MCP Clients
SSE Configuration
Once you have the server running with SSE transport, you can connect to it using this configuration:
```json
{
  "mcpServers": {
    "sfcore-th-dev": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}
```
Note for Windsurf users: Use `serverUrl` instead of `url` in your configuration:

```json
{
  "mcpServers": {
    "sfcore-th-dev": {
      "transport": "sse",
      "serverUrl": "http://localhost:8050/sse"
    }
  }
}
```
Note for n8n users: Use host.docker.internal instead of localhost since n8n has to reach outside of its own container to the host machine:
So the full URL in the MCP node would be: `http://host.docker.internal:8050/sse`
Make sure to update the port if you are using a value other than the default 8050.
Python with Stdio Configuration
Add this server to your MCP configuration for Claude Desktop, Windsurf, or any other MCP client:
```json
{
  "mcpServers": {
    "sfcore-th-dev": {
      "command": "your/path/to/sfcore-th-dev/.venv/Scripts/python.exe",
      "args": ["your/path/to/sfcore-th-dev/src/main.py"],
      "env": {
        "TRANSPORT": "stdio"
      }
    }
  }
}
```
Docker with Stdio Configuration
```json
{
  "mcpServers": {
    "sfcore-th-dev": {
      "command": "docker",
      "args": ["run", "--rm", "-i",
               "-e", "TRANSPORT",
               "ghcr.io/jorgesolebur/mcp-sfcore-th-dev:latest"],
      "env": {
        "TRANSPORT": "stdio"
      }
    }
  }
}
```
Extending the Server
This template provides a foundation for building more comprehensive CCI integrations. To add new CCI tools:
- Create a new `@mcp.tool()` method
- Use the `get_cci_command_instructions()` utility function for consistent behavior

Example:

```python
@mcp.tool()
async def deploy_to_org(org_name: str = "dev") -> str:
    command = f"cci flow run deploy --org {org_name}"
    purpose = f"Deploy to org '{org_name}'"
    return get_cci_command_instructions(command, purpose)
```
This ensures all tools have consistent command execution and error handling.
Available Tools
Environment Setup
- `check_cci_installation`: Checks if CumulusCI is installed and provides installation/upgrade instructions
Scratch Org Management
- `create_dev_scratch_org`: Creates a development scratch org using `cci flow run dev_org --org <org_name>`
- `create_feature_scratch_org`: Creates a feature/QA scratch org using `cci flow run ci_feature_2gp --org <org_name>`
- `create_beta_scratch_org`: Creates a beta/regression scratch org using `cci flow run regression_org --org <org_name>`
- `list_orgs`: Lists all connected CumulusCI orgs using `cci org list`
- `open_org`: Opens the specified org in a browser using `cci org browser --org <org_name>`
Development Operations
- `run_tests`: Runs Apex tests in a specified org using `cci task run run_all_tests_locally --org <org_name>`
- `retrieve_changes`: Retrieves metadata changes from the specified org using `cci task run retrieve_changes --org <org_name>`
- `deploy`: Deploys local metadata to the specified org using `cci task run deploy --org <org_name>`
Generic CCI Task Handler
- `run_generic_cci_task`: Handles any CCI task that doesn't have a dedicated tool, following a 3-step approach:
  1. Checks if the task exists using `cci task list`
  2. Gets task information and parameters using `cci task info` or `cci task run --help`
  3. Runs the task with appropriate parameters after collecting required values from the user
All CCI tools provide setup guidance if needed and follow consistent error handling patterns.
MCP Resources
The server provides framework-specific documentation through MCP resources. These resources give agents contextual information about development practices and standards.
Available Resources
Access resources using the URI pattern: `framework://<framework-name>`
- `framework://salesforce-triggers`: Comprehensive guidelines for developing Apex triggers
  - Trigger framework architecture
  - Handler pattern implementation
  - Best practices and anti-patterns
  - Testing strategies
  - Performance considerations
- `framework://salesforce-logging`: Logging standards and best practices
  - Custom logger implementation
  - Log levels and usage guidelines
  - Performance considerations
  - Production logging strategies
  - Security and privacy considerations
- `framework://salesforce-cache-manager`: Platform Cache management framework for performance optimization
  - Three cache types: Organization, Session, and Transaction
  - Declarative configuration with custom metadata
  - Usage examples and best practices
  - Performance monitoring and debugging
  - Security considerations for cached data
Using Resources
Agents can request framework documentation when needed:
```json
{
  "method": "resources/read",
  "params": {
    "uri": "framework://salesforce-triggers"
  }
}
```
This provides on-demand access to framework-specific guidance without cluttering tool descriptions.
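To illustrate the URI-to-content mapping, here is a toy resource registry in plain Python. The decorator and registry are hypothetical; the real server registers resources through its MCP framework:

```python
# Toy registry mapping framework:// URIs to documentation text.
RESOURCES: dict[str, str] = {}

def resource(uri: str):
    """Register a function's return value under a framework:// URI."""
    def decorator(fn):
        RESOURCES[uri] = fn()
        return fn
    return decorator

@resource("framework://salesforce-triggers")
def salesforce_triggers() -> str:
    return "Apex trigger guidelines: architecture, handler pattern, testing."

def read_resource(uri: str) -> str:
    """Serve a lookup like the resources/read request shown above."""
    return RESOURCES[uri]
```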
Future Enhancements
Consider adding:
- Additional framework resources (LWC, Aura, Flows)
- More CCI operation tools
- Integration with CI/CD pipelines
- Advanced testing workflows