# MCPOmni Connect

A universal command-line interface (CLI) gateway to the MCP ecosystem, integrating multiple MCP servers, AI models, and transport protocols.


## 🎬 See It In Action

```python
import asyncio
from omnicoreagent import OmniCoreAgent, MemoryRouter, ToolRegistry

# Create tools in seconds
tools = ToolRegistry()

@tools.register_tool("get_weather")
def get_weather(city: str) -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp": "22°C", "condition": "Sunny"}

# Build a production-ready agent
agent = OmniCoreAgent(
    name="assistant",
    system_instruction="You are a helpful assistant with access to weather data.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    local_tools=tools,
    memory_router=MemoryRouter("redis"),  # Start with Redis
    agent_config={
        "context_management": {"enabled": True},  # Auto-manage long conversations
        "guardrail_config": {"strict_mode": True},  # Block prompt injections
    }
)

async def main():
    # Run the agent
    result = await agent.run("What's the weather in Tokyo?")
    print(result["response"])

    # Switch to MongoDB at runtime, no restart needed
    await agent.switch_memory_store("mongodb")

    # Keep running with a different backend
    result = await agent.run("How about Paris?")
    print(result["response"])

asyncio.run(main())
```

**What just happened?**

- ✅ Registered a custom tool with type hints
- ✅ Built an agent with memory persistence
- ✅ Enabled automatic context management
- ✅ Switched from Redis to MongoDB while running

## ⚡ Quick Start

```bash
pip install omnicoreagent
echo "LLM_API_KEY=your_api_key" > .env
```

```python
import asyncio

from omnicoreagent import OmniCoreAgent

agent = OmniCoreAgent(
    name="my_agent",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"}
)

async def main():
    result = await agent.run("Hello!")
    print(result["response"])

asyncio.run(main())
```

That's it. You have an AI agent with session management, memory, and error handling.
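Session memory means follow-up calls can build on earlier turns with no extra wiring. A minimal sketch, reusing only the `agent` and `agent.run(...)` API from the Quick Start above:

```python
import asyncio

async def chat():
    # First turn: the fact lands in session memory.
    first = await agent.run("My name is Ada. Please remember that.")
    print(first["response"])

    # Second turn: answered from the session memory of the first turn.
    second = await agent.run("What's my name?")
    print(second["response"])

asyncio.run(chat())
```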

📚 Want to learn more? Check out the Cookbook: progressive examples from "Hello World" to production deployments.


## 🎯 What Makes OmniCoreAgent Different?

| Feature | What It Means For You |
| --- | --- |
| Runtime Backend Switching | Switch Redis ↔ MongoDB ↔ PostgreSQL without restarting |
| Cloud Workspace Storage | Agent files persist in AWS S3 or Cloudflare R2 ⚡ NEW |
| Context Engineering | Session memory + agent loop context + tool offloading = no token exhaustion |
| Tool Response Offloading | Large tool outputs saved to files, 98% token savings |
| Built-in Guardrails | Prompt injection protection out of the box (config sketch below this table) |
| MCP Native | Connect to any MCP server (stdio, SSE, HTTP with OAuth) |
| Background Agents | Schedule autonomous tasks that run on intervals |
| Workflow Orchestration | Sequential, Parallel, and Router agents for complex tasks |
| Production Observability | Metrics, tracing, and event streaming built in |
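Several of these differentiators are plain `agent_config` switches rather than separate subsystems. A minimal sketch isolating the guardrail and context-management keys already used in the opening example (the other keys are covered in the Configuration section below):

```python
from omnicoreagent import OmniCoreAgent

agent = OmniCoreAgent(
    name="guarded_assistant",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    agent_config={
        # Auto-manage the agent loop context on long conversations.
        "context_management": {"enabled": True},
        # Block prompt-injection attempts before they reach the model.
        "guardrail_config": {"strict_mode": True},
    },
)
```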

## 🎯 Core Features

📖 Full documentation: docs-omnicoreagent.omnirexfloralabs.com/docs

| # | Feature | Description | Docs |
| --- | --- | --- | --- |
| 1 | OmniCoreAgent | The heart of the framework: a production agent with all features | Overview → |
| 2 | Multi-Tier Memory | 5 backends (Redis, MongoDB, PostgreSQL, SQLite, in-memory) with runtime switching | Memory → |
| 3 | Context Engineering | Dual-layer system: agent loop context management + tool response offloading | Context → |
| 4 | Event System | Real-time event streaming with runtime switching | Events → |
| 5 | MCP Client | Connect to any MCP server (stdio, streamable_http, SSE) with OAuth; see the sketch after this table | MCP → |
| 6 | DeepAgent | Multi-agent orchestration with automatic task decomposition | DeepAgent → |
| 7 | Local Tools | Register any Python function as an AI tool via ToolRegistry | Local Tools → |
| 8 | Community Tools | 100+ pre-built tools (search, AI, comms, databases, DevOps, finance) | Community Tools → |
| 9 | Agent Skills | Polyglot packaged capabilities (Python, Bash, Node.js) | Skills → |
| 10 | Workspace Memory | Persistent file storage with S3/R2/Local backends | Workspace → |
| 11 | Sub-Agents | Delegate tasks to specialized agents | Sub-Agents → |
| 12 | Background Agents | Schedule autonomous tasks on intervals | Background → |
| 13 | Workflows | Sequential, Parallel, and Router agent orchestration | Workflows → |
| 14 | BM25 Tool Retrieval | Auto-discover relevant tools from 1000+ using BM25 search | Advanced Tools → |
| 15 | Guardrails | Prompt injection protection with configurable sensitivity | Guardrails → |
| 16 | Observability | Per-request metrics + Opik distributed tracing | Observability → |
| 17 | Universal Models | 9 providers via LiteLLM (OpenAI, Anthropic, Gemini, Groq, Ollama, etc.) | Models → |
| 18 | OmniServe | Turn any agent into a production REST/SSE API with one command | OmniServe → |
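For the MCP Client row above, MCP servers are conventionally described by a transport plus a launch command (stdio) or a URL (streamable_http/SSE). The `mcp_config` keyword and the exact dict shape below are illustrative assumptions, not the confirmed API; see the MCP docs linked in the table for the real schema:

```python
from omnicoreagent import OmniCoreAgent

# Hypothetical config shape: one stdio server and one streamable_http server.
mcp_config = {
    "filesystem": {
        "transport": "stdio",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    },
    "remote_tools": {
        "transport": "streamable_http",
        "url": "https://example.com/mcp",
    },
}

agent = OmniCoreAgent(
    name="mcp_agent",
    system_instruction="You can use tools from connected MCP servers.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    mcp_config=mcp_config,  # assumed kwarg; check the MCP docs for the actual name
)
```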

## 📚 Examples & Cookbook

All examples are in the Cookbook, organized by use case with progressive learning paths.

| Category | What You'll Build | Location |
| --- | --- | --- |
| Getting Started | Your first agent, tools, memory, events | cookbook/getting_started |
| Workflows | Sequential, Parallel, Router agents | cookbook/workflows |
| Background Agents | Scheduled autonomous tasks | cookbook/background_agents |
| Production | Metrics, guardrails, observability | cookbook/production |
| 🏆 Showcase | Full production applications | cookbook/showcase |

๐Ÿ† Showcase: Full Production Applications

| Application | Description | Features |
| --- | --- | --- |
| OmniAudit | Healthcare Claims Audit System | Multi-agent pipeline, ERISA compliance |
| DevOps Copilot | AI-Powered DevOps Automation | Docker, Prometheus, Grafana |
| Deep Code Agent | Code Analysis with Sandbox | Sandbox execution, session management |

โš™๏ธ Configuration

### Environment Variables

```bash
# Required
LLM_API_KEY=your_api_key

# Optional: Memory backends
REDIS_URL=redis://localhost:6379/0
DATABASE_URL=postgresql://user:pass@localhost:5432/db
MONGODB_URI=mongodb://localhost:27017/omnicoreagent

# Optional: Observability
OPIK_API_KEY=your_opik_key
OPIK_WORKSPACE=your_workspace
```
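If your entry point doesn't load `.env` on its own, the standard `python-dotenv` package (a separate install, not part of omnicoreagent) can populate these variables before any agent is constructed:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read .env from the working directory into os.environ
assert os.getenv("LLM_API_KEY"), "LLM_API_KEY must be set before creating an agent"
```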

### Agent Configuration

```python
agent_config = {
    "max_steps": 15,                    # Max reasoning steps
    "tool_call_timeout": 30,            # Tool timeout (seconds)
    "request_limit": 0,                 # 0 = unlimited
    "total_tokens_limit": 0,            # 0 = unlimited
    "memory_config": {"mode": "sliding_window", "value": 10000},
    "enable_advanced_tool_use": True,   # BM25 tool retrieval
    "enable_agent_skills": True,        # Specialized packaged skills
    "memory_tool_backend": "local"      # Persistent working memory
}
```
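As in the opening example, this dict goes straight into the constructor's `agent_config` keyword:

```python
from omnicoreagent import OmniCoreAgent

agent = OmniCoreAgent(
    name="configured_agent",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    agent_config=agent_config,  # the dict defined above
)
```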

📖 Full configuration reference: Configuration Guide →


## 🧪 Testing & Development

```bash
# Clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git
cd omnicoreagent

# Setup
uv venv && source .venv/bin/activate
uv sync --dev

# Test
pytest tests/ -v
pytest tests/ --cov=src --cov-report=term-missing
```

๐Ÿ” Troubleshooting

| Error | Fix |
| --- | --- |
| Invalid API key | Check .env: `LLM_API_KEY=your_key` |
| ModuleNotFoundError | `pip install omnicoreagent` |
| Redis connection failed | Start Redis or use `MemoryRouter("in_memory")`; see the snippet below |
| MCP connection refused | Ensure the MCP server is running |
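For the Redis failure case in particular, falling back to the in-memory backend is a one-argument change using the same `MemoryRouter` shown earlier; state then only lives for the lifetime of the process:

```python
from omnicoreagent import MemoryRouter, OmniCoreAgent

agent = OmniCoreAgent(
    name="assistant",
    system_instruction="You are a helpful assistant.",
    model_config={"provider": "openai", "model": "gpt-4o"},
    memory_router=MemoryRouter("in_memory"),  # no external service required
)
```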

📖 More troubleshooting: Basic Usage Guide →


๐Ÿ“ Changelog

See the full Changelog โ†’ for version history.


๐Ÿค Contributing

# Fork & clone
git clone https://github.com/omnirexflora-labs/omnicoreagent.git

# Setup
uv venv && source .venv/bin/activate
uv sync --dev
pre-commit install

# Submit PR

See CONTRIBUTING.md for guidelines.


## 📄 License

MIT License. See LICENSE.


๐Ÿ‘จโ€๐Ÿ’ป Author & Credits

Created by Abiola Adeshina

### 🌟 The OmniRexFlora Ecosystem

| Project | Description |
| --- | --- |
| 🧠 OmniMemory | Self-evolving memory for autonomous agents |
| 🤖 OmniCoreAgent | Production-ready AI agent framework (this project) |
| ⚡ OmniDaemon | Event-driven runtime engine for AI agents |

๐Ÿ™ Acknowledgments

Built on: LiteLLM, FastAPI, Redis, Opik, Pydantic, APScheduler

