Network-AI
TypeScript/Node.js multi-agent orchestrator — shared state, guardrails, budgets, and cross-framework coordination
If Network-AI is useful to you, consider giving it a star ⭐ — it helps others find the project.
Network-AI is a TypeScript/Node.js multi-agent orchestrator that adds coordination, guardrails, and governance to any AI agent stack.
- **Shared blackboard with locking** — atomic `propose → validate → commit` prevents race conditions and split-brain failures across parallel agents
- **Guardrails and budgets** — FSM governance, per-agent token ceilings, HMAC / Ed25519 audit trails, and permission gating
- **28 adapters** — LangChain (+ streaming), AutoGen, CrewAI, OpenAI Assistants, LlamaIndex, Semantic Kernel, Haystack, DSPy, Agno, MCP, Custom (+ streaming), OpenClaw, A2A, Codex, MiniMax, NemoClaw, APS, Copilot, LangGraph, Anthropic Computer Use, OpenAI Agents SDK, Vertex AI, Pydantic AI, Browser Agent, Hermes (NousResearch Hermes / any OpenAI-compatible endpoint), and Orchestrator (hierarchical multi-orchestrator) — no glue code, no lock-in
- **Persistent project memory (Layer 3)** — `context_manager.py` injects decisions, goals, stack, milestones, and banned patterns into every system prompt so agents always have full project context
- **v5.0 modules** — Agent VCR (record/replay), comparison runner, coverage reporter, goal DSL, approval inbox, job queue, gRPC/HTTP transport, playground REPL, adapter test harness, and more
The silent failure mode in multi-agent systems: parallel agents writing to the same key use last-write-wins by default — one agent's result silently overwrites another's mid-flight. The outcome is split-brain state: double-spends, contradictory decisions, corrupted context, no error thrown. Network-AI's `propose → validate → commit` mutex prevents this at the coordination layer, before any write reaches shared state.
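The mechanics are easy to see in a toy in-memory version (illustrative only — the real `LockedBlackboard` persists through a filesystem mutex and has a richer API; `ToyBoard` and its internals are invented here):

```typescript
// Toy in-memory propose → validate → commit board (illustrative only).
// The real LockedBlackboard persists to disk behind a filesystem mutex.
type Proposal = { key: string; value: unknown; agent: string; validated: boolean };

class ToyBoard {
  private state = new Map<string, unknown>();
  private proposals = new Map<string, Proposal>();
  private nextId = 0;

  // Stage a write; nothing touches shared state yet.
  propose(key: string, value: unknown, agent: string): string {
    const id = String(this.nextId++);
    this.proposals.set(id, { key, value, agent, validated: false });
    return id;
  }

  // Only the proposing agent may validate its own proposal.
  validate(id: string, agent: string): void {
    const p = this.proposals.get(id);
    if (!p || p.agent !== agent) throw new Error("invalid proposal or wrong agent");
    p.validated = true;
  }

  // Commit applies the staged write and retires the proposal.
  commit(id: string): void {
    const p = this.proposals.get(id);
    if (!p?.validated) throw new Error("commit before validate");
    this.state.set(p.key, p.value);
    this.proposals.delete(id);
  }

  read(key: string): unknown {
    return this.state.get(key);
  }
}

const board = new ToyBoard();
const id = board.propose("status", { ready: true }, "agent-1");
board.validate(id, "agent-1");
board.commit(id);
console.log(board.read("status")); // { ready: true }
```

The point of the staging step: a half-finished write is never visible to other agents, and a commit without validation fails loudly instead of silently clobbering state.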
Use Network-AI as:
- A TypeScript/Node.js library — `import { createSwarmOrchestrator } from 'network-ai'`
- An MCP server — `npx network-ai-server --port 3001`
- A CLI — `network-ai bb get status` / `network-ai audit tail`
- An OpenClaw skill — `clawhub install network-ai`
5-minute quickstart → | Architecture → | All adapters → | Benchmarks →
⚡ Try in 60 Seconds
```bash
npm install network-ai
```

```typescript
import { LockedBlackboard } from 'network-ai';

const board = new LockedBlackboard('.');
const id = board.propose('status', { ready: true }, 'agent-1');
board.validate(id, 'agent-1');
board.commit(id);

console.log(board.read('status')); // { ready: true }
```
Propose, validate, commit — an atomic write, no race conditions. That's it.
Want the full stress test? No API key, ~3 seconds:
```bash
npx ts-node examples/08-control-plane-stress-demo.ts
```
Runs priority preemption, AuthGuardian permission gating, FSM governance, and compliance monitoring — all without a single LLM call.
If it saves you from a race condition, a ⭐ helps others find it.
What's Included
| Capability | Details |
|---|---|
| ✅ Atomic shared state | propose → validate → commit with filesystem mutex — no split-brain |
| ✅ Token budgets | Hard per-agent ceilings with live spend tracking |
| ✅ Permission gating | HMAC / Ed25519-signed tokens, scoped per agent and resource |
| ✅ Append-only audit log | Every write, grant, and transition signed and logged |
| ✅ 28 framework adapters | LangChain, CrewAI, AutoGen, MCP, Codex, APS, and 22 more — zero lock-in |
| ✅ FSM governance | Hard-stop agents at state boundaries, timeout enforcement |
| ✅ Compliance monitoring | Real-time violation detection (tool abuse, turn-taking, timeouts) |
| ✅ QA orchestration | Scenario replay, feedback loops, regression tracking, contradiction detection |
| ✅ Deferred adapter init | Lazy-load adapters on first use — zero startup cost for unused frameworks |
| ✅ Hook middleware | beforeExecute / afterExecute / onError hooks on any adapter call |
| ✅ Flow control | Pause / resume / throttle writes on the blackboard |
| ✅ Skill composition | chain() / batch() / loop() / verify() meta-operations over agent calls |
| ✅ Semantic memory search | BYOE vector store with cosine similarity over blackboard data |
| ✅ Phase pipeline | Multi-phase workflows with human-in-the-loop approval gates |
| ✅ Confidence filtering | Multi-agent result scoring, threshold validation, and consensus aggregation |
| ✅ Matcher-based hooks | Glob patterns on agent/action/tool for targeted hook filtering |
| ✅ Fan-out / fan-in | Parallel agent spawning with pluggable aggregation strategies |
| ✅ Agent runtime sandbox | Sandboxed shell execution with policy enforcement and approval gates |
| ✅ Interactive console | TUI dashboard for live monitoring, agent control, blackboard/budget/FSM management |
| ✅ Pipe mode | JSON stdin/stdout protocol for programmatic AI-to-orchestrator control |
| ✅ Strategy agent | Meta-orchestrator with elastic agent pools, workload partitioning, and adaptive scaling |
| ✅ Goal decomposer | LLM-powered goal → task DAG → parallel execution with runTeam() one-liner |
| ✅ Goal DSL | YAML/JSON goal definitions with cycle detection and topological compilation |
| ✅ Agent VCR | Record and replay LLM/agent interactions for deterministic tests |
| ✅ Comparison runner | Side-by-side adapter comparison with scoring, timing, cost analysis |
| ✅ Coverage reporter | V8 coverage collection with threshold enforcement |
| ✅ Job queue | Persistent priority FIFO with retries, crash recovery, pluggable backends |
| ✅ Approval inbox | Web-accessible approval queue with REST API and SSE streaming |
| ✅ Transport layer | JSON-RPC 2.0 over HTTP with HMAC auth, TTL, node allowlisting |
| ✅ Playground REPL | Interactive sandbox with mock agents for rapid prototyping |
| ✅ Adapter test harness | Parameterized test battery for any adapter implementation |
| ✅ IAuthValidator | Interface to decouple authorization from concrete AuthGuardian |
| ✅ TypeScript native | ES2022 strict mode, zero native dependencies |
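One row above worth unpacking: semantic memory search ranks blackboard entries by cosine similarity against a query embedding. The metric itself is a few lines (illustrative sketch — not the library's internal code; embeddings come from your own BYOE vector store):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|) — the ranking metric behind
// semantic search over embedded blackboard entries (illustrative sketch).
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom; // zero vectors score 0, not NaN
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Scores range from -1 to 1; entries are returned in descending score order, typically above a caller-chosen threshold.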
Why teams use Network-AI
| Problem | How Network-AI solves it |
|---|---|
| Race conditions in parallel agents | Atomic blackboard: propose → validate → commit with file-system mutex |
| Agent overspend / runaway costs | FederatedBudget — hard per-agent token ceilings with live spend tracking |
| No visibility into what agents did | HMAC / Ed25519-signed audit log on every write, permission grant, and FSM transition |
| Locked into one AI framework | 28 adapters — mix LangChain + AutoGen + CrewAI + Codex + MiniMax + NemoClaw + APS + LangGraph + Vertex AI + Hermes + custom in one swarm |
| Agents escalating beyond their scope | AuthGuardian — scoped permission tokens required before sensitive operations |
| Agents lack project context between runs | ProjectContextManager (Layer 3) — inject decisions, goals, stack, and milestones into every system prompt |
| No regression tracking on agent output quality | QAOrchestratorAgent — scenario replay, feedback loops, cross-agent contradiction detection, historical trend tracking |
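The FederatedBudget row is the heart of cost control: a hard ceiling means a spend that would cross the limit is refused, not merely logged. A toy sketch of that semantics (illustrative only — `ToyBudget` and its methods are invented here, not the real FederatedBudget API):

```typescript
// Toy per-agent hard token ceiling (illustrative only — not the
// real FederatedBudget API; names and shapes are invented).
class ToyBudget {
  private spent = new Map<string, number>();
  constructor(private ceiling: number) {}

  // Record spend; refuse any spend that would cross the hard ceiling.
  spend(agent: string, tokens: number): boolean {
    const current = this.spent.get(agent) ?? 0;
    if (current + tokens > this.ceiling) return false; // hard stop, no overdraft
    this.spent.set(agent, current + tokens);
    return true;
  }

  remaining(agent: string): number {
    return this.ceiling - (this.spent.get(agent) ?? 0);
  }
}

const budget = new ToyBudget(1000);
budget.spend("writer", 900);
console.log(budget.spend("writer", 200)); // false — would exceed the ceiling
console.log(budget.remaining("writer")); // 100
```

The check-before-write ordering is the design choice that matters: an agent can never end a call over budget, only be denied the call that would have put it there.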
Architecture
```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#1e293b', 'primaryTextColor': '#e2e8f0', 'primaryBorderColor': '#475569', 'lineColor': '#94a3b8', 'clusterBkg': '#0f172a', 'clusterBorder': '#334155', 'edgeLabelBackground': '#1e293b', 'edgeLabelColor': '#cbd5e1', 'titleColor': '#e2e8f0'}}}%%
flowchart TD
    classDef app fill:#1e3a5f,stroke:#3b82f6,color:#bfdbfe,font-weight:bold
    classDef security fill:#451a03,stroke:#d97706,color:#fde68a
    classDef routing fill:#14532d,stroke:#16a34a,color:#bbf7d0
    classDef quality fill:#3b0764,stroke:#9333ea,color:#e9d5ff
    classDef blackboard fill:#0c4a6e,stroke:#0284c7,color:#bae6fd
    classDef adapters fill:#064e3b,stroke:#059669,color:#a7f3d0
    classDef audit fill:#1e293b,stroke:#475569,color:#94a3b8

    App["Your Application"]:::app
    App -->|"createSwarmOrchestrator()"| SO

    subgraph SO["SwarmOrchestrator"]
        AG["AuthGuardian\n(HMAC / Ed25519 permission tokens)"]:::security
        AR["AdapterRegistry\n(route tasks to frameworks)"]:::routing
        QG["QualityGateAgent\n(validate blackboard writes)"]:::quality
        QA["QAOrchestratorAgent\n(scenario replay, regression tracking)"]:::quality
        BB["SharedBlackboard\n(shared agent state)\npropose → validate → commit\nfilesystem mutex"]:::blackboard
        AD["Adapters — plug any framework in, swap freely\nLangChain · AutoGen · CrewAI · MCP · LlamaIndex · …"]:::adapters

        AG -->|"grant / deny"| AR
        AR -->|"tasks dispatched"| AD
        AD -->|"writes results"| BB
        QG -->|"validates"| BB
        QA -->|"orchestrates"| QG
    end

    SO --> AUDIT["data/audit_log.jsonl\n(HMAC / Ed25519-signed)"]:::audit
```
`FederatedBudget` is a standalone export — instantiate it separately and optionally wire it to a blackboard backend for cross-node token budget enforcement.
`ProjectContextManager` is a Layer-3 Python helper (`scripts/context_manager.py`) that injects persistent project goals, decisions, and milestones into agent system prompts — see ARCHITECTURE.md § Layer 3.
→ Full architecture, FSM journey, and handoff protocol
Install
```bash
npm install network-ai
```
No native dependencies, no build step. Adapters are dependency-free (BYOC — bring your own client).
Use as MCP Server
Start the server (no config required, zero dependencies):
```bash
npx network-ai-server --port 3001
# or from source:
npx ts-node bin/mcp-server.ts --port 3001
```
Then wire any MCP-compatible client to it.
Claude Desktop — add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "network-ai": {
      "url": "http://localhost:3001/sse"
    }
  }
}
```
Cursor / Cline / any SSE-based MCP client — point to the same URL:
```json
{
  "mcpServers": {
    "network-ai": {
      "url": "http://localhost:3001/sse"
    }
  }
}
```
Verify it's running:
```bash
curl http://localhost:3001/health   # { "status": "ok", "tools": <n>, "uptime": <ms> }
curl http://localhost:3001/tools    # full tool list
```
Tools exposed over MCP:
- `blackboard_read` / `blackboard_write` / `blackboard_list` / `blackboard_delete` / `blackboard_exists`
- `budget_status` / `budget_spend` / `budget_reset` — federated token tracking
- `token_create` / `token_validate` / `token_revoke` — HMAC / Ed25519-signed permission tokens
- `audit_query` — query the append-only audit log
- `config_get` / `config_set` — live orchestrator configuration
- `agent_list` / `agent_spawn` / `agent_stop` — agent lifecycle
- `fsm_transition` — write FSM state transitions to the blackboard
Each tool takes an agent_id parameter — all writes are identity-verified and namespace-scoped, exactly as they are in the TypeScript API.
Options: `--no-budget`, `--no-token`, `--no-control`, `--ceiling <n>`, `--board <name>`, `--audit-log <path>`.
CLI
Control Network-AI directly from the terminal — no server required. The CLI imports the same core engine used by the MCP server.
```bash
# One-off commands (no server needed)
npx ts-node bin/cli.ts bb set status running --agent cli
npx ts-node bin/cli.ts bb get status
npx ts-node bin/cli.ts bb snapshot

# After npm install -g network-ai:
network-ai bb list
network-ai audit tail                                # live-stream the audit log
network-ai auth token my-bot --resource blackboard
```
| Command group | What it controls |
|---|---|
| `network-ai bb` | Blackboard — get, set, delete, list, snapshot, propose, commit, abort |
| `network-ai auth` | AuthGuardian — issue tokens, revoke, check permissions |
| `network-ai budget` | FederatedBudget — spend status, set ceiling |
| `network-ai audit` | Audit log — print, live-tail, clear |
Global flags on every command: `--data <path>` (data directory, default `./data`) · `--json` (machine-readable output)
→ Full reference in QUICKSTART.md § CLI
Two agents, one shared state — without race conditions
The real differentiator is coordination. Here is what no single-framework solution handles: two agents writing to the same resource concurrently, atomically, without corrupting each other.
```typescript
import { LockedBlackboard, CustomAdapter, createSwarmOrchestrator } from 'network-ai';

const board = new LockedBlackboard('.');
const adapter = new CustomAdapter();

// Agent 1: writes its analysis result atomically
adapter.registerHandler('analyst', async () => {
  const id = board.propose('report:status', { phase: 'analysis', complete: true }, 'analyst');
  board.validate(id, 'analyst');
  board.commit(id); // file-system mutex — no race condition possible
  return { result: 'analysis written' };
});

// Agent 2: runs concurrently, writes to its own key safely
adapter.registerHandler('reviewer', async () => {
  const id = board.propose('report:review', { approved: true }, 'reviewer');
  board.validate(id, 'reviewer');
  board.commit(id);
  const analysis = board.read('report:status');
  return { result: `reviewed phase=${analysis?.phase}` };
});

createSwarmOrchestrator({ adapters: [{ adapter }] });

// Both fire concurrently — the mutex guarantees no write is ever lost
await Promise.all([
  adapter.executeAgent('analyst', { action: 'run', params: {} }, { agentId: 'analyst' }),
  adapter.executeAgent('reviewer', { action: 'run', params: {} }, { agentId: 'reviewer' }),
]);

console.log(board.read('report:status')); // { phase: 'analysis', complete: true }
console.log(board.read('report:review')); // { approved: true }
```
Add budgets, permissions, and cross-framework agents with the same pattern. → QUICKSTART.md
Demo — Control-Plane Stress Test (no API key)
Runs in ~3 seconds. Proves the coordination primitives without any LLM calls.
```bash
npm run demo -- --08
```
What it shows: atomic blackboard locking, priority preemption (priority-3 wins over priority-0 on same key), AuthGuardian permission gate (blocked → justified → granted with token), FSM hard-stop at 700 ms, live compliance violation capture (TOOL_ABUSE, TURN_TAKING, RESPONSE_TIMEOUT, JOURNEY_TIMEOUT), and FederatedBudget tracking — all without a single API call.
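The preemption rule the demo exercises — priority 3 beats priority 0 on the same key — reduces to a small conflict resolver. A toy sketch (the resolver below and its tie-break policy are illustrative assumptions, not the shipped implementation):

```typescript
// Toy conflict resolver: when two proposals target the same key, the
// higher-priority one wins; ties keep the earlier write (assumed policy).
type Write = { key: string; value: unknown; priority: number };

function resolve(writes: Write[]): Map<string, unknown> {
  const winners = new Map<string, Write>();
  for (const w of writes) {
    const cur = winners.get(w.key);
    if (!cur || w.priority > cur.priority) winners.set(w.key, w);
  }
  return new Map([...winners].map(([k, w]) => [k, w.value]));
}

const out = resolve([
  { key: "status", value: "background", priority: 0 },
  { key: "status", value: "urgent", priority: 3 }, // preempts the priority-0 write
]);
console.log(out.get("status")); // "urgent"
```

In the real system this decision happens inside the propose → validate → commit cycle, so the losing proposal is rejected before it ever reaches shared state.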
8-agent AI pipeline (requires OPENAI_API_KEY — builds a Payment Processing Service end-to-end):
```bash
npm run demo -- --07
```
NemoClaw sandbox swarm (no API key) — 3 agents in isolated NVIDIA NemoClaw sandboxes with deny-by-default network policies:
```bash
npx ts-node examples/10-nemoclaw-sandbox-swarm.ts
```
Adapter System
28 adapters, zero adapter dependencies. You bring your own SDK objects.
| Adapter | Framework / Protocol | Register method |
|---|---|---|
| `CustomAdapter` | Any function or HTTP endpoint | `registerHandler(name, fn)` |
| `LangChainAdapter` | LangChain | `registerAgent(name, runnable)` |
| `AutoGenAdapter` | AutoGen / AG2 | `registerAgent(name, agent)` |
| `CrewAIAdapter` | CrewAI | `registerAgent` or `registerCrew` |
| `MCPAdapter` | Model Context Protocol | `registerTool(name, handler)` |
| `LlamaIndexAdapter` | LlamaIndex | `registerQueryEngine()`, `registerChatEngine()` |
| `SemanticKernelAdapter` | Microsoft Semantic Kernel | `registerKernel()`, `registerFunction()` |
| `OpenAIAssistantsAdapter` | OpenAI Assistants | `registerAssistant(name, config)` |
| `HaystackAdapter` | deepset Haystack | `registerPipeline()`, `registerAgent()` |
| `DSPyAdapter` | Stanford DSPy | `registerModule()`, `registerProgram()` |
| `AgnoAdapter` | Agno (formerly Phidata) | `registerAgent()`, `registerTeam()` |
| `OpenClawAdapter` | OpenClaw | `registerSkill(name, skillRef)` |
| `A2AAdapter` | Google A2A Protocol | `registerRemoteAgent(name, url)` |
| `CodexAdapter` | OpenAI Codex / gpt-4o / Codex CLI | `registerCodexAgent(name, config)` |
| `MiniMaxAdapter` | MiniMax LLM API (M2.5 / M2.5-highspeed) | `registerAgent(name, config)` |
| `NemoClawAdapter` | NVIDIA NemoClaw (sandboxed agents via OpenShell) | `registerSandboxAgent(name, config)` |
| `APSAdapter` | Agent Permission Service (delegation-chain trust) | `apsDelegationToTrust(delegation)` |
| `CopilotAdapter` | GitHub Copilot (generate/review/explain/fix/test/refactor/chat) | `registerAgent(name, config)` |
| `LangGraphAdapter` | LangGraph (compiled StateGraph) | `registerGraph(name, graph)` |
| `AnthropicComputerUseAdapter` | Anthropic Computer Use (screenshot/click/type/scroll) | `registerAgent(name, config)` |
| `OpenAIAgentsAdapter` | OpenAI Agents SDK (tool use, handoffs, guardrails) | `registerAgent(name, runner)` |
| `VertexAIAdapter` | Google Vertex AI / Gemini (function calling, multi-modal) | `registerAgent(name, config)` |
| `PydanticAIAdapter` | Pydantic AI (structured output, validation, deps injection) | `registerAgent(name, config)` |
| `BrowserAgentAdapter` | Browser automation (Playwright/Puppeteer/CDP) | `registerAgent(name, driver)` |
| `HermesAdapter` | NousResearch Hermes / any OpenAI-compatible endpoint (Ollama, Together AI, Fireworks, llama.cpp) | `registerAgent(name, config)` |
| `OrchestratorAdapter` | Hierarchical multi-orchestrator coordination | `registerOrchestrator(id, orchestrator)` |
Streaming variants (drop-in replacements with .stream() support):
| Adapter | Extends | Streaming source |
|---|---|---|
| `LangChainStreamingAdapter` | `LangChainAdapter` | Calls `.stream()` on the Runnable if available; falls back to `.invoke()` |
| `CustomStreamingAdapter` | `CustomAdapter` | Pipes `AsyncIterable<string>` handlers; falls back to single-chunk for plain Promises |
Extend BaseAdapter (or StreamingBaseAdapter for streaming) to add your own in minutes. See references/adapter-system.md.
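The registration pattern all adapters share — map a name to a callable, look it up at execution time — can be sketched in a few lines (illustrative only; the real `BaseAdapter` contract also covers lifecycle, hooks, and streaming, and its method signatures differ):

```typescript
// Minimal handler registry in the spirit of CustomAdapter (sketch only —
// not the real BaseAdapter contract, which adds lifecycle, hooks, streaming).
type Task = { action: string; params: Record<string, unknown> };
type Handler = (task: Task) => Promise<{ result: unknown }>;

class ToyAdapter {
  private handlers = new Map<string, Handler>();

  // Bind an agent name to a plain async function.
  registerHandler(name: string, fn: Handler): void {
    this.handlers.set(name, fn);
  }

  // Route a task to the named handler; unknown names fail loudly.
  async executeAgent(name: string, task: Task): Promise<{ result: unknown }> {
    const fn = this.handlers.get(name);
    if (!fn) throw new Error(`no handler registered for '${name}'`);
    return fn(task);
  }
}

const adapter = new ToyAdapter();
adapter.registerHandler("echo", async (task) => ({ result: task.params.msg }));
adapter
  .executeAgent("echo", { action: "run", params: { msg: "hi" } })
  .then((r) => console.log(r.result)); // "hi"
```

Because every framework is reduced to this name → callable shape, the registry can route one swarm's tasks across LangChain, AutoGen, and custom functions without the caller knowing which is which.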
Works with LangGraph, CrewAI, and AutoGen
Network-AI is the coordination layer you add on top of your existing stack. Keep your LangChain chains, CrewAI crews, and AutoGen agents — and add shared state, governance, and budgets around them.
| Capability | Network-AI | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Cross-framework agents in one swarm | ✅ 28 built-in adapters | ⚠️ Nodes can call any code; no adapter abstraction | ⚠️ Extensible via tools; CrewAI-native agents only | ⚠️ Extensible via plugins; AutoGen-native agents only |
| Atomic shared state (conflict-safe) | ✅ propose → validate → commit mutex | ⚠️ State passed between nodes; last-write-wins | ⚠️ Shared memory available; no conflict resolution | ⚠️ Shared context available; no conflict resolution |
| Hard token ceiling per agent | ✅ FederatedBudget (first-class API) | ⚠️ Via callbacks / custom middleware | ⚠️ Via callbacks / custom middleware | ⚠️ Built-in token tracking in v0.4+; no swarm-level ceiling |
| Permission gating before sensitive ops | ✅ AuthGuardian (built-in) | ⚠️ Possible via custom node logic | ⚠️ Possible via custom tools | ⚠️ Possible via custom middleware |
| Append-only audit log | ✅ plain JSONL (data/audit_log.jsonl) | ⚠️ Not built-in | ⚠️ Not built-in | ⚠️ Not built-in |
| Encryption at rest | ✅ AES-256-GCM (TypeScript layer) | ⚠️ Not built-in | ⚠️ Not built-in | ⚠️ Not built-in |
| Language | TypeScript / Node.js | Python | Python | Python |
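The audit-log row above combines two ideas: append-only JSONL and a per-line signature. The HMAC variant is standard Node crypto; a sketch (the field names here are illustrative — the real schema is documented in AUDIT_LOG_SCHEMA.md and will differ):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign one audit event as a JSONL line: the serialized payload plus an
// HMAC-SHA256 over it. Field names are illustrative, not the real schema.
const SECRET = "demo-secret"; // in practice, load from the environment

function signEvent(event: Record<string, unknown>): string {
  const payload = JSON.stringify(event);
  const sig = createHmac("sha256", SECRET).update(payload).digest("hex");
  return JSON.stringify({ payload, sig });
}

// Recompute the HMAC and compare in constant time.
function verifyLine(line: string): boolean {
  const { payload, sig } = JSON.parse(line);
  const expected = createHmac("sha256", SECRET).update(payload).digest("hex");
  return timingSafeEqual(Buffer.from(sig, "hex"), Buffer.from(expected, "hex"));
}

const line = signEvent({ type: "blackboard_write", agent: "analyst", key: "status" });
console.log(verifyLine(line)); // true

const tampered = JSON.parse(line);
tampered.payload = JSON.stringify({ type: "blackboard_write", agent: "intruder", key: "status" });
console.log(verifyLine(JSON.stringify(tampered))); // false — signature no longer matches
```

Appending one such line per write, grant, and FSM transition yields a log where any after-the-fact edit is detectable without a database.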
Testing
```bash
npm run test:all        # All suites in sequence
npm test                # Core orchestrator
npm run test:security   # Security module
npm run test:adapters   # All 28 adapters
npm run test:streaming  # Streaming adapters
npm run test:a2a        # A2A protocol adapter
npm run test:codex      # Codex adapter
npm run test:priority   # Priority & preemption
npm run test:cli        # CLI layer
npm run test:phase9     # Agent runtime, console, strategy agent
npm run test:phase10    # Goal decomposer, task DAG, runTeam
```
2,711 passing assertions across 26 test suites (npm run test:all):
| Suite | Assertions | Covers |
|---|---|---|
| `test-phase4.ts` | 147 | FSM governance, compliance monitor, adapter integration |
| `test-phase5f.ts` | 127 | SSE transport, McpCombinedBridge, extended MCP tools |
| `test-phase5g.ts` | 121 | CRDT backend, vector clocks, bidirectional sync |
| `test-phase6.ts` | 121 | MCP server, control-plane tools, audit tools |
| `test-adapters.ts` | 218 | All 28 adapters, registry routing, integration, edge cases |
| `test-phase5d.ts` | 117 | Pluggable backend (Redis, CRDT, Memory) |
| `test-standalone.ts` | 88 | Blackboard, auth, integration, persistence, parallelisation, quality gate |
| `test-phase5e.ts` | 87 | Federated budget tracking |
| `test-phase5c.ts` | 73 | Named multi-blackboard, isolation, backend options |
| `test-codex.ts` | 51 | Codex adapter: chat, completion, CLI, BYOC client, error paths |
| `test-minimax.ts` | 50 | MiniMax adapter: lifecycle, registration, chat mode, temperature clamping |
| `test-nemoclaw.ts` | 93 | NemoClaw adapter: sandbox lifecycle, policies, blueprint, handoff, env forwarding |
| `test-priority.ts` | 64 | Priority preemption, conflict resolution, backward compat |
| `test-a2a.ts` | 35 | A2A protocol: register, execute, mock fetch, error paths |
| `test-streaming.ts` | 32 | Streaming adapters, chunk shapes, fallback, collectStream |
| `test-phase5b.ts` | 55 | Pluggable backend part 2, consistency levels |
| `test-phase5.ts` | 42 | Named multi-blackboard base |
| `test-security.ts` | 34 | Tokens, sanitization, rate limiting, encryption, audit |
| `test-cli.ts` | 65 | CLI layer: bb, auth, budget, audit commands |
| `test-qa.ts` | 67 | QA orchestrator: scenarios, feedback loop, regression, contradictions |
| `test-phase7.ts` | 94 | Deferred init, hook middleware, flow control, skill composer, semantic search |
| `test-phase8.ts` | 146 | Phase pipeline, confidence filter, matcher-based hooks, fan-out/fan-in |
| `test-phase9.ts` | 280 | Agent runtime, sandbox policy, shell executor, file accessor, approval gate, console UI, orchestrator wiring, pipe mode, strategy agent |
| `test-phase10.ts` | 153 | Goal decomposer, task DAG validation, topological layers, JSON parsing, team runner, concurrency, timeouts, events, runTeam one-liner, dependency injection, LLM planner |
| `test-topology.ts` | 304 | WorkTree, ControlPlane, dashboard server, topology visualization, WebSocket protocol |
| `test.ts` | 39 | Core orchestrator smoke tests |
Documentation
| Doc | Contents |
|---|---|
| QUICKSTART.md | Installation, first run, CLI reference, PowerShell guide, Python scripts CLI |
| ARCHITECTURE.md | Race condition problem, FSM design, handoff protocol, module inventory, project structure |
| BENCHMARKS.md | Provider performance, rate limits, local GPU, max_completion_tokens guide |
| SECURITY.md | Security module, permission system, trust levels, audit trail, v5.0 security additions |
| ENTERPRISE.md | Evaluation checklist, stability policy, security summary, integration entry points |
| AUDIT_LOG_SCHEMA.md | Audit log field reference, all event types, scoring formula |
| ADOPTERS.md | Known adopters — open a PR to add yourself |
| INTEGRATION_GUIDE.md | End-to-end integration walkthrough with v5.0 modules |
| references/adapter-system.md | Adapter architecture, all 28 adapters, writing custom adapters |
| references/auth-guardian.md | Permission scoring, resource types, IAuthValidator interface |
| references/trust-levels.md | Trust level configuration, APS delegation-chain mapping |
Use with Claude, ChatGPT & Codex
Three integration files are included in the repo root:
| File | Use |
|---|---|
| `claude-tools.json` | Claude API tool use & OpenAI Codex — drop into the tools array |
| `openapi.yaml` | Custom GPT Actions — import directly in the GPT editor |
| `claude-project-prompt.md` | Claude Projects — paste into Custom Instructions |
Claude API / Codex:
```typescript
import tools from './claude-tools.json' assert { type: 'json' };
// Pass the tools array to anthropic.messages.create({ tools }) or OpenAI chat completions
```
Custom GPT Actions:
In the GPT editor → Actions → Import from URL, or paste the contents of openapi.yaml.
Set the server URL to your running npx network-ai-server --port 3001 instance.
Claude Projects:
Copy the contents of claude-project-prompt.md (below the horizontal rule) into a Claude Project's Custom Instructions field. No server required for instruction-only mode.
Community
Join our Discord server to discuss multi-agent AI coordination, get help, and share what you're building.
Contributing
- Fork → feature branch → `npm run test:all` → pull request
- Bugs and feature requests via Issues
MIT License — LICENSE · CHANGELOG · CONTRIBUTING
Keywords
multi-agent · agent orchestration · AI agents · agentic AI · agentic workflow · TypeScript · Node.js · LangGraph · CrewAI · AutoGen · MCP · model-context-protocol · LlamaIndex · Semantic Kernel · OpenAI Assistants · Haystack · DSPy · Agno · OpenClaw · ClawHub · shared state · blackboard pattern · atomic commits · guardrails · token budgets · permission gating · audit trail · agent coordination · agent handoffs · governance · cost-awareness