kemdiCode MCP


kemdiCode MCP is a Model Context Protocol server that gives AI agents and IDE assistants access to 147 specialized tools for code analysis, generation, git operations, file management, AST-aware editing, project memory, cognition & self-improvement, multi-board kanban with subtasks, multi-agent coordination, cluster bus with distributed LLM magistrale, typed data flow bus, structured output, and LLM-driven task management.

New in 1.29

Cluster Bus File Read & Bus Fixes — New cluster-bus-file-read tool enables AI to read files across cluster nodes during analysis, with automatic local fallback. Fixed critical broadcast bug: tool was sending unicast to literal "broadcast" cluster ID instead of using proper broadcast method. Cleaned up duplicate signal dedup logic in handleIncoming() (removed 3 redundant filter blocks). Added file:read / file:read-result signal handler in cluster init for cross-node file serving. 147 tools.

New in 1.28

Deep Tool Improvements — 3 new tools: agent-init (5-step agent onboarding), task-subtask (parent-child task hierarchy with cascade delete), board-workflow (custom workflow columns). Cycle detection via DFS in task dependency graphs. TF-IDF + bigram similarity replaces Jaccard for intent drift detection. Pipeline conditional branching with evaluateCondition(). AsyncLocalStorage for automatic session ID propagation. agent-watch removed (covered by monitor + agent-history). Pagination (offset) for task-list, list-memories, agent-list. Silent mode for all project tools. Cluster bus fixes: SCAN replaces blocking KEYS, fan-in aggregator cleanup on replacement. 146 tools, 649 tests passing.

New in 1.27

Tool UX & 12-Bug Audit — Cognition tools no longer require sessionId (auto-detected). New git-tag tool. git-log gains format:"json". file-search adds a mode enum. A 12-bug security audit addresses prototype pollution, infinite recursion, HMAC integrity issues, and more.

New in 1.26

CI/CD Multicast & Fan-In Aggregation — 6 CI signal types, fan-out to multiple clusters, fan-in result aggregation (all/first/majority/custom modes). Meta-router CI routing rules. Improved agent orchestration limits.


Previous Releases

  • Full-duplex inter-cluster communication via Redis Pub/Sub with typed signal envelopes
  • 12 signal types, 3 send modes (unicast/broadcast/routed)
  • Signal Flow Controller with backpressure, rate limiting, priority filtering
  • Health Monitor with heartbeat tracking and stale detection
  • LLM Magistrale: dispatch prompts across cluster nodes (4 strategies: first-wins, best-of-n, consensus, fallback-chain)
  • Self-regulating Pass Controller (3 strategies: min-passes, quality-target, fixed)
  • enhance-prompt tool for iterative prompt refinement
  • Data Flow Bus: 12 typed channels with Zod schemas, correlation tracking, priority routing, Redis bridge
  • Hardening: bloom filter dedup, circuit breaker, HMAC auth
  • 559 unit tests. Read the full whitepaper →
  • generateObject() with Zod schema validation, retry logic, and automatic JSON repair via jsonrepair
  • 8th LLM provider: Perplexity for research-tier queries (3-layer routing)
  • Tool Annotations: all tools carry MCP-level hints (readOnlyHint, destructiveHint, openWorldHint)
  • task-cluster with 11 actions for LLM-driven task grouping
  • task-complexity: LLM-scored 1–10 analysis with subtask recommendations
  • Data Flow Bus: typed message bus with 12 channels
  • Global Event Bus with Redis Pub/Sub bridge
  • MCP Client Capabilities: client-sampling, client-elicit, client-roots
  • agent-orchestrate for autonomous AI agent loops
  • Ambient Learning & Agent Ranking (bronze → diamond tiers)
  • session-recover: single-tool context restore after compaction
  • executeWithGuard() deduplication: −622 lines across 262 files
  • 8 interconnected cognition tools: decision-journal, confidence-tracker, mental-model, intent-tracker, error-pattern, self-critique, smart-handoff, context-budget
  • In-process event bus with 9 reactive handlers (decision → confidence, error → fix lookup, drift → critique)
  • CognitionCrossLinker for bidirectional Redis links between cognition records
  • self-critique check-application action; mental-model impact-analysis, dependency-chain, invariant-check
  • smart-handoff auto-enriched with full cognition snapshot
  • consoleLogger migration across 14 files (~70 call sites)
  • ESLint warnings fixed, version header corrected
  • thinking-chain tool with 7 actions, forward-only constraint, branching, Redis-backed with 7-day TTL
  • git-add, git-commit, git-stash, task-get, task-delete, task-comment, board-delete, workspace-delete, file-delete, file-move, file-copy, file-backup-restore, pipeline, checkpoint-diff
  • Metadata for all tools, auto-sessionId, board/workspace name lookup

Cognition Layer: How AI Remembers

The cognition layer gives agents persistent self-awareness across sessions. As the agent works, it writes structured records to Redis — decisions, confidence levels, error patterns, intent hierarchies, and lessons learned.

During a session: The agent records intents, logs decisions with reasoning, tracks confidence, and matches errors against its cross-session database. At the end, self-critique extracts lessons and smart-handoff creates a structured briefing auto-enriched with a full cognition snapshot.

New session: The agent calls smart-handoff:latest (or session-recover) and gets back the intent hierarchy, approach rationale, status, warnings, lessons, and the single most important next action — no re-explanation needed.

Cross-tool intelligence: Tools react to each other through a global event bus. Recording a decision auto-creates a confidence record. Low confidence triggers drift detection. Errors scan recent decisions. Lessons cross-link to matching error patterns. All backed by CognitionCrossLinker with bidirectional Redis links.

Data lives in Redis with configurable TTL (default 7 days). Nothing is sent to external services.
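
A minimal sketch of that flow using the tools above (the action and flag names here are illustrative, not the exact schemas):

# During the session
intent-tracker --action set --mission "Migrate auth module to RS256"
decision-journal --action record --decision "Keep refresh tokens in Redis" --reasoning "Shared across instances"
error-pattern --action match --error "jwt malformed"

# End of session
self-critique --action extract-lessons
smart-handoff --action create

# Next session: restore context in one call
smart-handoff --action latest    # or: session-recover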


Usage Examples

Using kemdiCode MCP tools from your AI agent prompt

You don't call these tools directly — your AI agent (Claude Code, Cursor, etc.) invokes them when you describe what you need. Here are real prompts and what happens behind the scenes:

Code review before committing:

You: "Review the auth module for security issues"
→ Agent calls: code-review --files "@src/auth/**/*.ts" --focus "security"

Fix a bug with AI assistance:

You: "There's a race condition in the queue processor, find and fix it"
→ Agent calls: fix-bug --description "race condition in queue processor" --files "@src/queue/"

Multi-model comparison for architecture decisions:

You: "Ask 3 models whether we should use event sourcing or CRUD for the order service"
→ Agent calls: consensus-prompt \
    --prompt "Event sourcing vs CRUD for an order management service with 10k orders/day" \
    --boardModels '["o:gpt-5","a:claude-sonnet-4-5","g:gemini-3-pro"]' \
    --ceoModel "a:claude-opus-4-5:4k"

Project memory for persistent context:

You: "Remember that we use JWT with RS256 for auth in this project"
→ Agent calls: write-memory --name "auth-strategy" --content "JWT with RS256, keys in /etc/keys/" --tags '["auth","architecture"]'

You: "What was our auth strategy?"
→ Agent calls: read-memory --name "auth-strategy"

Multi-agent task distribution:

You: "Set up 3 agents: backend, frontend, QA. Backend works on the API, frontend on React components"
→ Agent calls: agent-register → task-create → task-push-multi
→ Agents coordinate via shared-thoughts and queue-message

What's Next

Install from npm

npm install -g kemdicode-mcp

Then add to your AI IDE:

# Claude Code
claude mcp add kemdicode-mcp -- kemdicode-mcp

# Or add to ~/.claude.json / Cursor / KiroCode / RooCode config:
{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "kemdicode-mcp"
    }
  }
}

Tell the agent what you want — it picks the right tools

kemdiCode MCP works best when you tell the agent to use it. Add a line to your project's CLAUDE.md, .cursorrules, or system prompt:

You have access to kemdiCode MCP server. Use its tools for:
- Project memory (write-memory, read-memory) to persist decisions across sessions
- Cognition tools (decision-journal, smart-handoff) to track your reasoning
- Kanban (task-create, task-list) for project management
- Code analysis (code-review, find-definition) for deep code understanding

Example: Building a landing page

You: "Build a landing page for a SaaS product. Use kemdiCode tools to track progress
     and remember design decisions."

What the agent does:
1. write-memory --name "landing-design" → saves design system choices
2. decision-journal → records "chose Tailwind over CSS modules" with reasoning
3. task-create → creates tasks: hero section, pricing, testimonials, footer
4. code-review → reviews each component for accessibility
5. smart-handoff → creates handoff so next session can continue seamlessly

Example: Building a Flappy Bird clone for Android

You: "Build a Flappy Bird clone in Kotlin for Android. Track architecture decisions
     and use the kanban board."

What the agent does:
1. intent-tracker → sets mission "Flappy Bird Android clone"
2. mental-model → maps architecture: GameView, Bird, Pipe, ScoreManager, GameLoop
3. board-create → creates "Flappy Bird Sprint 1"
4. task-create → physics engine, rendering, collision detection, scoring, sounds
5. decision-journal → records "chose Canvas over OpenGL" (simpler for 2D, faster iteration)
6. error-pattern → when bitmap loading fails, records fix for next time
7. self-critique → "physics feels floaty, adjust gravity constant next session"
8. smart-handoff → full briefing for the next session with all context

The agent doesn't just write code — it builds a persistent understanding of your project that survives across sessions, compactions, and context resets.


Highlights

| Capability | Description |
|---|---|
| 146 MCP Tools | Code review, refactoring, testing, git, file management, AST editing, memory, checkpoints, kanban with subtasks, cognition, cluster bus, data flow, pipelines, structured output, task clustering |
| Cluster Bus | Distributed LLM orchestration: 18 signal types, 4 send modes (incl. multicast), magistrale with 4 aggregation strategies, multi-pass quality control, CI/CD fan-in |
| Data Flow Bus | 12 typed channels (ai:*, kanban:*, cognition:*, agent:*, system:*) with Zod schemas, correlation tracking, Redis bridge |
| Cognition Layer | 8 self-improvement tools: decision journal, confidence tracking, mental models, intent hierarchy with TF-IDF drift detection, error patterns, self-critique, smart handoff, context budget |
| Cross-Tool Intelligence | Global event bus + cross-linker: tools react to each other across cognition, kanban, session, and recursive modules |
| 8 LLM Providers | Native SDKs for OpenAI, Anthropic, Gemini + OpenAI-compatible for Groq, DeepSeek, Ollama, OpenRouter, Perplexity |
| Multi-Agent | Agent onboarding (agent-init), ranking (bronze→diamond), coordination via kanban boards and Redis Pub/Sub |
| Structured Output | generateObject() with Zod schemas, JSON repair, and retry logic for reliable LLM-to-data extraction |
| Parallel Multi-Model | Send one prompt to N models simultaneously; CEO-and-Board consensus pattern |
| Thinking Tokens | Unified syntax across providers: o:gpt-5:high • a:claude-sonnet-4-5:4k • g:gemini-3-pro:8k |
| Tree-sitter AST | Language-aware navigation and symbol editing for 19 languages |
| Project Memory | Persistent per-project key-value store with TTL and tags |
| Session Resurrection | loci-recall + smart-handoff restore full context after compaction |
| Hot Reload | Change provider, model, or config at runtime without restart |
| Cross-Runtime | Runs on Bun (recommended) or Node.js with automatic detection |

Compatibility

| IDE / Editor | Config location |
|---|---|
| Claude Code | claude mcp add or ~/.claude.json |
| Cursor | Settings → Features → MCP |
| KiroCode | ~/.kirocode/mcp.json |
| RooCode | VS Code extension settings |

Quick Start

Prerequisites

  • Bun ≥ 1.0 (recommended) or Node.js ≥ 18
  • Redis (optional — required only for multi-agent features and cognition layer)

Install & Run

git clone https://github.com/kemdi-pl/kemdicode-mcp.git
cd kemdicode-mcp
# Bun (recommended)
bun install && bun run build:bun
bun run start:bun

# Or with Node.js
npm install && npm run build && npm run start

IDE Configuration

# Claude Code
claude mcp add kemdicode-mcp -- bun /path/to/kemdicode-mcp/dist/index.js

Or add to ~/.claude.json:

{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": ["/path/to/kemdicode-mcp/dist/index.js"]
    }
  }
}

Cursor (Settings → Features → MCP):

{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": ["/path/to/kemdicode-mcp/dist/index.js", "-m", "gpt-5"]
    }
  }
}

Add to ~/.kirocode/mcp.json:

{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": [
        "/path/to/kemdicode-mcp/dist/index.js",
        "-m", "claude-sonnet-4-5",
        "--redis-host", "127.0.0.1"
      ]
    }
  }
}

Add to VS Code settings (RooCode extension):

{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": [
        "/path/to/kemdicode-mcp/dist/index.js",
        "-m", "claude-sonnet-4-5",
        "--redis-host", "127.0.0.1"
      ]
    }
  }
}

Multi-Provider LLM

kemdiCode MCP ships with 8 built-in providers. Each can be activated by setting the corresponding API key:

export OPENAI_API_KEY=sk-...            # OpenAI
export ANTHROPIC_API_KEY=sk-ant-...     # Anthropic
export GEMINI_API_KEY=AI...             # Google Gemini
export GROQ_API_KEY=gsk_...            # Groq
export DEEPSEEK_API_KEY=sk-...          # DeepSeek
export OPENROUTER_API_KEY=sk-or-...     # OpenRouter
export PERPLEXITY_API_KEY=pplx-...     # Perplexity (research tier)
# Ollama — no key required (local)

Provider Syntax

Use provider:model (or the short alias) anywhere a model is accepted:

openai:gpt-5               o:gpt-5              # Latest flagship model
anthropic:claude-sonnet-4-5  a:claude-sonnet-4-5  # Best balance
anthropic:claude-opus-4-5    a:claude-opus-4-5    # Maximum intelligence
gemini:gemini-3-pro          g:gemini-3-pro       # Most intelligent
groq:llama-3.3-70b           q:llama-3.3-70b      # Fast inference
deepseek:deepseek-chat       d:deepseek-chat      # Cost effective
ollama:llama3.3              l:llama3.3           # Local deployment
openrouter:gpt-5             r:gpt-5              # Aggregator access
perplexity:sonar-pro         p:sonar-pro          # Research queries

Thinking / Reasoning Tokens

Append a third segment to enable extended thinking:

| Provider | Syntax | Effect |
|---|---|---|
| OpenAI (reasoning) | o:gpt-5:high | Sets reasoning_effort to low / medium / high |
| Anthropic | a:claude-sonnet-4-5:4k | Allocates 4,096 extended thinking tokens |
| Gemini | g:gemini-3-pro:8k | Allocates 8,192 thinking tokens |
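
A thinking-token spec can be passed anywhere a model is accepted, for example via the -m startup flag or an ask-ai call (the --model flag shown below is illustrative):

# Start the server with an extended-thinking default model
bun dist/index.js -m "a:claude-sonnet-4-5:4k"

# Per-call (flag name illustrative)
ask-ai --model "o:gpt-5:high" --prompt "Plan the RS256 migration step by step"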

Tool Reference

146 tools across 23 categories.

| Category | # | Tools |
|---|---|---|
| Cluster Bus | 8 | cluster-bus-status cluster-bus-topology cluster-bus-send cluster-bus-magistrale cluster-bus-flow cluster-bus-routing cluster-bus-inspect cluster-bus-file-read |
| Cognition | 8 | decision-journal confidence-tracker mental-model intent-tracker error-pattern self-critique smart-handoff context-budget |
| AI Agents | 4 | plan build brainstorm ask-ai |
| Multi-LLM | 3 | multi-prompt consensus-prompt enhance-prompt |
| Code Analysis | 8 | code-review explain-code find-definition find-references find-symbols semantic-search code-outline analyze-deps |
| Line Editing | 4 | insert-at-line delete-lines replace-lines replace-content |
| Symbol Editing | 3 | insert-before-symbol insert-after-symbol rename-symbol |
| Code Modification | 5 | fix-bug refactor auto-fix auto-fix-agent write-tests |
| Project Memory | 8 | write-memory read-memory list-memories edit-memory delete-memory checkpoint-save checkpoint-restore checkpoint-diff |
| Git | 9 | git-status git-diff git-log git-blame git-branch git-add git-commit git-stash git-tag |
| File Operations | 9 | file-read file-write file-search file-tree file-diff file-delete file-move file-copy file-backup-restore |
| Project | 5 | project-info run-script run-tests run-lint check-types |
| Kanban — Tasks | 13 | task-create task-get task-list task-update task-delete task-comment task-claim task-assign task-push-multi task-subtask board-status task-cluster task-complexity |
| Kanban — Workspaces | 5 | workspace-create workspace-list workspace-join workspace-leave workspace-delete |
| Kanban — Boards | 7 | board-create board-list board-share board-members board-invite board-delete board-workflow |
| Recursive | 4 | invoke-tool invoke-batch invocation-log agent-orchestrate |
| Multi-Agent | 14 | agent-init agent-list agent-register agent-alert agent-inject agent-history monitor agent-summary agent-rank queue-message shared-thoughts get-shared-context feedback batch |
| Orchestration | 1 | pipeline |
| Session | 6 | session-list session-info session-create session-switch session-delete session-recover |
| MCP Client | 3 | client-sampling client-elicit client-roots |
| Knowledge Graph | 4 | graph-query graph-find-path loci-recall sequence-recommend |
| Thinking Chain | 1 | thinking-chain |
| MPC Security | 4 | mpc-split mpc-distribute mpc-reconstruct mpc-status |
| RL Learning | 2 | rl-reward-stats rl-dopamine-log |
| System | 8 | env-info memory-usage ai-config ai-models tool-health config ping help |

Architecture

| Layer | Component | Description |
|---|---|---|
| Clients | Claude Code, Cursor, KiroCode, RooCode | Connect via SSE + JSON-RPC (MCP Protocol) |
| HTTP Server | :3100 (Bun or Node.js) | Routes: /sse, /message, /resume, /stream |
| Session Manager | Per-client isolation | CWD injection, activity tracking, SSE keep-alive |
| Tool Registry | 146 tools, 23 categories | Zod schema validation, tool annotations, lazy loading |
| Cluster Bus | Distributed signal bus | Full-duplex inter-cluster signals via Redis Pub/Sub |
| Data Flow Bus | 12 typed channels | Zod schemas, correlation tracking, priority routing |
| Cognition Layer | Global event bus + cross-linker | 9 reactive handlers, bidirectional Redis links |
| Provider Registry | 8 LLM providers | Native SDKs + OpenAI-compatible. Hot-reload, structured output |
| Tree-sitter AST | 19 languages | WASM parsers, symbol navigation, rename, insert |
| Runtime | Bun / Node.js | Auto-detection, unified HTTP, process, crypto |
| Redis (DB 2) | Shared state | mcp:context:*, mcp:agents:*, mcp:kanban:*, mcp:memory:*, mcp:cognition:* |

→ Full diagram: docs/architecture-overview.md

The server uses a 3-layer bus with 3 independent Redis paths and anti-amplification bridges:

+====================================================================+
||  L3: ClusterBus  (Redis Pub/Sub, mcp:cluster:*)                  ||
||                                                                  ||
||  18 signal types | 4 send modes (unicast/broadcast/routed/mcast) ||
||  SignalFlowCtrl | MetaRouter | HealthMonitor | FanInAggregator   ||
||                                                                  ||
||  +---------------------+  +------------------------+             ||
||  | EventBridge  L3<>L1 |  | DataFlowBridge  L3<>L2 |             ||
||  | hop limit = 5       |  | hop limit = 5          |             ||
||  +---------------------+  +------------------------+             ||
+====================================================================+
||  L2: DataFlowBus  (in-process + Redis mcp:dataflow:{channel})    ||
||                                                                  ||
||  ai:completion | kanban:task-change | cognition:decision          ||
||  ai:structured | kanban:complexity  | cognition:intent            ||
||  ai:research   |                    | cognition:error             ||
||  agent:status  | agent:message      | system:health | system:cfg ||
||                                                                  ||
||  DataFlowEnvelope: correlation, priority 0-3, TTL, Zod schemas   ||
+====================================================================+
||  L1: GlobalEventBus  (in-process + Redis mcp:events:{type})      ||
||                                                                  ||
||  namespaced events | async queueMicrotask | max chain depth = 8  ||
||  CognitionEventBus wrapper (auto-prefix "cognition:")            ||
+====================================================================+
         |
  Module Handlers: cognition (9) | kanban (2) | loop (2)

→ Full documentation: docs/architecture-3-layer-bus.md

Cluster Bus

Distributed LLM orchestration across cluster nodes with typed signals and multi-pass quality control.

# Register a cluster node
cluster-bus-topology --action "register" --clusterId "backend-ai" \
  --clusterName "Backend LLM" --capabilities '["typescript","code-review"]' \
  --metaTags '["role:worker","tier:pro"]'

# Magistrale: dispatch to multiple clusters, pick best result
cluster-bus-magistrale --prompt "Design a rate limiter" --strategy "best-of-n" \
  --maxTargets 3 --timeoutMs 60000 --minResponses 2 --qualityThreshold 0.85 \
  --passStrategy "quality-target" --maxPasses 5

Magistrale strategies: first-wins • best-of-n • consensus • fallback-chain

Pass strategies: min-passes • quality-target • fixed

→ Full guide: examples/08-cluster-bus-magistrale.md

Data Flow Bus

12 typed channels for structured inter-module communication with Zod schemas and correlation tracking.

Automatic flows:

  • ask-ai → ai:completion → cognition subscribes → logs decision context
  • task-update → kanban:task-change → agent subscribes → notifies assignee
  • error detected → cognition:error → error-pattern DB → suggests fix
  • tool-health → system:health → monitor → alerts on degradation

Every message follows DataFlowEnvelope: unique ID, correlation chain, priority (0–3), TTL, Zod-validated payload. Redis bridge for cross-session sync.
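
Purely as an illustration (the field names below are assumptions, not the actual Zod schema; see the guide linked below), an ai:completion envelope carries roughly this shape:

{
  "id": "env-42",
  "channel": "ai:completion",
  "correlationId": "corr-7",
  "priority": 2,
  "ttlMs": 60000,
  "payload": { "model": "a:claude-sonnet-4-5", "sessionId": "sess-1" }
}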

→ Full guide: examples/09-dataflow-bus.md


Multi-Agent Orchestration

Register agents, distribute work across kanban boards, and coordinate via Redis Pub/Sub:

# Quick onboarding — register, count tasks, claim, summarize, set alerts
agent-init --sessionId "sess-1" --agentName "backend-dev" --role "worker" \
  --capabilities '["typescript","postgresql"]' --boardId "sprint-1"

# Or manual registration
agent-register --agents '[
  {"id":"backend","role":"backend","capabilities":["typescript","postgresql"]},
  {"id":"frontend","role":"frontend","capabilities":["react","tailwind"]},
  {"id":"qa","role":"quality","capabilities":["jest","cypress"]}
]'

# Distribute tasks
task-push-multi --taskIds '["api-1","api-2"]' --agents '["backend"]' --mode assign

# Broadcast a requirement
queue-message --broadcast true --message "Use OpenAPI 3.0 spec" --priority high

# Real-time monitoring
monitor --view hierarchy

Multi-Model Consensus

Send one prompt to N models in parallel, then let a CEO model synthesize:

# CEO-and-Board consensus
consensus-prompt \
  --prompt "Redis vs PostgreSQL for sessions?" \
  --boardModels '["o:gpt-5", "a:claude-sonnet-4-5", "g:gemini-3-pro"]' \
  --ceoModel "a:claude-opus-4-5:4k"

All board models run via Promise.allSettled() — individual failures never block the others.


Kanban Task Management

# Create a workspace
workspace-create --name "Project Alpha"

# Add boards with custom workflow
board-create --name "Backend Sprint 1" --workspaceId <ws-id>
board-workflow --boardId <board-id> --action set --columns '["backlog","dev","review","qa","done"]'

# Batch-create tasks with subtasks
task-create --tasks '[
  {"title":"Auth API","priority":"high","boardId":"<id>"},
  {"title":"Rate limiter","priority":"medium","boardId":"<id>"}
]'
task-subtask --action create --parentTaskId "t-1" --title "JWT validation" --priority "high"

# Push to agents
task-push-multi --taskIds '["t-1","t-2"]' --agents '["agent-1"]' --mode assign

Features: workspaces • multiple boards • custom workflow columns • parent-child subtasks with cascade delete • dependency cycle detection • role-based access • batch ops (1-20 per call) • assign / clone / notify • append-only task comments • pagination with offset


Recursive Tool Invocation

Sub-agents can invoke other tools with built-in safety limits (max depth 2, rate-limited):

invoke-batch --invocations '[
  {"tool":"file-read","args":{"path":"@src/index.ts"}},
  {"tool":"run-tests","args":{}}
]' --mode parallel

CLI Reference

bun dist/index.js [options]
| Flag | Default | Description |
|---|---|---|
| -m, --model | | Primary AI model |
| -f, --fallback-model | | Fallback on quota / error |
| --port | 3100 | HTTP server port |
| --host | 127.0.0.1 | Bind address |
| --redis-host | 127.0.0.1 | Redis host |
| --redis-port | 6379 | Redis port |
| --no-context | | Disable Redis context sharing |
| -v, --verbose | | Full output with decorations |
| --compact | | Essential fields only |
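
A typical invocation combining these flags:

bun dist/index.js -m "a:claude-sonnet-4-5" -f "o:gpt-5" --port 3100 --redis-host 127.0.0.1 --compact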

Development

Build & Run

| Command | Description |
|---|---|
| bun install | Install all dependencies |
| bun run build:bun | Bundle for Bun runtime |
| bun run start:bun | Start server on :3100 |
| bun run dev:bun | Watch mode with hot-reload |
| npm run build | TypeScript compilation for Node.js |
| npm run start | Start with Node.js |

Quality

| Command | Description |
|---|---|
| bun run typecheck | Type-check without emitting |
| bun run lint | ESLint |
| bun run format | Prettier |
| bun run prepare | All checks (pre-commit) |

Environment Variables

| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GEMINI_API_KEY | Google Gemini API key |
| GROQ_API_KEY | Groq API key |
| DEEPSEEK_API_KEY | DeepSeek API key |
| OPENROUTER_API_KEY | OpenRouter API key |
| PERPLEXITY_API_KEY | Perplexity API key (research tier) |
| MPC_MASTER_SECRET | Master secret for MPC security tools |

Documentation

| Document | Description |
|---|---|
| Technical Whitepaper (PDF) | Full architecture description covering protocol layers, cognition system, and LLM Magistrale with formal specifications |
| Architecture Overview | High-level system layers diagram |
| 3-Layer Bus Architecture | Detailed L3/L2/L1 bus design with bridges |
| Examples | 12 practical guides covering all major features |

Authors

Dawid Irzyk <dawid@kemdi.pl>, Kemdi Sp. z o.o.

License

This project is licensed under the GNU General Public License v3.0 — see the LICENSE file for details.

Related Servers