# Smart-Thinking

An advanced MCP server for multi-dimensional, adaptive, and collaborative reasoning.
Smart-Thinking is a Model Context Protocol (MCP) server that delivers graph-based, multi-step reasoning without relying on external AI APIs. Everything happens locally: similarity search, heuristic-based scoring, verification tracking, memory, and visualization all run in a deterministic pipeline designed for transparency and reproducibility.
## Core Capabilities
- Graph-first reasoning that connects thoughts with rich relationships (supports, contradicts, refines, contextual links, and more).
- Local TF-IDF + cosine similarity engine powering memory lookups and graph expansion without third-party embedding services.
- Heuristic quality evaluation that scores confidence, relevance, and quality using transparent rules instead of LLM calls.
- Verification workflow with detailed statuses and calculation tracing to surface facts, guardrails, and uncertainties.
- Persistent sessions that can be resumed across runs, keeping both the reasoning graph and verification ledger in sync.
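The local similarity engine described above can be approximated in a few lines of TypeScript. This is an illustrative sketch of TF-IDF weighting plus cosine similarity, not the actual Smart-Thinking implementation; the function names are invented for the example.

```typescript
// Build a term-frequency map for one document (whitespace tokenization).
function termFreq(doc: string): Map<string, number> {
  const tf = new Map<string, number>();
  for (const t of doc.toLowerCase().split(/\s+/).filter(Boolean)) {
    tf.set(t, (tf.get(t) ?? 0) + 1);
  }
  return tf;
}

// Weight each document's terms by TF times smoothed inverse document frequency.
function tfidfVectors(docs: string[]): Map<string, number>[] {
  const tfs = docs.map(termFreq);
  const df = new Map<string, number>();
  for (const tf of tfs) {
    for (const term of tf.keys()) df.set(term, (df.get(term) ?? 0) + 1);
  }
  return tfs.map(tf => {
    const vec = new Map<string, number>();
    for (const [term, f] of tf) {
      vec.set(term, f * Math.log(1 + docs.length / (df.get(term) ?? 1)));
    }
    return vec;
  });
}

// Cosine similarity between two sparse vectors; 0 when either is empty.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [t, w] of a) {
    na += w * w;
    if (b.has(t)) dot += w * (b.get(t) as number);
  }
  for (const w of b.values()) nb += w * w;
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}
```

Ranking stored memories against a query then reduces to computing `cosine` between the query vector and each memory vector and sorting, which is why no external embedding service is needed.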
## Reasoning Flow
- Session bootstrap – `ReasoningOrchestrator` initializes a session, restores any saved graph state, and prepares feature flags.
- Pre-verification – deterministic guards inspect the incoming thought, perform light-weight calculation checks, and annotate the payload.
- Graph integration – the thought is inserted into `ThoughtGraph`, linking to context, prior thoughts, and relevant memories.
- Heuristic evaluation – `QualityEvaluator` and `MetricsCalculator` compute weighted scores and traces that explain the decision path.
- Verification feedback – statuses from `VerificationService` and heuristic traces are attached to the node and propagated across connections.
- Persistence & response – updates are written to `MemoryManager`/`VerificationMemory`, and a structured MCP response is returned with a timeline of reasoning steps.
Each step is logged with structured metadata so you can visualize the reasoning fabric, audit decisions, and replay sessions deterministically.
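Conceptually, the per-thought pipeline behaves like the condensed sketch below. The `processThought` function, `ThoughtNode` shape, and scoring formula are invented stand-ins for the real orchestrator classes named above, shown only to illustrate the insert → score → link flow.

```typescript
interface ThoughtNode {
  id: string;
  text: string;
  score: number;   // heuristic quality score in [0, 1]
  links: string[]; // ids of connected prior thoughts
}

// Hypothetical condensed pipeline: pre-verify, score, integrate into the graph.
function processThought(
  graph: Map<string, ThoughtNode>,
  id: string,
  text: string,
  related: string[] = []
): ThoughtNode {
  // Pre-verification: reject obviously empty payloads deterministically.
  if (!text.trim()) throw new Error("empty thought rejected by pre-verification");
  // Heuristic evaluation: a toy weighted score from length and connectivity.
  const score = Math.min(1, 0.1 + 0.01 * text.split(/\s+/).length + 0.05 * related.length);
  // Graph integration: link only to thoughts that already exist in the graph.
  const node: ThoughtNode = { id, text, score, links: related.filter(r => graph.has(r)) };
  graph.set(id, node);
  return node;
}
```

Because every step is a pure function of the graph and the incoming thought, replaying the same sequence of thoughts reproduces the same graph, which is the property the deterministic pipeline relies on.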
## Installation
Smart-Thinking ships as an npm package compatible with Windows, macOS, and Linux.
### Global install (recommended)

```bash
npm install -g smart-thinking-mcp
```
### Run with npx

```bash
npx -y smart-thinking-mcp
```
### From source

```bash
git clone https://github.com/Leghis/Smart-Thinking.git
cd Smart-Thinking
npm install
npm run build
npm link
```
Need platform-specific configuration details? See `GUIDE_INSTALLATION.md` for step-by-step instructions covering Windows, macOS, Linux, and Claude Desktop integration.
## Quick Tour
- `smart-thinking-mcp` — start the MCP server (globally installed package).
- `npx -y smart-thinking-mcp` — launch without a global install.
- `npm run start` — execute the built server from source.
- `npm run demo:session` — run the built-in CLI walkthrough that feeds sample thoughts through the reasoning pipeline and prints the resulting timeline.
The demo script showcases how the orchestrator adds nodes, evaluates heuristics, and records verification feedback step by step.
## MCP Client Compatibility
Smart-Thinking is validated across the most popular MCP clients and operating systems. Use the new connector mode (`--mode=connector` or `SMART_THINKING_MODE=connector`) when a client only accepts the `search` and `fetch` tools required by ChatGPT connectors.[^1]
| Client | Transport | Notes |
|---|---|---|
| ChatGPT Connectors & Deep Research | HTTP + SSE | Deploy with `SMART_THINKING_MODE=connector node build/index.js --transport=http --host 0.0.0.0 --port 8000`. Point ChatGPT to `https://<host>/sse` and keep only search/fetch enabled, aligning with OpenAI’s remote MCP guidance.[^1] |
| OpenAI Codex CLI & Agents SDK | Streamable HTTP / SSE | Configure the Codex agent with `http://localhost:3000/mcp` or `http://localhost:3000/sse` and set `SMART_THINKING_MODE=connector` when only knowledge retrieval is needed.[^2] |
| Claude Desktop / Claude Code | stdio | Add `"command": "smart-thinking-mcp"` (or an npx command) to `claude_desktop_config.json`. Full toolset is available.[^3] |
| Cursor IDE | stdio / SSE / Streamable HTTP | Add the server to `~/.cursor/mcp.json` or the project `.cursor/mcp.json`. Cursor supports prompts, roots, elicitation, and streaming.[^4] |
| Cline (VS Code) | stdio | Place the command in `~/Documents/Cline/MCP/smart-thinking.json` or use the in-app marketplace to register the toolset.[^3] |
| Kilo Code | stdio | Register via the MCP marketplace and run the server locally; Smart-Thinking exposes deterministic tooling for autonomous edits.[^3] |
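For Claude Desktop, the stdio entry from the table above typically looks like the following in `claude_desktop_config.json`; the `"smart-thinking"` key name is arbitrary and chosen for this example.

```json
{
  "mcpServers": {
    "smart-thinking": {
      "command": "smart-thinking-mcp"
    }
  }
}
```

Swap `"command"` for an `npx` invocation (with `"args": ["-y", "smart-thinking-mcp"]`) if you prefer not to install the package globally.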
Need a minimal deployment footprint? Combine `--transport=http --mode=connector` with a reverse proxy (ngrok, fly.io, render, etc.) so remote clients can consume the server without exposing the full toolset.
For registry scanners and fallback metadata extraction, Smart-Thinking also exposes:
```
GET /.well-known/mcp/server-card.json
```
## Configuration & Feature Flags
- `feature-flags.ts` toggles advanced behaviours such as external integrations (disabled by default) and verbose tracing.
- `config.ts` aligns platform-specific paths and verification thresholds.
- `memory-manager.ts` and `verification-memory.ts` store session graphs, metrics, and calculation results using deterministic JSON snapshots.
### Zero-API-Key Mode (Default)
- Smart-Thinking runs fully in local deterministic mode without any API key.
- External verification/search connectors are disabled by default in `ToolIntegrator`.
- To explicitly enable external connectors, set:

  ```bash
  export SMART_THINKING_ENABLE_EXTERNAL_TOOLS=true
  ```

- If external connectors are disabled (default), verification suggestions stay local (`executePython`, `executeJavaScript`) and external tool calls return a local fallback result.
- `FeatureFlags.externalLlmEnabled` and `FeatureFlags.externalEmbeddingEnabled` remain disabled by default, so no remote LLM/embedding provider is required.
## Development Workflow
```bash
npm run build          # Compile TypeScript sources
npm run lint           # ESLint across src/
npm run test           # Jest test suite
npm run test:coverage  # Jest coverage report
npm run watch          # Incremental TypeScript compilation
```
See `docs/modernisation-smart-thinking-v12-plan.md` for the modernization checklist and rollout tracking.
## Quality & Support
- Deterministic heuristics and verification eliminate dependency on remote LLMs.
- Latest validation (February 6, 2026): `80.47%` statements, `81.59%` lines, `84.34%` functions, `63.48%` branches.
- CI recommendations: run `npm run lint` and `npm run test:coverage` before each release candidate.
## Contributing
Contributions are welcome. Please open an issue or pull request describing the change, and run the quality checks above before submitting.
## License
## Footnotes

[^1]: OpenAI, “Building MCP servers for ChatGPT and API integrations,” highlights that connectors require `search` and `fetch` tools for remote use. (https://platform.openai.com/docs/mcp)
[^2]: OpenAI Agents SDK documentation on MCP transports (stdio, SSE, streamable HTTP). (https://openai.github.io/openai-agents-python/mcp/)
[^3]: Model Context Protocol client catalogue listing Claude, Cline, Kilo Code, and other MCP-compatible applications. (https://modelcontextprotocol.io/clients)
[^4]: Cursor documentation for configuring MCP servers via stdio/SSE/HTTP transports. (https://cursor.com/docs/context/mcp)