Semantic-Sift
A reasoning-first MCP middleware that uses heuristics and Neural BERT models to distil context and eliminate noise
The Reasoning-First Middleware for High-Fidelity Agentic Workflows.
"It saves tokens while preserving context - maximizing reasoning, minimizing hallucination."
Semantic-Sift is a local Model Context Protocol (MCP) server that acts as an intelligent "Sanitation Tier" between your raw data and your AI's context window.
While modern LLMs have massive context windows, their reasoning accuracy often degrades as noise increases. Semantic-Sift solves this by distilling technical logs, long-form documents, and chat histories into high-density context. It treats your context window as a precious resource, optimizing for Signal-to-Noise Ratio (SNR) so your models spend more time reasoning and less time navigating boilerplate.
Philosophy: The Studio of Two
Semantic-Sift is grounded in the Studio of Two philosophy: the belief that the future of engineering is a high-fidelity partnership between a human architect and a sovereign AI sidecar. By managing the friction of raw data ingestion, Sift allows this "Studio" to focus on building systems, not just applying patches. It acts as a cognitive filter that ensures both you and your agent are collaborating on the cleanest, most relevant representation of the technical truth.
Quickstart (60 seconds)
# 1. Install
pip install "semantic-sift[neural]"
# 2. Onboard your project (writes IDE hooks and opencode.json)
semantic-sift-onboard # or: ask your AI "Run sift_onboard()"
# 3. Add to your MCP config (example: Cursor / Claude Desktop)
# { "mcpServers": { "semantic-sift": { "command": "semantic-sift" } } }
# 4. Warm up the model (optional: avoids first-call latency)
# Ask your AI: "Run sift_warmup()"
Full setup guide (venv layout, IDE config matrix, Sovereign Pattern): doc/INTEGRATION_ENCYCLOPEDIA.md
Multidisciplinary Value
Semantic-Sift is a strategic layer designed to manage attention across four key professional personas:
- For the Senior Engineer: A local-first, low-latency middleware using a dual-engine approach (Heuristic Sieve + Neural Reranker). It strips timestamps, repetitive boilerplate, and redundant JSON before they hit the wire, reducing latency and preventing "Lost in the Middle" reasoning failures.
- For the Project Manager: "Context Insurance." By reducing token overhead by 30-70%, Sift provides direct ROI on API costs and reduces the "retry loop" caused by model hallucinations in messy data environments.
- For the Researcher: Data integrity at scale. Supports MarkItDown (via the `[multi-modal]` optional extra) to convert complex `.pdf`, `.docx`, and `.xlsx` files into structured, distilled Markdown, allowing for the rapid synthesis of massive technical repositories without losing critical semantic anchors.
- For the Knowledge Partner: Cognitive Load Management. Sift manages the friction of raw data ingestion, allowing the human-AI partnership to focus on high-level strategy and architectural decisions rather than manual data triage.
Value Engineering: Operational vs. Economic ROI
Semantic-Sift provides two layers of value. While the economic benefits depend on your billing plan, the operational benefits apply to every professional workflow.
1. The Economic ROI (Direct Savings)
Target: Users on Per-Token API plans (GPT-4o, Claude 3.5).
- Wallet Protection: Sift acts as a local filter, typically reducing outgoing token volume by 30-70%.
- Compound Interest: In iterative agentic loops, these savings compound rapidly. Every character pruned is money that stays in your budget.
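The compounding effect is easy to estimate with back-of-the-envelope arithmetic. The sketch below is illustrative only (it is not part of the Sift API); the 30-70% reduction range is the figure cited above:

```python
def tokens_saved(tokens_per_call: int, calls: int, reduction: float) -> int:
    """Estimate total tokens pruned across an iterative agent loop.

    `reduction` is the fraction of each prompt removed by sifting
    (the README cites a typical 0.30-0.70 range).
    """
    return int(tokens_per_call * calls * reduction)

# A 20-iteration agent loop sending ~8,000 tokens per call,
# sifted at a conservative 30% reduction:
print(tokens_saved(8_000, 20, 0.30))  # 48000 tokens that never leave your budget
```

At per-token API pricing, that saving repeats on every loop the agent runs.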
2. The Operational ROI (Quality & Performance)
Target: EVERYONE (including "Unlimited" or Per-Request subscription users).
- Attention Precision: Even with "infinite" context, LLMs suffer from "Lost in the Middle" syndrome. By removing noise, you ensure the model's full reasoning power is focused on the technical signal, resulting in higher-quality code and fewer hallucinations.
- Latency Reduction: Smaller prompts = Faster "Time to First Token" (TTFT). You spend less time waiting for the "cloud" to process boilerplate and more time in your flow state.
- Context Insurance: Prevents "Context length exceeded" errors on complex tasks. Sift ensures that 100% of your model's limit is filled with information, not formatting.
Master Documentation Index
All technical details, architectural logic, and integration guides are strictly maintained in the doc/ directory to prevent data loss through summarization.
- doc/INDEX.md: The navigational roadmap and source of truth for the documentation structure.
- doc/ARCHITECTURE.md: Specifications of the Sift Hook Interceptor, the Distillation Kernel (Heuristic/Semantic/Ranking engines), and Caching.
- doc/TOOL_REFERENCE.md: Exhaustive operator's manual for all FastMCP tools (e.g., `sift_read_file`, `sift_logs`, `sift_chat`, `sift_rank`).
- doc/INTEGRATION_ENCYCLOPEDIA.md: Master Compatibility Map, Hook Injector logic, Payload Structures, and the Master Configuration Matrix for connecting IDEs (Cursor, Gemini, VS Code, OpenCode, etc.).
- doc/TELEMETRY_SPEC.md: Design of the OpenTelemetry tracing, Echo-Detector (Double-Sifting Prevention), Audit Headers, and Privacy controls.
- doc/ORCHESTRATION_BLUEPRINTS.md: Actionable workflows for AI agents, including File Ingestion decision trees, Multi-Document RAG, and History Compaction.
High-Impact Use Cases
The Knowledge Hunter (Researchers & Architects)
- The Pain: Reading 50-page PDFs, complex Word specs, or cluttered documentation sites.
- The Sift: Supports MarkItDown via the `[multi-modal]` optional extra to natively ingest `.pdf`, `.docx`, and `.xlsx`. It converts corporate "noise" into structured Markdown, allowing your agent to synthesize multiple 14MB documents in a single turn.
The Log Hunter (DevOps & SREs)
- The Pain: Finding a single error in 100,000 lines of technical logs.
- The Sift: The Heuristic Sieve strips timestamps and boilerplate in milliseconds. The Subconscious Hook automatically reranks results, so your agent only sees the most relevant data blocks.
The Context Strategist (AI Engineers)
- The Pain: LLM hallucination and reasoning degradation caused by messy data streams.
- The Sift: By delivering high-density context with 95% of the meaning preserved, Sift acts as a Cognitive Bridge. It ensures your LLM's attention is focused exclusively on the signal.
Performance Tiers
Semantic-Sift ships in two performance tiers. Choose based on your use case:
| | Python MCP Server (`pip install semantic-sift`) | Rust CLI Sidecar (`sift-core`) |
|---|---|---|
| Heuristic log sifting | ✅ ~500ms | ✅ <1ms (native) |
| Neural semantic sift | ✅ ~500ms (PyTorch) | ✅ ~150ms (ONNX) |
| Python dependency | Required | None |
| Rust toolchain | Not required | Not required (pre-built) |
| Delivered via | PyPI wheel (includes pre-built sift-core) | Bundled in wheel; use `fetch_sift_core.py` for dev installs |
PyPI wheel (`pip install semantic-sift`): The pre-compiled sift-core binary is bundled, so no Rust toolchain is required.
Editable/dev install (`pip install -e .`): The Rust compile step is skipped. Run once to fetch the pre-built binary:
python scripts/fetch_sift_core.py
Optional [native] marker: For dependency management tools that need an explicit handle, pip install semantic-sift[native] is available as a no-op extra (the binary is always included in the wheel).
Quick Start
1. Installation
Option A: Quick Install (PyPI)
ℹ️ What you get: The PyPI wheel includes the pre-compiled `sift-core` Rust binary, so no Rust toolchain is required. The `[neural]` extra adds PyTorch (~1.5 GB) for large-payload fallback; `[multi-modal]` adds MarkItDown for PDF/DOCX/XLSX ingestion. Expect several minutes for the first install due to PyTorch download size.
uv venv
# Windows: .\.venv\Scripts\activate
# macOS/Linux: source .venv/bin/activate
uv pip install "semantic-sift[neural,multi-modal]"
Option B: Sovereign Pattern (Recommended)
Clone the repository to gain access to the native Rust sidecar source code and benchmarks:
⚠️ Rust Compiler Required: The Sovereign Pattern builds `sift-core` from source. You must have the Rust compiler installed (rustup.rs) before running the install command below. If you do not want to install Rust, use Option A (PyPI) instead.
git clone https://github.com/luismichio/semantic-sift.git
cd semantic-sift
# Use Python 3.12 for torch/CUDA compatibility
python3.12 -m venv venv312
# Windows:
.\venv312\Scripts\activate
# macOS/Linux:
# source venv312/bin/activate
uv pip install -e .[neural,multi-modal]
Windows Tip (`uv` environment discovery): If `uv` fails to find your environment (error: "No virtual environment found"), explicitly point to your interpreter: `uv pip install -e . --python venv312\Scripts\python.exe`
Note: If you are using Context-Pipe's Sovereign Dual-Repo Pattern, `semantic-sift` is cross-installed into `context-pipe/venv` instead (via `uv pip install -e ../semantic-sift`). The `venv312` above is only needed for the standalone ML runtime or running `server.py` directly.
Python Environment Guidance
Choosing the right Python path for your MCP configuration is critical for stability:
| Setup Type | Path Example | Pros | Cons |
|---|---|---|---|
| Dedicated Venv (Win) | .../semantic-sift/venv312/Scripts/python.exe | Isolated dependencies, no torch version conflicts. | Slightly more disk space. |
| Dedicated Venv (Mac/Linux) | .../semantic-sift/venv312/bin/python | Same isolation benefit on Unix. | Same. |
| Global Python | C:/Users/User/AppData/Local/.../python.exe | Shared libraries, fast setup. | High risk of version conflicts (e.g., transformers mismatches). |
Recommendation: Always use the Dedicated Venv path in your mcp_config.json to ensure the sifting kernel is isolated and reliable.
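A dedicated-venv entry might look like the following; the clone location `C:/dev/semantic-sift` is purely illustrative, so substitute your own path (and the `bin/` equivalent on macOS/Linux):

```json
{
  "mcpServers": {
    "semantic-sift": {
      "command": "C:/dev/semantic-sift/venv312/Scripts/semantic-sift"
    }
  }
}
```

Pointing `command` at the entry point inside the venv (rather than a global `semantic-sift`) is what guarantees the kernel runs against the isolated dependency set.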
Note on Orchestration: Semantic-Sift is an "Intelligence Kernel." For complex multi-tool workflows, we strongly recommend installing Context-Pipe, the universal switchboard that natively routes data to Semantic-Sift without blocking your IDE.
For development tools (mypy, pytest):
uv pip install -e .[dev]
Rust binary for editable installs: `pip install -e .` skips the Rust compile step, so `sift-core` won't be on your PATH. Instead of compiling from source, download the pre-built binary for your platform from the matching GitHub release in one command:
python scripts/fetch_sift_core.py
This places `sift-core[.exe]` directly into your active environment's `Scripts`/`bin` directory. Re-run it whenever you bump the version.
2. Connect the MCP
CRITICAL: For exact configuration paths for Cursor, Gemini, OpenCode, VS Code, and Claude, reference the Master Configuration Matrix.
3. Auto-Onboard
Once connected, ask your AI Assistant:
"Run `sift_onboard()` to configure this project."
Telemetry & Management Commands
Semantic-Sift operates invisibly, but you can always audit its performance and token savings without burning LLM tokens to do so.
- Terminal CLI:
  - Run `semantic-sift-stats` to print a global dashboard of your token savings, latency, and cache hits.
  - Run `semantic-sift-onboard` to manually initialize Sift in any project (supports `--env` and `--dry-run`).
- MCP Prompts: Compatible clients (Claude Desktop, Cursor, Zed) will surface a `sift_dashboard` prompt in their UI (often via a slash command or button) to instantly inject your telemetry stats into the chat.
- OpenCode & Gemini CLI: The `sift_onboard()` tool automatically injects native `/sift-stats` and `/sift-onboard` custom slash commands into your IDE configuration.
Native Rust Sidecar (Meechi & Desktop Apps)
For high-performance, local-first applications (like Meechi), Semantic-Sift provides a native Rust binary (sift-core). This sidecar is optimized for portability and speed, with zero Python dependencies.
The Hybrid Engine (semantic-sift-cli)
To provide the best of both worlds, installing the Python package also exposes the semantic-sift-cli command. This acts as an Intelligent Router:
- For short tasks (<30,000 chars), it instantly shells out to the low-latency Rust `sift-core` (ONNX).
- For massive batch tasks, it dynamically loads the high-throughput PyTorch framework with Flash Attention to prevent memory explosion.
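The routing rule can be sketched in a few lines. The 30,000-character threshold is the cut-off cited above, but the engine names returned here are illustrative labels, not real `semantic-sift-cli` API calls:

```python
THRESHOLD_CHARS = 30_000  # the hybrid router's documented cut-off

def route(payload: str) -> str:
    """Mirror the intelligent router's length rule.

    Return values are illustrative labels, not sift-core API names.
    """
    if len(payload) < THRESHOLD_CHARS:
        return "rust-onnx"      # short task: low-latency sift-core sidecar
    return "pytorch-batch"      # massive batch: high-throughput PyTorch path
```

The single length check keeps dispatch essentially free, so short interactive payloads never pay the PyTorch startup cost.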
Feature Comparison
| Feature | Python MCP Server | Rust Sift-Core (Sidecar) |
|---|---|---|
| Heuristic Log Sifting | ✅ | ✅ (Native) |
| Semantic Compression | ✅ (PyTorch) | ✅ (ONNX) |
| Multi-Modal Ingestion | ✅ (via `[multi-modal]`) | ❌ (Text Only) |
| Supported Formats | .pdf, .xlsx, .docx, .html, .txt | .txt, .log, .out (Text) |
| Startup Latency | 3-5 seconds | ~10ms |
| Binary Size | ~1.5GB (with models) | ~15MB |
Note: For native apps like Meechi, we recommend a Tiered Ingestion strategy: use the app's frontend (e.g., `pdf.js`) to extract text, then pipe it to the Rust sidecar for high-speed semantic sifting.
Usage:
# Sift a log file instantly
cat app.log | sift-core logs
# Perform neural semantic compression
cat document.txt | sift-core semantic --model ./models/llmlingua2 --rate 0.4
For AI Agents: Operational Protocol
If you are an AI Agent (e.g., Gemini CLI, Cursor, Claude Code) reading this repository, follow this protocol:
- Initialization: Call `sift_onboard()` to register background hooks. Use `sift_onboard(dry_run=True)` to preview all planned actions without writing any files.
- Context Advisory: Before reading large files (>1,000 chars), call `sift_analyze_file(path)` to determine the noise ratio.
- Mandatory Sifting: If noise > 15%, pipe data through `sift_logs` or `sift_chat` before inclusion in reasoning. For documents, use `sift_doc(text, rate=0.4)` and adjust `rate` (0.1-0.9) to trade compression depth against fidelity.
- Ranking: Use `sift_rank` to identify the most semantically relevant chunks for the user's prompt.
- Extraction: When distilling PDFs or scraped content, use `sift_extraction(content, show_diff=True)` to see exactly what was removed and verify faithfulness.
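Under the thresholds listed above (1,000 characters, 15% noise), an agent's per-file decision reduces to a small branch. This is only a sketch: the returned strings are step labels, not real tool invocations, and the "read directly" branch for small files is an assumption inferred from the advisory rule:

```python
def next_step(size_chars: int, noise_ratio: float) -> str:
    """Pick the next action per the protocol's documented thresholds."""
    if size_chars <= 1_000:
        return "read directly"   # small file: no context advisory needed
    if noise_ratio > 0.15:
        return "sift first"      # mandatory sifting via sift_logs / sift_doc
    return "rank chunks"         # clean file: use sift_rank for relevance
```

Agents that apply this branch before every read keep noisy payloads out of their reasoning context by construction.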
Security & Testing
Semantic-Sift is built on a Zero-Vulnerability Baseline:
- Pytest: 100% pass rate on heuristic integrity.
- Bandit (SAST): Automated static analysis for Python patterns.
- Pip-Audit (SCA): Real-time supply chain monitoring for 0 known vulnerabilities.
Privacy and telemetry controls:
- Set `SIFT_TELEMETRY_OPTED_IN=true` to enable telemetry (opt-in; disabled by default).
- Set `SIFT_TELEMETRY_DISABLED=true` (legacy kill-switch) to disable telemetry entirely.
- Set `SIFT_TELEMETRY_TTL_DAYS=90` (default) to control how many days of session history are retained in `.pipe_telemetry.json` before old entries are pruned.
- Set `SIFT_TELEMETRY_URL=https://your-endpoint` to route metadata pulses to your own endpoint.
- Set `SIFT_PULSE_RATE_LIMIT_S=10` (default) to control async telemetry pulse frequency.
Security controls:
- Set `SIFT_ALLOW_GLOBAL_READS=true` to permit `sift_read_file`/`sift_analyze_file` outside the workspace root (path traversal guard is on by default).
Performance controls:
- Set `SIFT_HOOK_TIMEOUT_MS=3000` to cap hook semantic latency before heuristic fallback.
- Set `SIFT_MODEL_READY_WAIT_MS=1200` to control semantic model warm-up wait time before returning heuristic-mode output.
- Set `SIFT_COMPACTION_FIDELITY_THRESHOLD=0.3` (default) to control the vocabulary-overlap threshold below which a low-fidelity compaction warning is emitted.
- Set `SIFT_RANK_TOP_N=3` (default) to set the server-wide default number of results returned by `sift_rank` when `top_n` is not passed explicitly.
Hook logging controls:
- Set `SIFT_LOG_FILE` to override the hook log path (default: `.gemini/sift_debug.log`).
- Set `SIFT_LOG_LEVEL` (`DEBUG`, `INFO`, `WARNING`, `ERROR`) to control hook log verbosity.
See SECURITY.md for our full security policy.
Telemetry schema and endpoint details are documented in doc/TELEMETRY_SPEC.md.
The Ecosystem (Studio of Two)
Semantic-Sift is a flagship member of the Studio of Two infrastructure. It is designed to work in high-fidelity harmony with:
- Context-Pipe: The universal switchboard for context engineering. While Sift provides the intelligence, Context-Pipe provides the orchestration. We highly recommend using Context-Pipe to chain Sift nodes with masking, search, and multi-modal ingestion tools.
Licensing
Semantic-Sift is licensed under the Apache License 2.0. See LICENSE.md for details.
Contributing
Semantic-Sift is Open Source, but Closed to Contributions.
To maintain the strict architectural vision of the "Studio of Two" and keep maintenance overhead at absolute zero, this repository does not accept external pull requests. We encourage you to use, embed, and fork the code under the permissive Apache 2.0 license, but please do not submit PRs for new features or bug fixes. See CONTRIBUTING.md for details.