Semantic-Sift

A reasoning-first MCP middleware that uses heuristics and neural BERT models to distill context and eliminate noise

πŸ” Semantic-Sift

The Reasoning-First Middleware for High-Fidelity Agentic Workflows.


"It saves tokens while preserving context - maximizing reasoning, minimizing hallucination."

Semantic-Sift is a local Model Context Protocol (MCP) server that acts as an intelligent "Sanitation Tier" between your raw data and your AI’s context window.

While modern LLMs have massive context windows, their reasoning accuracy often degrades as noise increases. Semantic-Sift solves this by distilling technical logs, long-form documents, and chat histories into high-density context. It treats your context window as a precious resourceβ€”optimizing for Signal-to-Noise Ratio (SNR) so your models spend more time reasoning and less time navigating boilerplate.

🧠 Philosophy: The Studio of Two

Semantic-Sift is grounded in the Studio of Two philosophy: the belief that the future of engineering is a high-fidelity partnership between a human architect and a sovereign AI sidecar. By managing the friction of raw data ingestion, Sift allows this "Studio" to focus on building systems, not just applying patches. It acts as a cognitive filter that ensures both you and your agent are collaborating on the cleanest, most relevant representation of the technical truth.


⚑ Quickstart (60 seconds)

# 1. Install
pip install "semantic-sift[neural]"

# 2. Onboard your project (writes IDE hooks and opencode.json)
semantic-sift-onboard   # or: ask your AI "Run sift_onboard()"

# 3. Add to your MCP config (example: Cursor / Claude Desktop)
# { "mcpServers": { "semantic-sift": { "command": "semantic-sift" } } }

# 4. Warm up the model (optional β€” avoids first-call latency)
# Ask your AI: "Run sift_warmup()"

Full setup guide (venv layout, IDE config matrix, Sovereign Pattern): doc/INTEGRATION_ENCYCLOPEDIA.md


πŸ›οΈ Multidisciplinary Value

Semantic-Sift is a strategic layer designed to manage attention across four key professional personas:

  • For the Senior Engineer: A local-first, low-latency middleware using a dual-engine approach (Heuristic Sieve + Neural Reranker). It strips timestamps, repetitive boilerplate, and redundant JSON before they hit the wire, reducing latency and preventing "Lost in the Middle" reasoning failures.
  • For the Project Manager: "Context Insurance." By reducing token overhead by 30-70%, Sift provides direct ROI on API costs and reduces the "retry loop" caused by model hallucinations in messy data environments.
  • For the Researcher: Data integrity at scale. Supports MarkItDown (via the [multi-modal] optional extra) to convert complex .pdf, .docx, and .xlsx into structured, distilled Markdown, allowing for the rapid synthesis of massive technical repositories without losing critical semantic anchors.
  • For the Knowledge Partner: Cognitive Load Management. Sift manages the friction of raw data ingestion, allowing the human-AI partnership to focus on high-level strategy and architectural decisions rather than manual data triage.

πŸ’° Value Engineering: Operational vs. Economic ROI

Semantic-Sift provides value on two layers. While the economic benefits depend on your billing plan, the operational benefits apply to every professional workflow.

1. The Economic ROI (Direct Savings)

Target: Users on Per-Token API plans (GPT-4o, Claude 3.5).

  • Wallet Protection: Sift acts as a local filter, typically reducing outgoing token volume by 30-70%.
  • Compound Interest: In iterative agentic loops, these savings compound rapidly. Every character pruned is money that stays in your budget.
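The compounding effect is easy to quantify. Here is a back-of-the-envelope sketch; the price, per-turn token count, and turn count below are illustrative assumptions, and the 50% reduction is simply the midpoint of the 30-70% range claimed above:

```python
# Illustrative cost model for an iterative agentic loop.
# All numbers are assumptions for demonstration, not measured figures.
PRICE_PER_1K_TOKENS = 0.01   # hypothetical API price (USD)
TOKENS_PER_TURN = 8_000      # raw context sent each loop iteration
REDUCTION = 0.5              # midpoint of the claimed 30-70% pruning
TURNS = 20                   # loop iterations in one agentic session

raw_cost = TURNS * TOKENS_PER_TURN / 1000 * PRICE_PER_1K_TOKENS
sifted_cost = raw_cost * (1 - REDUCTION)
print(f"raw: ${raw_cost:.2f}, sifted: ${sifted_cost:.2f}, "
      f"saved: ${raw_cost - sifted_cost:.2f}")
# raw: $1.60, sifted: $0.80, saved: $0.80
```

Even at modest per-turn volumes, the savings scale linearly with loop depth, which is why iterative agents benefit the most.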

2. The Operational ROI (Quality & Performance)

Target: EVERYONE (including "Unlimited" or Per-Request subscription users).

  • Attention Precision: Even with "infinite" context, LLMs suffer from "Lost in the Middle" syndrome. By removing noise, you ensure the model's full reasoning power is focused on the technical signal, resulting in higher-quality code and fewer hallucinations.
  • Latency Reduction: Smaller prompts = Faster "Time to First Token" (TTFT). You spend less time waiting for the "cloud" to process boilerplate and more time in your flow state.
  • Context Insurance: Prevents "Context length exceeded" errors on complex tasks. Sift ensures that 100% of your model's limit is filled with information, not formatting.

πŸ“š Master Documentation Index

All technical details, architectural logic, and integration guides are strictly maintained in the doc/ directory to prevent data loss through summarization.

  • doc/INDEX.md: The navigational roadmap and source of truth for the documentation structure.
  • doc/ARCHITECTURE.md: Specifications of the Sift Hook Interceptor, the Distillation Kernel (Heuristic/Semantic/Ranking engines), and Caching.
  • doc/TOOL_REFERENCE.md: Exhaustive operator's manual for all FastMCP tools (e.g., sift_read_file, sift_logs, sift_chat, sift_rank).
  • doc/INTEGRATION_ENCYCLOPEDIA.md: Master Compatibility Map, Hook Injector logic, Payload Structures, and the Master Configuration Matrix for connecting IDEs (Cursor, Gemini, VS Code, OpenCode, etc.).
  • doc/TELEMETRY_SPEC.md: Design of the OpenTelemetry tracing, Echo-Detector (Double-Sifting Prevention), Audit Headers, and Privacy controls.
  • doc/ORCHESTRATION_BLUEPRINTS.md: Actionable workflows for AI agents, including File Ingestion decision trees, Multi-Document RAG, and History Compaction.

🎯 High-Impact Use Cases

πŸ“š The Knowledge Hunter (Researchers & Architects)

  • The Pain: Reading 50-page PDFs, complex Word specs, or cluttered documentation sites.
  • The Sift: Supports MarkItDown via the [multi-modal] optional extra to natively ingest .pdf, .docx, and .xlsx. It converts corporate "noise" into structured Markdown, allowing your agent to synthesize multiple 14MB documents in a single turn.

πŸ› οΈ The Log Hunter (DevOps & SREs)

  • The Pain: Finding a single error in 100,000 lines of technical logs.
  • The Sift: The Heuristic Sieve strips timestamps and boilerplate in milliseconds. The Subconscious Hook automatically reranks results, so your agent only sees the most relevant data blocks.
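The kind of heuristic pass described here can be illustrated in a few lines of Python. This is a simplified sketch of the idea (timestamp stripping plus duplicate collapsing), not the actual sift-core implementation:

```python
import re

# Illustrative heuristic sieve: strip ISO-8601-style timestamps and
# collapse consecutive duplicate lines. The real engine is native Rust.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?\s*")

def sieve(log_text: str) -> str:
    out, prev = [], None
    for line in log_text.splitlines():
        line = TIMESTAMP.sub("", line).rstrip()
        if line and line != prev:  # drop blanks and exact repeats
            out.append(line)
        prev = line
    return "\n".join(out)

raw = ("2024-05-01 12:00:01 INFO heartbeat\n"
       "2024-05-01 12:00:02 INFO heartbeat\n"
       "2024-05-01 12:00:03 ERROR disk full")
print(sieve(raw))  # the repeated heartbeat collapses; the error survives
```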

🧠 The Context Strategist (AI Engineers)

  • The Pain: LLM hallucination and reasoning degradation caused by messy data streams.
  • The Sift: By delivering high-density context with 95% of the meaning preserved, Sift acts as a Cognitive Bridge. It ensures your LLM's attention is focused exclusively on the signal.

⚑ Performance Tiers

Semantic-Sift ships in two performance tiers. Choose based on your use case:

| | Python MCP Server (pip install semantic-sift) | Rust CLI Sidecar (sift-core) |
|---|---|---|
| Heuristic log sifting | βœ… ~500ms | βœ… <1ms (native) |
| Neural semantic sift | βœ… ~500ms (PyTorch) | βœ… ~150ms (ONNX) |
| Python dependency | Required | None |
| Rust toolchain | Not required | Not required (pre-built) |
| Delivered via | PyPI wheel (includes pre-built sift-core) | Bundled in wheel; use fetch_sift_core.py for dev installs |

PyPI wheel (pip install semantic-sift): The pre-compiled sift-core binary is bundled β€” no Rust toolchain required.

Editable/dev install (pip install -e .): The Rust compile step is skipped. Run once to fetch the pre-built binary:

python scripts/fetch_sift_core.py

Optional [native] marker: For dependency management tools that need an explicit handle, pip install semantic-sift[native] is available as a no-op extra (the binary is always included in the wheel).


πŸš€ Quick Start

1. Installation

Option A: Quick Install (PyPI)

ℹ️ What you get: The PyPI wheel includes the pre-compiled sift-core Rust binary β€” no Rust toolchain required. The [neural] extra adds PyTorch (~1.5 GB) for large-payload fallback; [multi-modal] adds MarkItDown for PDF/DOCX/XLSX ingestion. Expect several minutes for the first install due to PyTorch download size.

uv venv
# Windows: .\.venv\Scripts\activate
# macOS/Linux: source .venv/bin/activate
uv pip install semantic-sift[neural,multi-modal]

Option B: Sovereign Pattern (Recommended)

Clone the repository to gain access to the native Rust sidecar source code and benchmarks:

⚠️ Rust Compiler Required: The Sovereign Pattern builds sift-core from source. You must have the Rust compiler installed (rustup.rs) before running the install command below. If you do not want to install Rust, use Option A (PyPI) instead.

git clone https://github.com/luismichio/semantic-sift.git
cd semantic-sift
# Use Python 3.12 for torch/CUDA compatibility
python3.12 -m venv venv312
# Windows:
.\venv312\Scripts\activate
# macOS/Linux:
# source venv312/bin/activate
uv pip install -e .[neural,multi-modal]

Windows Tip (uv environment discovery): If uv fails to find your environment (error: "No virtual environment found"), explicitly point to your interpreter: uv pip install -e . --python venv312\Scripts\python.exe

Note: If you are using Context-Pipe's Sovereign Dual-Repo Pattern, semantic-sift is cross-installed into context-pipe/venv instead (via uv pip install -e ../semantic-sift). The venv312 above is only needed for the standalone ML runtime or running server.py directly.

🐍 Python Environment Guidance

Choosing the right Python path for your MCP configuration is critical for stability:

| Setup Type | Path Example | Pros | Cons |
|---|---|---|---|
| Dedicated Venv (Win) | .../semantic-sift/venv312/Scripts/python.exe | Isolated dependencies, no torch version conflicts. | Slightly more disk space. |
| Dedicated Venv (Mac/Linux) | .../semantic-sift/venv312/bin/python | Same isolation benefit on Unix. | Same. |
| Global Python | C:/Users/User/AppData/Local/.../python.exe | Shared libraries, fast setup. | High risk of version conflicts (e.g., transformers mismatches). |

Recommendation: Always use the Dedicated Venv path in your mcp_config.json to ensure the sifting kernel is isolated and reliable.

Note on Orchestration: Semantic-Sift is an "Intelligence Kernel." For complex multi-tool workflows, we strongly recommend installing Context-Pipe, the universal switchboard that natively routes data to Semantic-Sift without blocking your IDE.

For development tools (mypy, pytest):

uv pip install -e .[dev]

Rust binary for editable installs: pip install -e . skips the Rust compile step, so sift-core won't be on your PATH. Instead of compiling from source, download the pre-built binary for your platform from the matching GitHub release in one command:

python scripts/fetch_sift_core.py

This places sift-core[.exe] directly into your active environment's Scripts/bin directory. Re-run it whenever you bump the version.

2. Connect the MCP

CRITICAL: For exact configuration paths for Cursor, Gemini, OpenCode, VS Code, and Claude, reference the Master Configuration Matrix.

3. Auto-Onboard

Once connected, ask your AI Assistant:

"Run sift_onboard() to configure this project."


πŸ“Š Telemetry & Management Commands

Semantic-Sift operates invisibly, but you can always audit its performance and token savings without burning LLM tokens to do so.

  • Terminal CLI:
    • Run semantic-sift-stats to print a global dashboard of your token savings, latency, and cache hits.
    • Run semantic-sift-onboard to manually initialize Sift in any project (supports --env and --dry-run).
  • MCP Prompts: Compatible clients (Claude Desktop, Cursor, Zed) will surface a sift_dashboard prompt in their UI (often via a slash command or button) to instantly inject your telemetry stats into the chat.
  • OpenCode & Gemini CLI: The sift_onboard() tool automatically injects native /sift-stats and /sift-onboard custom slash commands into your IDE configuration.

πŸ¦€ Native Rust Sidecar (Meechi & Desktop Apps)

For high-performance, local-first applications (like Meechi), Semantic-Sift provides a native Rust binary (sift-core). This sidecar is optimized for portability and speed, with zero Python dependencies.

πŸ”€ The Hybrid Engine (semantic-sift-cli)

To provide the best of both worlds, installing the Python package also exposes the semantic-sift-cli command. This acts as an Intelligent Router:

  • For short tasks (<30,000 chars), it instantly shells out to the low-latency Rust sift-core (ONNX).
  • For massive batch tasks, it dynamically loads the high-throughput PyTorch framework with Flash Attention to prevent memory explosion.
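The routing decision can be sketched roughly as follows. The threshold comes from the text above, but the function shape, the injectable runners, and the `neural_sift` import are illustrative assumptions, not the actual semantic-sift-cli internals:

```python
import shutil
import subprocess

SHORT_TASK_LIMIT = 30_000  # chars; the threshold stated above

def route_sift(text: str, rust_runner=None, torch_runner=None) -> str:
    """Illustrative router: Rust sidecar for short inputs, PyTorch for large ones.

    The runner arguments exist only so the routing logic can be exercised in
    isolation; by default the short path shells out to the bundled binary.
    """
    if len(text) < SHORT_TASK_LIMIT and (rust_runner or shutil.which("sift-core")):
        if rust_runner:
            return rust_runner(text)
        # Low-latency path: shell out to the native ONNX binary.
        proc = subprocess.run(["sift-core", "logs"], input=text,
                              capture_output=True, text=True, check=True)
        return proc.stdout
    if torch_runner:
        return torch_runner(text)
    # High-throughput path: lazily import the heavy PyTorch engine.
    # `neural_sift` is a hypothetical name, not a documented API.
    from semantic_sift import neural_sift  # hypothetical import
    return neural_sift(text)
```

Lazy-importing the PyTorch path is the key design choice: the common short-input case never pays the multi-second framework startup cost.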

Feature Comparison

| Feature | Python MCP Server | Rust Sift-Core (Sidecar) |
|---|---|---|
| Heuristic Log Sifting | βœ… | βœ… (Native) |
| Semantic Compression | βœ… (PyTorch) | βœ… (ONNX) |
| Multi-Modal Ingestion | βœ… (via [multi-modal]) | ❌ (Text Only) |
| Supported Formats | .pdf, .xlsx, .docx, .html, .txt | .txt, .log, .out (Text) |
| Startup Latency | 3-5 seconds | ~10ms |
| Binary Size | ~1.5GB (with models) | ~15MB |

Note: For native apps like Meechi, we recommend a Tiered Ingestion strategy: use the app's frontend (e.g., pdf.js) to extract text, then pipe it to the Rust sidecar for high-speed semantic sifting.

✨ View Native Sidecar Demo

Usage:

# Sift a log file instantly
cat app.log | sift-core logs

# Perform neural semantic compression
cat document.txt | sift-core semantic --model ./models/llmlingua2 --rate 0.4

πŸ€– For AI Agents: Operational Protocol

If you are an AI Agent (e.g., Gemini CLI, Cursor, Claude Code) reading this repository, follow this protocol:

  1. Initialization: Call sift_onboard() to register background hooks. Use sift_onboard(dry_run=True) to preview all planned actions without writing any files.
  2. Context Advisory: Before reading large files (>1,000 chars), call sift_analyze_file(path) to determine the noise ratio.
  3. Mandatory Sifting: If noise > 15%, pipe data through sift_logs or sift_chat before inclusion in reasoning. For documents, use sift_doc(text, rate=0.4) β€” adjust rate (0.1–0.9) to trade compression depth against fidelity.
  4. Ranking: Use sift_rank to identify the most semantically relevant chunks for the user's prompt.
  5. Extraction: When distilling PDFs or scraped content, use sift_extraction(content, show_diff=True) to see exactly what was removed and verify faithfulness.
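Steps 2-4 of this protocol can be sketched as a single agent-side decision function. The tool names and the 15% threshold come from the steps above; the generic `call` callable and the `noise_ratio` response field are assumptions, since the concrete invocation API depends on your agent framework:

```python
def ingest_with_sift(call, path: str, prompt: str):
    """Illustrative agent-side flow for the protocol above.

    `call(tool_name, **kwargs)` is a stand-in for however your agent
    framework invokes MCP tools; it is not a semantic-sift API.
    """
    advisory = call("sift_analyze_file", path=path)   # step 2: noise check
    if advisory["noise_ratio"] > 0.15:                # step 3: mandatory sifting
        content = call("sift_logs", path=path)
    else:
        content = call("sift_read_file", path=path)
    # step 4: keep only the chunks relevant to the user's prompt
    return call("sift_rank", text=content, query=prompt)
```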

πŸ›‘οΈ Security & Testing

Semantic-Sift is built on a Zero-Vulnerability Baseline:

  • Pytest: 100% pass rate on heuristic integrity.
  • Bandit (SAST): Automated static analysis for Python patterns.
  • Pip-Audit (SCA): Real-time supply chain monitoring for 0 known vulnerabilities.

Privacy and telemetry controls:

  • Set SIFT_TELEMETRY_OPTED_IN=true to enable telemetry (opt-in; disabled by default).
  • Set SIFT_TELEMETRY_DISABLED=true (legacy kill-switch) to disable telemetry entirely.
  • Set SIFT_TELEMETRY_TTL_DAYS=90 (default) to control how many days of session history are retained in .pipe_telemetry.json before old entries are pruned.
  • Set SIFT_TELEMETRY_URL=https://your-endpoint to route metadata pulses to your own endpoint.
  • Set SIFT_PULSE_RATE_LIMIT_S=10 (default) to control async telemetry pulse frequency.

Security controls:

  • Set SIFT_ALLOW_GLOBAL_READS=true to permit sift_read_file / sift_analyze_file outside the workspace root (path traversal guard is on by default).

Performance controls:

  • Set SIFT_HOOK_TIMEOUT_MS=3000 to cap hook semantic latency before heuristic fallback.
  • Set SIFT_MODEL_READY_WAIT_MS=1200 to control semantic model warm-up wait time before returning heuristic-mode output.
  • Set SIFT_COMPACTION_FIDELITY_THRESHOLD=0.3 (default) to control the vocabulary-overlap threshold below which a low-fidelity compaction warning is emitted.
  • Set SIFT_RANK_TOP_N=3 (default) to set the server-wide default number of results returned by sift_rank when top_n is not passed explicitly.

Hook logging controls:

  • Set SIFT_LOG_FILE to override the hook log path (default: .gemini/sift_debug.log).
  • Set SIFT_LOG_LEVEL (DEBUG, INFO, WARNING, ERROR) to control hook log verbosity.

See SECURITY.md for our full security policy.

Telemetry schema and endpoint details are documented in doc/TELEMETRY_SPEC.md.


πŸ”— The Ecosystem (Studio of Two)

Semantic-Sift is a flagship member of the Studio of Two infrastructure. It is designed to work in high-fidelity harmony with:

  • Context-Pipe: The universal switchboard for context engineering. While Sift provides the intelligence, Context-Pipe provides the orchestration. We highly recommend using Context-Pipe to chain Sift nodes with masking, search, and multi-modal ingestion tools.

βš–οΈ Licensing

Semantic-Sift is licensed under the Apache License 2.0. See LICENSE.md for details.

🀝 Contributing

Semantic-Sift is Open Source, but Closed to Contributions.

To maintain the strict architectural vision of the "Studio of Two" and keep maintenance overhead at absolute zero, this repository does not accept external pull requests. We encourage you to use, embed, and fork the code under the permissive Apache 2.0 license, but please do not submit PRs for new features or bug fixes. See CONTRIBUTING.md for details.
