Project Synapse MCP Server

Autonomous Knowledge Synthesis Engine with LLM-WIKI Integration

Project Synapse is an MCP (Model Context Protocol) server that combines a Neo4j 2026.x graph database with an Obsidian Markdown wiki to create a persistent, compounding knowledge base. Raw text is processed through a semantic pipeline into interconnected graph nodes with vector embeddings, while a human-readable wiki layer provides browsable, interlinked Markdown pages.


What This Is (and Isn't)

This is a knowledge system, not a code editor. It's for the thinking, research, and writing that surrounds projects: architecture decisions, domain research, design rationale, reference material, meeting notes.

Code lives in its repo. Knowledge about the code lives here.

Use cases:

  • Research deep-dives that accumulate over weeks/months
  • Project knowledge bases (why decisions were made, not just what)
  • Personal knowledge management (articles, books, podcast notes)
  • Collaborative brainstorming with AI as the wiki maintainer

Per-project setup: Create a separate Obsidian vault + GitHub repo for each project. Point the WIKI_VAULT_PATH env var at it. One Neo4j instance can serve multiple projects (graphs coexist).

Architecture

Web / Raw Sources
         │
    [defuddle]          ← cleans web content before ingestion
         │
         ▼
┌──────────────────────┐     ┌─────────────────────┐
│  Semantic Pipeline   │────▶│  Neo4j Knowledge    │
│  (Montague Grammar,  │     │  Graph (entities,   │
│   NLP, embeddings)   │     │  facts, vectors)    │
└──────────────────────┘     └─────────┬───────────┘
                                       │
                               ┌───────┴───────┐
                               │ Wiki Adapter  │
                               └───────┬───────┘
                                       │
                               ┌───────▼───────┐
                               │ Obsidian Vault│
                               │ (Markdown,    │
                               │  Git-synced)  │
                               └───────────────┘

Key Features

Knowledge Graph (Neo4j 2026.x)

  • Native VECTOR type with ANN semantic search
  • Fulltext BM25 indexes for keyword search
  • Hybrid search (vector + BM25 score fusion)
  • Graph traversal for discovering hidden relationships
  • Montague Grammar parser for formal semantic analysis
  • Zettelkasten engine for autonomous insight generation
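
Hybrid search merges two ranked result lists, one from the vector index and one from BM25. One common fusion technique is reciprocal rank fusion (RRF); the sketch below is an illustration of that idea, not necessarily the exact fusion Synapse uses, and the function name is hypothetical.

```python
def rrf_fuse(vector_hits, bm25_hits, k=60):
    """Merge two ranked result lists with reciprocal rank fusion.

    Each argument is a list of document ids ordered best-first.
    RRF scores each id as sum(1 / (k + rank)) over the lists it
    appears in, so items ranked well by either retriever surface.
    """
    scores = {}
    for hits in (vector_hits, bm25_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks high in both lists, so it wins the fused ranking
fused = rrf_fuse(["a", "b", "c"], ["b", "d", "a"])
```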

LLM-WIKI Integration

  • Bridges Obsidian Markdown vault with the Neo4j graph
  • Full page CRUD with YAML frontmatter
  • Automatic index generation and append-only log
  • Health checks: orphan detection, broken wikilinks, missing frontmatter
  • Delta-sync manifest (content hashing) for efficient graph sync
  • Based on Andrej Karpathy's LLM Wiki pattern
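
The delta-sync manifest above hashes page content so only changed pages hit the graph. A minimal sketch of that idea, assuming a plain per-file SHA-256 manifest (the function names and manifest shape are illustrative, not Synapse's actual format):

```python
import hashlib
from pathlib import Path

def build_manifest(vault: Path) -> dict:
    """Map each Markdown page to a SHA-256 of its content."""
    return {
        str(p.relative_to(vault)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(vault.rglob("*.md"))
    }

def diff_manifest(old: dict, new: dict) -> dict:
    """Return which pages need re-sync: added, changed, deleted."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "changed": sorted(k for k in new.keys() & old.keys() if new[k] != old[k]),
        "deleted": sorted(old.keys() - new.keys()),
    }
```

Comparing the manifest from the last sync against a fresh one yields exactly the pages to push to Neo4j, instead of re-embedding the whole vault.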

Web Content Ingestion (defuddle)

  • wiki_fetch_url fetches any URL, strips navigation/ads/clutter via defuddle, ingests into Neo4j, and archives to Clippings/ in a single automated call
  • wiki_ingest_raw auto-moves processed files from raw/ to Clippings/, so the inbox stays clean
  • raw/ is a true inbox: empty after every session

Local-Only Embeddings (No Paid APIs)

  • sentence-transformers (default) runs locally on CPU or GPU
  • Ollama (optional) serves any local embedding model
  • All vectors stored natively in Neo4j via db.create.setNodeVectorProperty()
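
To illustrate the last bullet, this is roughly how a vector property and its index are set up in recent Neo4j versions; the label and property names (Chunk, embedding) are assumptions for the example, not necessarily Synapse's schema:

```cypher
// Illustrative only - label and property names are assumptions.
CREATE VECTOR INDEX chunk_embeddings IF NOT EXISTS
FOR (c:Chunk) ON (c.embedding)
OPTIONS {indexConfig: {
  `vector.dimensions`: 768,
  `vector.similarity_function`: 'cosine'
}};

MATCH (c:Chunk {id: $chunk_id})
CALL db.create.setNodeVectorProperty(c, 'embedding', $vector);
```

Writing vectors through the procedure (rather than a plain SET) stores them in Neo4j's native VECTOR representation, which the ANN index reads directly.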

Quick Start

Prerequisites

  • Python 3.12+
  • Neo4j 2026.x (Community or Enterprise)
  • uv package manager (pip install uv)
  • Obsidian with the Git community plugin
  • A GitHub repo for the wiki vault (can be private)
  • Node.js + defuddle (for web content fetching; see below)

Neo4j Setup

# Ubuntu/Debian - see neo4j.com for other platforms
sudo apt install neo4j
# Set the initial password before the first start (default user: neo4j)
sudo neo4j-admin dbms set-initial-password your_password
sudo systemctl start neo4j
sudo systemctl enable neo4j

defuddle Setup

defuddle extracts clean markdown from web pages, stripping navigation, ads, and boilerplate before ingestion. Required for wiki_fetch_url.

# Install Node.js if not present (via nvm recommended)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install --lts
nvm use --lts

# Install defuddle globally
npm install -g defuddle

# Verify
defuddle --version

Note: Synapse finds defuddle automatically via nvm paths even if it's not on your shell's PATH. If wiki_fetch_url reports defuddle not found, ensure it's installed in an nvm-managed Node version.

Obsidian Vault Setup

  1. Create a new vault in Obsidian (or clone your wiki repo)
  2. Install the Git community plugin (Settings → Community Plugins → Browse → "Git")
  3. Configure Git plugin with your GitHub credentials
  4. The vault structure (raw/, wiki/, Clippings/, AGENTS.md) is created automatically by Synapse on first run

Installation

cd /path/to/your/workspace
git clone <repository-url> project-synapse-mcp
cd project-synapse-mcp
uv venv --python 3.12 --seed
source .venv/bin/activate
uv pip install -e .
uv run python -m spacy download en_core_web_sm
cp .env.example .env  # edit with your Neo4j password and vault path

Configuration

Edit .env:

NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_password
NEO4J_DATABASE=neo4j

# Embedding β€” local only, no paid APIs
EMBEDDING_PROVIDER=sentence-transformers  # or "ollama"
EMBEDDING_MODEL=sentence-transformers/all-mpnet-base-v2
EMBEDDING_DIMENSION=768

# Wiki vault
WIKI_VAULT_PATH=/path/to/your/obsidian-vault
WIKI_GITHUB_REPO=https://github.com/user/wiki-repo
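
A minimal sketch of validating these settings at startup, so a missing Neo4j password or vault path fails fast instead of mid-ingest. The helper itself (check_config) is illustrative; only the variable names come from the .env above:

```python
import os

# Settings the server cannot run without (names from .env above)
REQUIRED_VARS = (
    "NEO4J_URI",
    "NEO4J_USER",
    "NEO4J_PASSWORD",
    "WIKI_VAULT_PATH",
)

def check_config(env=None) -> list:
    """Return the names of required settings that are missing or empty."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Only the URI is set, so the other three are reported missing
missing = check_config({"NEO4J_URI": "bolt://localhost:7687"})
```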

Claude Desktop / MCP Integration

Add to your MCP config:

{
  "mcpServers": {
    "project-synapse": {
      "command": "uv",
      "args": [
        "--directory", "/path/to/project-synapse-mcp",
        "run", "python", "-m", "synapse_mcp.server"
      ],
      "env": {
        "NEO4J_URI": "bolt://localhost:7687",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "your_password",
        "NEO4J_DATABASE": "neo4j",
        "WIKI_VAULT_PATH": "/path/to/obsidian-vault",
        "WIKI_GITHUB_REPO": "https://github.com/user/wiki-repo"
      }
    }
  }
}

MCP Tools

Knowledge Graph

| Tool | Description |
| --- | --- |
| ingest_text | Process text through semantic pipeline → Neo4j |
| query_knowledge | Vector semantic search with insight-first results |
| explore_connections | Graph traversal for hidden relationships |
| generate_insights | Autonomous Zettelkasten pattern detection |
| analyze_semantic_structure | Montague Grammar semantic analysis |

Wiki (LLM-WIKI)

| Tool | Description |
| --- | --- |
| wiki_fetch_url | Fetch URL → defuddle clean → ingest → archive to Clippings/ |
| wiki_ingest_raw | Ingest file from raw/ → Neo4j + auto-move to Clippings/ |
| wiki_write_page | Create/update wiki page with frontmatter |
| wiki_read_page | Read a wiki page by path |
| wiki_search | Keyword search across wiki pages |
| wiki_list_pages | List all pages in a subdirectory |
| wiki_update_index | Rebuild the wiki index |
| wiki_lint | Health check: orphans, broken links, missing frontmatter |
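
Wiki pages carry YAML frontmatter, so a page written by wiki_write_page looks roughly like the output of this sketch. The field names (title, tags) are assumptions for illustration; the vault's AGENTS.md defines the actual schema:

```python
def render_page(title: str, tags: list, body: str) -> str:
    """Compose a Markdown page with a minimal YAML frontmatter block.

    Field names are illustrative, not Synapse's exact schema.
    """
    tag_list = ", ".join(tags)
    return (
        "---\n"
        f"title: {title}\n"
        f"tags: [{tag_list}]\n"
        "---\n\n"
        f"{body}\n"
    )

page = render_page(
    "Graph Databases",
    ["concepts", "neo4j"],
    "Neo4j stores data as nodes and relationships.",
)
```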

Wiki Vault Structure

LLM-WIKI/
├── AGENTS.md           # Agent schema doc - conventions and workflows
├── raw/                # INBOX ONLY - unprocessed files; empty after each session
├── raw-inbox.base      # Obsidian Base view of pending raw/ queue
├── Clippings/          # Permanent archive - all processed sources land here
└── wiki/
    ├── index.md        # Auto-generated page catalogue
    ├── log.md          # Append-only activity log
    ├── entities/       # People, tools, projects
    ├── concepts/       # Ideas, theories, patterns
    └── sources/        # Summaries of ingested sources

Content Lifecycle

You clip/save → raw/           # your inbox
     or
Agent fetches → wiki_fetch_url # web research
                    │
              [defuddle clean]
                    │
              [semantic pipeline] → Neo4j
                    │
              wiki_write_page → wiki/sources/
                    │
              auto-move → Clippings/    # permanent archive

raw/ is always empty after a session. Clippings/ is the permanent record of everything that's been processed. Source pages in wiki/sources/ reference the original URL, not the file path.

Workflow

  1. Web research: wiki_fetch_url(url) → fetches, cleans, ingests, archives in one call
  2. Manual clip: Drop into raw/, call wiki_ingest_raw(filename) → auto-archives after ingest
  3. Query: query_knowledge (graph) or wiki_search (files) → synthesize answer
  4. Lint: wiki_lint → fix orphans, broken links, stale claims
  5. Rollback: Git handles version control via the Obsidian Git plugin
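
The lint step reports broken wikilinks, among other checks. A simplified sketch of one such check, resolving [[Target]] links against existing page names (it ignores aliases and heading anchors, and the function is an illustration, not Synapse's implementation):

```python
import re
from pathlib import Path

# Captures the target of [[Target]] / [[Target|alias]] / [[Target#heading]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def broken_wikilinks(vault: Path) -> dict:
    """Map each page to its wikilink targets that match no page name.

    Simplified: compares targets against Markdown file stems only.
    """
    stems = {p.stem for p in vault.rglob("*.md")}
    report = {}
    for page in vault.rglob("*.md"):
        targets = {t.strip() for t in WIKILINK.findall(page.read_text())}
        missing = sorted(targets - stems)
        if missing:
            report[str(page.relative_to(vault))] = missing
    return report
```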

Theoretical Foundation

  • Montague Grammar: Formal compositional semantics for meaning extraction
  • Zettelkasten Method: Atomic linked notes with emergent structure
  • Graph Theory: Community detection, centrality, path analysis
  • Karpathy LLM-WIKI: Persistent knowledge compilation vs stateless RAG
  • Vannevar Bush's Memex: Private associative knowledge with maintained trails

License

MIT - see LICENSE.


Project Synapse: From reactive RAG to persistent, compounding knowledge.
