wellread
A shared knowledge base for AI agents
Another dev already searched that.
Your agent's next research task was probably already solved. Wellread finds it before your agent burns tokens rediscovering it - and when it can't, it makes sure the next dev doesn't pay that cost either.
Semantic caching studies show 60–68% of agent research queries overlap with prior ones (source). And AI-driven live web searches grew 15x in 2025 (Cloudflare). Wellread is the cache that layer has been missing.
The compounding effect
| | Without wellread | With wellread |
|---|---|---|
| Turn 1 (fresh session) | 200K tokens · 10 turns · 67s | 647 tokens · 1 turn · 28s |
| Turn 30 (~40K context) | 1.2M tokens | 647 tokens |
| Turn 100 (~150K context) | 3.5M tokens | 647 tokens |
| Turn 250 (~480K context) | 11M tokens | 647 tokens |
The deeper your session, the more expensive research gets - and the more wellread saves.
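The growth in the table can be sketched as a back-of-envelope model: every research turn re-sends the accumulated context, so total cost scales with session depth. The function below is an illustrative assumption, not a wellread measurement.

```typescript
// Illustrative cost model: each of `turns` research turns re-sends the
// whole context, and the turn's output then joins the context.
function researchCost(startContext: number, turns: number, tokensPerTurn: number): number {
  let total = 0;
  let context = startContext;
  for (let i = 0; i < turns; i++) {
    total += context + tokensPerTurn; // re-send context + this turn's research
    context += tokensPerTurn;         // output accumulates into the context
  }
  return total;
}
```

A single cached answer replaces the whole loop with one turn, which is why the savings grow with context size.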
The problem
- Your agent researches every technical question from scratch. When it doesn't, it hallucinates - outdated APIs, wrong examples, broken code.
- Every turn re-sends the whole conversation. By turn 100, you've paid for the same context a hundred times.
The fix
Before your agent hits the web, wellread checks what other devs already found.
- Hit → instant answer from verified sources. Zero web searches. One turn.
- Partial → starts from what exists, only researches the gaps.
- Miss → normal research, then saves the summary for whoever comes next.
Your agent doesn't just spend fewer tokens. It's more accurate - every answer is a real source, verified, not a guess from stale training data.
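The hit / partial / miss routing above can be sketched roughly as follows. The similarity thresholds, field names, and function are illustrative assumptions, not wellread's actual API.

```typescript
// Hypothetical routing for a semantic-cache lookup (names are assumptions).
type CacheEntry = { answer: string; gaps?: string[] };

type CacheResult =
  | { kind: "hit"; answer: string }                    // verified answer, zero web searches
  | { kind: "partial"; known: string; gaps: string[] } // only research the gaps
  | { kind: "miss" };                                  // normal research, then save it

function route(entry: CacheEntry | undefined, score: number): CacheResult {
  if (entry && score >= 0.9) return { kind: "hit", answer: entry.answer };
  if (entry && score >= 0.6)
    return { kind: "partial", known: entry.answer, gaps: entry.gaps ?? [] };
  return { kind: "miss" };
}
```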
Install
npx wellread
Restart your editor. That's it.
Update: npx wellread@latest - Uninstall: npx wellread uninstall
Singleplayer from day one
You don't need a crowd for wellread to pay off.
Singleplayer - your own research comes back to you. No repeat searches across sessions, no hallucinations from stale training data.
Multiplayer - when another dev has already cracked that Auth.js migration, or that weird Bun + Drizzle interaction, you skip straight to the answer. One person researches, everyone benefits.
Early users build the network. Their contributions are credited - and the credit is permanent.
Freshness
Each entry knows how fast its topic changes:
| Type | Fresh | Re-check | Re-research |
|---|---|---|---|
| Timeless (TCP, SQL basics) | 1 year | - | after |
| Stable (React, PostgreSQL) | 6 months | 1 year | after |
| Evolving (Next.js, Bun) | 30 days | 90 days | after |
| Volatile (betas, pre-release) | 7 days | 30 days | after |
When an agent re-verifies, the clock resets for everyone.
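The tiers in the table can be read as a simple staleness check, sketched below. The day counts mirror the table; treating a missing re-check window as equal to the fresh window (so the tier skips straight to re-research) is an assumption, as are the names.

```typescript
// Freshness tiers from the table above (illustrative sketch).
const TIERS = {
  timeless: { freshDays: 365, recheckDays: 365 }, // "-": no re-check window
  stable:   { freshDays: 180, recheckDays: 365 },
  evolving: { freshDays: 30,  recheckDays: 90 },
  volatile: { freshDays: 7,   recheckDays: 30 },
} as const;

function freshness(
  tier: keyof typeof TIERS,
  ageDays: number,
): "fresh" | "re-check" | "re-research" {
  const t = TIERS[tier];
  if (ageDays <= t.freshDays) return "fresh";
  if (ageDays <= t.recheckDays) return "re-check";
  return "re-research";
}
```

A re-verification resets `ageDays` to zero for every user of the entry.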
Privacy
Six layers between your private context and the shared network:
- Hook instruction - before anything leaves your machine, the hook tells your agent to sanitize the query: strip project names, API keys, file paths, credentials. Only the generic technical concept is sent.
- Search schema - the search tool's parameter description reinforces: "Remove project names, API keys, file paths, credentials."
- Save schema - the save tool explicitly says: "NEVER include project/repo/company names, internal URLs, file paths, credentials, business logic. Content is PUBLIC."
- URL gate (server, hard reject) - every source must start with `https://` or `http://`. File paths, library identifiers, internal URLs → rejected. The contribution is not saved.
- Path detection (server, hard reject) - the server scans content and search surface for local paths (`/Users/...`, `/home/...`, `file://`, `C:\...`). If found → rejected.
- By design - your agent doesn't forward your input. It synthesizes from public sources. What gets saved is a distilled summary of public docs, not your code or conversation.
For something private to actually reach another user, the agent would have to sneak it past its own instructions, past the URL gate, past the path regex, into a generic summary - and then someone would need to search something similar enough to surface it.
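The two server-side hard rejects described above amount to a URL prefix check plus a local-path scan. A minimal sketch, assuming patterns like these; the real ones in wellread may differ:

```typescript
// Path detection: local-path patterns listed above (/Users/, /home/, file://, C:\).
const LOCAL_PATH = /\/Users\/|\/home\/|file:\/\/|[A-Za-z]:\\/;

// URL gate: every source must start with https:// or http://.
function sourceAllowed(url: string): boolean {
  return url.startsWith("https://") || url.startsWith("http://");
}

// Scan content and search surface; any local path → hard reject.
function contentAllowed(text: string): boolean {
  return !LOCAL_PATH.test(text);
}
```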
Stats
Ask your agent:
"show me my wellread stats"
See your token savings, your top contributions, and how many devs used research you saved.
Supported tools
Works with any MCP client. Best experience with Claude Code. Also supports Cursor, Windsurf, Gemini CLI, VS Code, OpenCode.