Kontxt MCP Server
A Model Context Protocol (MCP) server that tries to solve codebase indexing (until agents can).
Features
- Connects to a user-specified local code repository.
- Provides the get_codebase_context tool for AI clients (like Cursor, Claude Desktop).
- Uses Gemini 2.0 Flash's 1M-token input window internally to analyze the codebase and generate context based on the user's client query.
- Flash itself can use internal tools (list_repository_structure, read_files, grep_codebase) to understand the code.
- Supports both SSE (recommended) and stdio transport protocols.
- Supports user-attached files/docs/context from client's queries for more targeted analysis.
- Tracks token usage and provides detailed analysis of API consumption.
- User-configurable token limit for context generation (options: 500k, 800k, or 1M tokens; default: 800k).
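For a concrete picture of what a client does with the tool, here is a minimal Python sketch using the official mcp SDK over SSE; the "query" argument name is an assumption based on the tool's description, not confirmed by this README:

# Sketch: call get_codebase_context from a Python MCP client over SSE.
# Assumes the server is running on the default host/port.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    async with sse_client("http://127.0.0.1:8080/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_codebase_context",
                {"query": "How does the authentication system work?"},  # assumed arg name
            )
            print(result.content)

asyncio.run(main())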
Setup
- Clone/Download: Get the server code.
- Create Environment:
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install Dependencies:
  pip install -r requirements.txt
- Install tree: Ensure the tree command is available on your system.
  - macOS: brew install tree
  - Debian/Ubuntu: sudo apt update && sudo apt install tree
  - Windows: Requires installing a port or using WSL.
- Configure API Key:
  - Copy .env.example to .env.
  - Edit .env and add your Google Gemini API Key: GEMINI_API_KEY="YOUR_ACTUAL_API_KEY"
  - Alternatively, you can provide the key via the --gemini-api-key command-line argument.
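Before the first run, a short script can sanity-check the prerequisites above (a sketch; it only checks that tree is on PATH and that an API key is discoverable):

# Sketch: verify setup prerequisites before starting the server.
import os
import shutil

assert shutil.which("tree"), "tree not found on PATH (see the Setup steps)"
assert os.environ.get("GEMINI_API_KEY") or os.path.exists(".env"), \
    "no GEMINI_API_KEY in the environment and no .env file found"
print("Setup looks OK.")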
Running as a Standalone Server (Recommended)
By default, the server runs in SSE mode, which allows you to:
- Start the server independently
- Connect from multiple clients
- Keep it running while restarting clients
Run the server:
python kontxt_server.py --repo-path /path/to/your/codebase
PS: you can run pwd from inside the repository to get its absolute path.
The server will start on http://127.0.0.1:8080/sse by default.
For additional options:
python kontxt_server.py --repo-path /path/to/your/codebase --host 0.0.0.0 --port 6900
Shutting Down the Server
The server can be stopped by pressing Ctrl+C in the terminal where it's running. The server will attempt to close gracefully with a 3-second timeout.
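The pattern behind such a bounded shutdown is to cancel in-flight work on Ctrl+C and give cleanup a fixed deadline; a minimal asyncio sketch of the idea (not the server's actual code):

# Sketch: stop on Ctrl+C, then allow at most 3 seconds for cleanup.
import asyncio

async def serve():
    await asyncio.Event().wait()  # stand-in for the real server loop

async def cleanup():
    await asyncio.sleep(0.5)  # stand-in for closing connections

async def main():
    try:
        await serve()
    except asyncio.CancelledError:
        try:
            await asyncio.wait_for(cleanup(), timeout=3)  # the 3-second window
        except asyncio.TimeoutError:
            pass
        raise

try:
    asyncio.run(main())
except KeyboardInterrupt:
    print("Server stopped.")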
Connecting to the Server from a Client (Cursor example)
Once your server is running, you can connect Cursor to it by editing your ~/.cursor/mcp.json file:
{
"mcpServers": {
"kontxt-server": {
"serverType": "sse",
"url": "http://localhost:8080/sse"
}
}
}
PS: remember to refresh the MCP server in Cursor Settings (or your client's equivalent) after editing the config so the client reconnects via SSE.
Alternative: Running with stdio Transport
If you prefer to have the client start and manage the server process:
python kontxt_server.py --repo-path /path/to/your/codebase --transport stdio
For this mode, configure your ~/.cursor/mcp.json file like this:
{
"mcpServers": {
"kontxt-server": {
"serverType": "stdio",
"command": "python",
"args": ["/absolute/path/to/kontxt_server.py", "--repo-path", "/absolute/path/to/your/codebase", "--transport", "stdio"],
"env": {
"GEMINI_API_KEY": "your-api-key-here"
}
}
}
}
Command Line Arguments
- --repo-path PATH: Required. Absolute path to the local code repository to analyze.
- --gemini-api-key KEY: Google Gemini API Key (overrides .env if provided).
- --token-threshold NUM: Target maximum token count for the context. Allowed values are:
  - 500000
  - 800000 (default)
  - 1000000
- --gemini-model NAME: Specific Gemini model to use (default: models/gemini-2.5-flash-preview-04-17).
- --tokenizer-model NAME: Hugging Face tokenizer id for token estimation (default: google/gemma-7b; override via KONTXT_TOKENIZER_MODEL).
- --transport {stdio,sse}: Transport protocol to use (default: sse).
- --host HOST: Host address for the SSE server (default: 127.0.0.1).
- --port PORT: Port for the SSE server (default: 8080).
- --cors-origins ORIGINS: Comma-separated list of allowed CORS origins. If omitted, defaults to loopback only.
- --cors-credentials: Allow credentials for CORS (disabled by default).
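For reference, the documented flags map onto a standard argparse declaration roughly like this (an illustrative sketch; the server's actual parser may differ):

# Sketch: CLI surface matching the arguments listed above.
import argparse

parser = argparse.ArgumentParser(prog="kontxt_server.py")
parser.add_argument("--repo-path", required=True)
parser.add_argument("--gemini-api-key")
parser.add_argument("--token-threshold", type=int, default=800_000,
                    choices=[500_000, 800_000, 1_000_000])
parser.add_argument("--gemini-model",
                    default="models/gemini-2.5-flash-preview-04-17")
parser.add_argument("--tokenizer-model", default="google/gemma-7b")
parser.add_argument("--transport", choices=["stdio", "sse"], default="sse")
parser.add_argument("--host", default="127.0.0.1")
parser.add_argument("--port", type=int, default=8080)
parser.add_argument("--cors-origins", help="comma-separated origins")
parser.add_argument("--cors-credentials", action="store_true")
args = parser.parse_args()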
CORS Configuration
For security, wildcard CORS is not used. By default, only loopback origins are allowed:
http://127.0.0.1, http://localhost, and the bound host:port.
To allow specific web clients during development, pass explicit origins or use an env var:
python kontxt_server.py \
--repo-path /path/to/your/codebase \
--cors-origins http://localhost:3000,http://127.0.0.1:5173
# or via environment variable
KONTXT_CORS_ORIGINS="http://localhost:3000,http://127.0.0.1:5173" \
python kontxt_server.py --repo-path /path/to/your/codebase
Notes:
- Allowed methods: GET, OPTIONS. Headers: all. Credentials: off unless --cors-credentials is set.
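In code, a loopback-only policy like this looks roughly as follows; the sketch assumes a Starlette app underneath (common for SSE-based MCP servers, but an assumption here):

# Sketch: loopback-only CORS defaults, no wildcard.
from starlette.applications import Starlette
from starlette.middleware.cors import CORSMiddleware

host, port = "127.0.0.1", 8080
default_origins = ["http://127.0.0.1", "http://localhost",
                   f"http://{host}:{port}"]
app = Starlette()
app.add_middleware(
    CORSMiddleware,
    allow_origins=default_origins,  # never "*"
    allow_methods=["GET", "OPTIONS"],
    allow_headers=["*"],
    allow_credentials=False,        # opt in via --cors-credentials
)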
Tokenizer (Gemma) Access & Auto-Recovery
This server uses the google/gemma-7b tokenizer to estimate tokens. The model is gated by Google on Hugging Face.
What happens if you don't have access yet:
- On startup, if the tokenizer cannot be downloaded, the server logs a clear message and auto-opens: https://huggingface.co/google/gemma-7b
- The server keeps running using a heuristic token estimator (does not crash).
- It periodically retries loading the tokenizer; once you gain access, it switches automatically (no restart needed).
How to gain access (free, ~2 minutes):
- Visit https://huggingface.co/google/gemma-7b and log in (create an account if needed).
- Accept Google’s terms on the model page.
- If running headless, in CI, or in a container, authenticate the environment: huggingface-cli login (or set HF_TOKEN).
Configuration:
- --tokenizer-model or KONTXT_TOKENIZER_MODEL: use a different HF tokenizer id if desired.
- KONTXT_TOKENIZER_RELOAD_INTERVAL (seconds, default 60): how often the server re-attempts tokenizer loading.
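The load-or-fall-back behavior can be pictured like this (a sketch, not the server's code; the 4-characters-per-token heuristic is an assumption):

# Sketch: gated tokenizer with a heuristic fallback.
from transformers import AutoTokenizer

def load_tokenizer(model_id="google/gemma-7b"):
    try:
        return AutoTokenizer.from_pretrained(model_id)
    except Exception:  # gated model, not logged in, no network, ...
        return None

def count_tokens(text, tokenizer):
    if tokenizer is not None:
        return len(tokenizer.encode(text))
    return max(1, len(text) // 4)  # rough estimate until access is granted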
Basic Usage
Example queries:
- "What's this codebase about"
- "How does the authentication system work?"
- "Explain the data flow in the application"
PS: you can explicitly instruct the agent to use the MCP tool if it isn't doing so on its own: "What is the last word of the third codeblock of the auth file? Use the MCP tool available."
Context Attachment
Files or other context referenced in your queries are passed along for analysis:
- "Explain how this file works: @kontxt_server.py"
- "Find all files that interact with @user_model.py"
- "Compare the implementation of @file1.js and @file2.js"
The server will mention these files to Gemini but will NOT automatically read or include their contents. Instead, Gemini will decide which files to read using its tools based on the query context.
This approach allows Gemini to only read files that are actually needed and prevents the context from being bloated with irrelevant file content.
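One simple way to surface the @-mentions without reading the files is a regex pass over the query (a sketch; the real parser may differ):

# Sketch: collect @-mentioned file names so they can be listed for
# Gemini, leaving the actual reading to its internal tools.
import re

def extract_mentions(query: str) -> list[str]:
    return re.findall(r"@([\w./-]+)", query)

print(extract_mentions("Compare @file1.js and @file2.js"))
# ['file1.js', 'file2.js']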
Token Usage Tracking
The server tracks token usage across different operations:
- Repository structure listing
- File reading
- Grep searches
- Attached files from user queries
- Generated responses
This information is logged during operation, helping you monitor API usage and optimize your queries.
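Conceptually the tracker is a per-operation counter (a sketch of the idea, not the server's implementation):

# Sketch: per-operation token accounting.
from collections import Counter

class TokenUsage:
    def __init__(self):
        self.by_operation = Counter()

    def add(self, operation: str, tokens: int) -> None:
        self.by_operation[operation] += tokens

    def report(self) -> str:
        total = sum(self.by_operation.values())
        rows = [f"{op}: {n}" for op, n in self.by_operation.most_common()]
        return "\n".join(rows + [f"total: {total}"])

usage = TokenUsage()
usage.add("file_reading", 12_345)
usage.add("grep", 678)
print(usage.report())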
PS: want the tool to improve? PRs are welcome.
