Memvid
Encodes text data into videos that can be quickly looked up with semantic search.
memvid-mcp-server
A Streamable-HTTP MCP Server that uses memvid to encode text data into videos that can be quickly looked up with semantic search.
Supported Actions:
- add_chunks: adds chunks to the memory video. Note: each call rebuilds memory.mp4 from scratch; it is unclear whether chunks can be added incrementally.
- search: queries for the top-matching chunks. Returns 5 by default, but this can be changed with the top_k parameter.
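As a sketch, the two actions can be invoked with standard MCP tools/call JSON-RPC requests. The argument names below (chunks, query, top_k) are assumptions based on the tool descriptions above, not a verified schema:

```python
import json

# Hypothetical tools/call payloads for the two supported actions.
# Argument names (chunks, query, top_k) are assumed from the
# descriptions above, not taken from the server's actual schema.
add_chunks_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_chunks",
        # Note: this rebuilds memory.mp4 from scratch each call.
        "arguments": {"chunks": ["MCP is an open protocol.",
                                 "Memvid stores text in video."]},
    },
}

search_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {"query": "what is MCP?", "top_k": 3},  # default top_k is 5
    },
}

# The request body a Streamable-HTTP client would POST to the server.
body = json.dumps(search_request)
print(body)
```

A Streamable-HTTP client POSTs each payload as a JSON body to the server's endpoint and reads the JSON-RPC response.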
Running
Set up your environment:
python3.11 -m venv my_env
. ./my_env/bin/activate
pip install -r requirements.txt
Run the server:
python server.py
With a custom port:
PORT=3002 python server.py
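A minimal sketch of how a server can honor the PORT override shown above. The resolve_port helper is hypothetical (not taken from server.py); the default of 3000 is assumed from the example client config below:

```python
import os

def resolve_port(default: int = 3000) -> int:
    """Read the serving port from the PORT environment variable,
    falling back to the default when unset or invalid."""
    raw = os.environ.get("PORT", "")
    try:
        return int(raw)
    except ValueError:
        return default

os.environ["PORT"] = "3002"
print(resolve_port())  # prints 3002 when PORT=3002 is exported
```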
Connect a Client
Once the server is running, you can connect a client to it; configure it according to your client's documentation. The included mcp-config.json shows an example configuration:
{
  "mcpServers": {
    "memvid": {
      "type": "streamable-http",
      "url": "http://localhost:3000"
    }
  }
}
Acknowledgements
- Obviously, the modelcontextprotocol and Anthropic teams for the MCP Specification. https://modelcontextprotocol.io/introduction
- HeyFerrante for enabling and sponsoring this project.
Related Servers
APLCart MCP Server
An MCP server providing semantic search capabilities for APLCart data.
Brave-Gemini Research MCP Server
Perform web searches with the Brave Search API and analyze research papers using Google's Gemini model.
Fish MCP Server
Search for fish species using the FishBase database. Supports natural language queries in both Japanese and English.
Releasebot
Releasebot finds and watches release note sources from hundreds of products and companies.
Cezzis Cocktails
Search for cocktail recipes using the cezzis.com API.
Local Research MCP Server
A private, local research assistant that searches the web and scrapes content using DuckDuckGo.
RSS3
Integrates the RSS3 API to query the Open Web.
SearXNG
A privacy-respecting metasearch engine powered by a self-hosted SearXNG instance.
Local Flow
A minimal, local, GPU-accelerated RAG server for document ingestion and querying.
Perplexity
Web search using the Perplexity API with automatic model selection based on query intent.