MCP server for NCBI BLAST sequence similarity search.
Enable AI assistants to perform BLAST searches through natural language: search nucleotide and protein databases, create custom databases, and get formatted results instantly.
# Install BLAST+
conda install -c bioconda blast
# Or via package manager
# macOS: brew install blast
# Ubuntu: sudo apt-get install ncbi-blast+
# Install MCP server
git clone https://github.com/bio-mcp/bio-mcp-blast.git
cd bio-mcp-blast
pip install -e .
# Start the server
python -m src.server
# Or with queue support
python -m src.main --mode queue
Add to your MCP client config:
{
  "mcpServers": {
    "bio-blast": {
      "command": "python",
      "args": ["-m", "src.server"],
      "cwd": "/path/to/bio-mcp-blast"
    }
  }
}
User: "BLAST this sequence against nr: ATGCGATCGATCG"
AI: [calls blastn] โ Returns top hits with E-values and alignments
User: "Search proteins.fasta against SwissProt database"
AI: [calls blastp] โ Processes file and returns similarity results
User: "Create a BLAST database from reference_genomes.fasta"
AI: [calls makeblastdb] โ Creates searchable database files
User: "BLAST large_dataset.fasta against nt database"
AI: [calls blastn_async] โ "Job submitted! ID: abc123, checking progress..."
blastn - Nucleotide-nucleotide BLAST search
Parameters:
- query (required) - Path to FASTA file or sequence string
- database (required) - Database name (e.g., "nt", "nr") or path
- evalue - E-value threshold (default: 10)
- max_hits - Maximum hits to return (default: 50)
- output_format - Output format: "tabular", "xml", "json", "pairwise"

blastp - Protein-protein BLAST search
Parameters: same as blastn, applied to protein queries and databases

makeblastdb - Create BLAST database from FASTA file
Parameters:
- input_file (required) - Path to FASTA file
- database_name (required) - Name for output database
- dbtype (required) - "nucl" or "prot"
- title - Database title (optional)

Queue tools:
- blastn_async - Submit nucleotide search to queue
- blastp_async - Submit protein search to queue
- get_job_status - Check job progress
- get_job_result - Retrieve completed results
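For reference, a blastn call with these parameters maps onto a standard BLAST+ command line. The sketch below is illustrative only (it shells out to blastn directly rather than going through the MCP server) and assumes BLAST+ is on your PATH; the run_blastn helper name is made up for this example.

# Illustrative sketch: how the documented blastn parameters translate to BLAST+ flags.
# Accepts either a FASTA path or a raw sequence string, as the query parameter allows.
import os
import subprocess
import tempfile

def run_blastn(query, database, evalue=10, max_hits=50):
    query_path = query
    if not os.path.exists(query):                    # raw sequence string, not a file
        tmp = tempfile.NamedTemporaryFile("w", suffix=".fasta", delete=False)
        tmp.write(f">query\n{query}\n")
        tmp.close()
        query_path = tmp.name
    cmd = [
        "blastn",
        "-query", query_path,                # FASTA file
        "-db", database,                     # e.g. "nt" or a custom database path
        "-evalue", str(evalue),              # E-value threshold
        "-max_target_seqs", str(max_hits),   # maximum hits
        "-outfmt", "6",                      # tabular output
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout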
# Basic settings
export BIO_MCP_MAX_FILE_SIZE=100000000   # 100MB max file size
export BIO_MCP_TIMEOUT=300               # 5 minute timeout
export BIO_MCP_BLAST_PATH="blastn"       # BLAST executable path
# Queue mode settings
export BIO_MCP_QUEUE_URL="http://localhost:8000"
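The server reads these variables at startup; a minimal sketch of that pattern (variable names as above, but the parsing and defaults here are assumptions, not the server's actual code):

# Sketch of reading the configuration above; defaults are assumptions.
import os

MAX_FILE_SIZE = int(os.environ.get("BIO_MCP_MAX_FILE_SIZE", 100_000_000))  # bytes
TIMEOUT = int(os.environ.get("BIO_MCP_TIMEOUT", 300))                      # seconds
BLAST_PATH = os.environ.get("BIO_MCP_BLAST_PATH", "blastn")                # executable
QUEUE_URL = os.environ.get("BIO_MCP_QUEUE_URL")                            # unset = local mode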
# Download common databases
mkdir -p ~/blast-databases
cd ~/blast-databases
# NCBI databases (large downloads!)
update_blastdb.pl --decompress nt
update_blastdb.pl --decompress nr
update_blastdb.pl --decompress swissprot
# Set environment variable
export BLASTDB=~/blast-databases
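To confirm the downloads are visible to BLAST, you can list the databases under $BLASTDB with blastdbcmd (bundled with BLAST+); a small sketch, assuming the downloads above completed:

# List BLAST databases found under $BLASTDB by wrapping `blastdbcmd -list`.
import os
import subprocess

db_dir = os.environ.get("BLASTDB", os.path.expanduser("~/blast-databases"))
out = subprocess.run(["blastdbcmd", "-list", db_dir],
                     capture_output=True, text=True, check=True)
print(out.stdout)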
# Build image
docker build -t bio-mcp-blast .
# Run container
docker run -p 5000:5000 \
-v ~/blast-databases:/data/blast-db:ro \
-e BLASTDB=/data/blast-db \
bio-mcp-blast
services:
  blast-server:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - ./databases:/data/blast-db:ro
    environment:
      - BLASTDB=/data/blast-db
      - BIO_MCP_TIMEOUT=600
For long-running BLAST searches, use the queue system:
# Start queue infrastructure
cd ../bio-mcp-queue
./setup-local.sh
# Start BLAST server with queue support
python -m src.main --mode queue --queue-url http://localhost:8000
# Submit async job
job_info = await blast_server.submit_job(
    job_type="blastn",
    parameters={
        "query": "large_sequences.fasta",
        "database": "nt",
        "evalue": 0.001,
    },
)
# Check status
status = await blast_server.get_job_status(job_info["job_id"])
# Get results when complete
results = await blast_server.get_job_result(job_info["job_id"])
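In practice a client polls until the job finishes before fetching results; a sketch building on the calls above (the exact shape and status strings returned by get_job_status are assumptions):

# Poll the queue until the job finishes, then fetch its results.
import asyncio

async def wait_for_result(blast_server, job_id, poll_seconds=10):
    while True:
        status = await blast_server.get_job_status(job_id)
        state = status.get("status") if isinstance(status, dict) else status
        if state == "completed":                    # assumed terminal states
            return await blast_server.get_job_result(job_id)
        if state == "failed":
            raise RuntimeError(f"BLAST job {job_id} failed: {status}")
        await asyncio.sleep(poll_seconds)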
Tabular output:
# Fields: query_id, subject_id, percent_identity, alignment_length, ...
Query_1   gi|123456   98.5   500   7   0   1   500   1000   1499   1e-180   633
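Tabular results are tab-separated with the standard twelve BLAST columns, so they parse with a few lines of Python; a sketch using field names that mirror the list above:

# Parse BLAST tabular output (one hit per line, 12 tab-separated columns).
FIELDS = ["query_id", "subject_id", "percent_identity", "alignment_length",
          "mismatches", "gap_opens", "q_start", "q_end", "s_start", "s_end",
          "evalue", "bit_score"]

def parse_tabular(text):
    for line in text.splitlines():
        if line and not line.startswith("#"):       # skip comment/header lines
            yield dict(zip(FIELDS, line.split("\t")))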
JSON output:
{
  "BlastOutput2": [{
    "report": {
      "results": {
        "search": {
          "query_title": "Query_1",
          "hits": [...]
        }
      }
    }
  }]
}
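Pulling hits out of the JSON report is plain dictionary access; a sketch against the structure shown above:

# Extract (query_title, hit) pairs from BLAST JSON output.
import json

def iter_hits(json_text):
    for result in json.loads(json_text)["BlastOutput2"]:
        search = result["report"]["results"]["search"]
        for hit in search.get("hits", []):
            yield search["query_title"], hit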
XML output: standard BLAST XML format for programmatic parsing.
# Run tests
pytest tests/ -v
# Test with real data
python tests/test_integration.py
# Performance testing
python tests/benchmark.py
# Use more CPU threads for faster searches
export BLAST_NUM_THREADS=8
BLAST not found
# Check installation
which blastn
blastn -version
# Install via conda
conda install -c bioconda blast
Database not found
# Check BLASTDB environment variable
echo $BLASTDB
# List available databases
blastdbcmd -list /path/to/databases
Out of memory
# Reduce max_target_seqs
blastn -max_target_seqs 100
# Use streaming for large outputs
# Increase system swap space
Timeout errors
# Increase timeout
export BIO_MCP_TIMEOUT=3600 # 1 hour
# Or use queue mode for long searches
python -m src.main --mode queue
See CONTRIBUTING.md for detailed guidelines.
MIT License - see LICENSE file.
Happy BLASTing! 🧬