NFT Log Analyzer
AI-powered log analysis MCP server. Scans 500MB+ log files locally, analyzes errors with Ollama + CrewAI agents, and automatically files structured GitHub Issues. 100% local — no logs leave your machine.
What It Does
Point it at any log file and it will:
- Scan 500MB+ files in seconds using ripgrep
- Parse error patterns, deduplicate repeated events
- Analyze using a local LLM (Ollama + deepseek-r1:14b) via CrewAI agents
- Compose structured GitHub Issues with root cause and suggested fixes
- File Issues automatically to your repo — skipping duplicates
All processing happens locally on your machine. Raw log content never leaves your system.
Architecture
Claude Desktop / Cursor / LangChain
↓ MCP (stdio or HTTP+SSE)
MCP Log Analyzer Server
↓
ripgrep pre-filter (2-4s on 500MB)
↓
mmap streaming parser + deduplicator
↓
CrewAI agents → Ollama (local LLM)
↓
GitHub Issues API
Requirements
| Requirement | Version | Notes |
|---|---|---|
| Python | 3.11 or 3.12 | 3.13/3.14 not supported |
| Ollama | Latest | brew install ollama |
| deepseek-r1:14b | — | ~9GB download |
| ripgrep | Latest | brew install ripgrep |
| RAM | 16GB min | 32GB recommended |
| macOS | Ventura 13+ | Apple Silicon recommended |
Quick Start
1. Install system dependencies
brew install ollama ripgrep
brew services start ollama
ollama pull deepseek-r1:14b # ~9GB — start this first
2. Clone and set up Python environment
git clone https://github.com/YOUR_ORG/mcp-log-analyzer
cd mcp-log-analyzer
/opt/homebrew/bin/python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install mcp "crewai>=0.80.0" crewai-tools langchain-ollama \
litellm fastapi uvicorn httpx httpx-sse \
structlog loguru pydantic python-dotenv \
tenacity rich typer
3. Configure environment
cp .env.example .env
nano .env # fill in your values
GITHUB_PAT=ghp_your_token_here
GITHUB_REPO_OWNER=your-username
GITHUB_REPO_NAME=your-repo
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=deepseek-r1:14b
CREWAI_TELEMETRY_OPT_OUT=true
OTEL_SDK_DISABLED=true
OLLAMA_KEEP_ALIVE=-1
4. Create a GitHub PAT
Go to: github.com → Settings → Developer settings → Personal access tokens → Tokens (classic)
Enable scope: repo (full)
5. Register with Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"mcp-log-analyzer": {
"command": "/path/to/mcp-log-analyzer/.venv/bin/python",
"args": ["/path/to/mcp-log-analyzer/mcp_server/server.py"],
"env": {
"GITHUB_PAT": "ghp_your_token",
"GITHUB_REPO_OWNER": "your-username",
"GITHUB_REPO_NAME": "your-repo",
"OLLAMA_BASE_URL": "http://localhost:11434",
"OLLAMA_MODEL": "deepseek-r1:14b"
}
}
}
}
Restart Claude Desktop. You should see the 🔨 tools icon appear.
Usage
Via Claude Desktop (natural language)
analyze the log file at /var/log/app.log and file GitHub issues for any errors
use analyze_log_file with path="/var/log/app.log" dry_run=true
check status of job abc12345
Via Python CLI
source .venv/bin/activate
python3 -c "
from dotenv import load_dotenv
load_dotenv()
from mcp_server.tools.analyze_tool import analyze_log_file
import asyncio, json
result = asyncio.run(analyze_log_file({
'path': '/var/log/app.log',
'severity': 'ERROR',
'dry_run': False
}))
print(result[0].text)
"
MCP Tools Reference
ping
Health check — verifies the server and Ollama are running.
{}
Returns: "mcp-log-analyzer online — Ollama: deepseek-r1:14b"
analyze_log_file
Start async log analysis. Returns a job ID immediately — pipeline runs in background.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| path | string | ✅ | — | Absolute path to log file |
| severity | string | — | ERROR | Minimum severity: WARN, ERROR, CRITICAL |
| dry_run | boolean | — | false | Preview issues without filing to GitHub |
Returns:
{
"job_id": "abc12345",
"status": "started",
"message": "Analysis started. Check progress with get_job_status('abc12345')."
}
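The severity parameter acts as a minimum threshold: WARN keeps everything at WARN and above, CRITICAL keeps only the most severe events. A minimal sketch of that threshold logic (the ordering and function name are illustrative assumptions, not the server's actual code):

```python
# Hypothetical severity-threshold filter; illustrative only.
SEVERITY_ORDER = {"WARN": 0, "ERROR": 1, "CRITICAL": 2}

def passes_threshold(level: str, minimum: str = "ERROR") -> bool:
    """Return True if `level` is at or above the `minimum` severity."""
    return SEVERITY_ORDER.get(level, -1) >= SEVERITY_ORDER[minimum]

events = [("WARN", "slow query"), ("ERROR", "timeout"), ("CRITICAL", "pool exhausted")]
kept = [e for e in events if passes_threshold(e[0], "ERROR")]
# With minimum="ERROR", the WARN event is dropped and two events remain.
```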
get_job_status
Check the status of a running analysis job.
| Parameter | Type | Required | Description |
|---|---|---|---|
| job_id | string | ✅ | Job ID returned by analyze_log_file |
Returns (running):
{
"status": "running",
"job_id": "abc12345",
"lines_filtered": 487,
"chunks": 1
}
Returns (done):
{
"status": "done",
"job_id": "abc12345",
"lines_filtered": 487,
"unique_events": 4,
"chunks": 1,
"issues_filed": 2,
"github_issues": [
{
"title": "[CRITICAL][minting-service] DB connection pool exhausted (x117)",
"url": "https://github.com/your-org/your-repo/issues/42",
"number": 42
}
]
}
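Because analyze_log_file returns immediately, a client typically polls get_job_status until the job leaves a non-terminal state. A hedged sketch of the client-side check (status values are taken from the payloads above; the helper name is illustrative):

```python
def is_terminal(payload: dict) -> bool:
    """A job is finished once its status is no longer 'started' or 'running'.

    Illustrative helper, not part of the server's API.
    """
    return payload.get("status") not in ("started", "running")

# Using (truncated) payloads shaped like the examples above:
running = {"status": "running", "job_id": "abc12345", "lines_filtered": 487}
done = {"status": "done", "job_id": "abc12345", "issues_filed": 2}
states = [is_terminal(running), is_terminal(done)]
```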
Compatible MCP Clients
| Client | Transport | Config |
|---|---|---|
| Claude Desktop | stdio | claude_desktop_config.json |
| Claude Code CLI | stdio | .mcp.json in project root |
| Cursor | stdio or HTTP+SSE | .cursor/mcp.json |
| LangChain | HTTP+SSE | url: http://localhost:8000/sse |
| n8n | HTTP+SSE | HTTP Request node → SSE |
HTTP+SSE Transport (for Cursor, LangChain, n8n)
python mcp_server/server.py --transport sse --port 8000
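Once the server is listening on port 8000, point an SSE-capable client at the /sse endpoint. For Cursor, a plausible .cursor/mcp.json looks like the following (the exact key names follow Cursor's convention and may change between versions):

```json
{
  "mcpServers": {
    "mcp-log-analyzer": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```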
Customizing with Skills
Skills are plain English .md files that teach the agents your stack's error patterns. Three built-in skills ship with the project:
| Skill | Purpose |
|---|---|
| skills/nft-app-errors.skill.md | NFT/blockchain error classification |
| skills/infrastructure-errors.skill.md | Infrastructure error classification |
| skills/bug-composition.skill.md | GitHub Issue format rules |
Writing your own skill
Create skills/my-stack-errors.skill.md:
# My Stack Error Classification
## CRITICAL — file bug immediately
- "FATAL: database connection refused" = service down
- "out of memory" = process crash imminent
## HIGH — file bug, non-urgent
- "connection timeout" on external API = degraded performance
## IGNORE — known false positives
- "reconnecting..." during deploys = expected
Then load it in agents/crew.py:
_load_skill("my-stack-errors.skill.md")
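The loader itself is simple in spirit: read the skill's markdown and hand it to the agents as prompt context. A sketch of what such a loader might look like (this is an assumption about _load_skill's behavior, not the project's actual code, and the real implementation may cache or validate files):

```python
import tempfile
from pathlib import Path

def load_skill(name: str, base: Path = Path("skills")) -> str:
    """Hypothetical stand-in for agents/crew.py's _load_skill:
    read a skill file's markdown text so it can be appended to
    an agent's prompt."""
    return (base / name).read_text(encoding="utf-8")

# Demo against a temporary directory instead of the repo's skills/:
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "my-stack-errors.skill.md").write_text(
    "# My Stack Error Classification", encoding="utf-8"
)
text = load_skill("my-stack-errors.skill.md", base=demo_dir)
```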
Pipeline Internals
500MB log file
↓ ripgrep (2-4 seconds)
↓ Filters: ERROR|FATAL|CRITICAL|WARN|Exception|Traceback
~5MB of error lines
↓ mmap streaming parser
↓ LogEvent objects with timestamp, level, component, message
↓ Deduplicator (fingerprints strip req_id, numbers, hex)
4-20 unique error patterns
↓ Chunker (10 events per chunk, CRITICAL first)
1-3 chunks
↓ Single CrewAI agent → Ollama (local)
↓ Structured bug reports in markdown
↓ Title extractor + label classifier
↓ Duplicate check via GitHub search API
GitHub Issues filed
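The deduplication step is what keeps LLM work bounded: hundreds of repeated events collapse into a handful of patterns by fingerprinting each message with volatile tokens stripped. A hedged sketch of that normalization (the project's actual regexes may differ):

```python
import hashlib
import re

def fingerprint(message: str) -> str:
    """Collapse volatile tokens so repeated events hash identically.

    Illustrative version of the deduplicator: strips request IDs,
    hex strings, and numbers before hashing.
    """
    norm = re.sub(r"req_[A-Za-z0-9]+", "req_<ID>", message)
    norm = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", norm)
    norm = re.sub(r"\d+", "<N>", norm)
    return hashlib.sha256(norm.encode()).hexdigest()[:12]

a = fingerprint("DB pool exhausted req_ab12 after 117 retries")
b = fingerprint("DB pool exhausted req_zz99 after 3 retries")
# a == b: the two lines collapse into one unique event
```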
Performance
Tested on Apple Silicon (M2, 32GB):
| File size | Filter time | Analysis time | Total |
|---|---|---|---|
| 10MB | <1s | 3-5 min | ~5 min |
| 100MB | 1-2s | 3-5 min | ~7 min |
| 500MB | 3-5s | 5-10 min | ~15 min |
Analysis time depends on the number of unique error patterns found, not on file size.
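That is a consequence of the chunking stage described under Pipeline Internals (10 events per chunk, CRITICAL first): the LLM only ever sees deduplicated patterns. A sketch of that grouping, assuming a simple severity sort (names are illustrative, not the project's code):

```python
# Illustrative chunker: sort CRITICAL-first, then split into groups of 10.
SEVERITY_RANK = {"CRITICAL": 0, "ERROR": 1, "WARN": 2}

def chunk_events(events: list[dict], size: int = 10) -> list[list[dict]]:
    """Order events by severity and split them into `size`-event chunks."""
    ordered = sorted(events, key=lambda e: SEVERITY_RANK.get(e["level"], 99))
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

events = [{"level": "ERROR"}] * 12 + [{"level": "CRITICAL"}] * 2
chunks = chunk_events(events)
# 14 events split into 2 chunks; both CRITICAL events land in the first chunk
```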
Troubleshooting
| Symptom | Fix |
|---|---|
| ollama ps shows empty | Run ollama run deepseek-r1:14b then /bye to warm the model |
| MCP server disconnected in Claude Desktop | Check ~/Library/Logs/Claude/mcp-server-*.log for Python errors |
| Issues filed: 0 | Verify GITHUB_PAT in claude_desktop_config.json is a real token, not a placeholder |
| Timeout after 600s | Add OLLAMA_KEEP_ALIVE=-1 to .env and restart Ollama |
| crewai install fails | Requires Python 3.11 — not compatible with 3.13/3.14 |
| Permission denied on /usr/local/bin | Use /opt/homebrew/bin/ instead on Apple Silicon |
Roadmap
v1 (current)
- Local filesystem log ingestion
- ripgrep + mmap pipeline
- Single-agent CrewAI analysis
- GitHub Issues filing with dedup
- Claude Desktop + stdio MCP transport
v2 (planned)
- Datadog MCP integration
- Splunk MCP integration
- HTTP+SSE transport (Cursor, LangChain, n8n)
- Scheduled analysis triggers
- Parallel chunk processing
- Web dashboard for job history
Contributing
Contributions welcome — especially new skill files for different stacks.
- Fork the repo
- Create skills/your-stack-errors.skill.md
- Test it against a real log file
- Open a PR with example output
License
MIT — see LICENSE