# 🔍 NFT Log Analyzer

AI-powered log analysis MCP server. Scans 500MB+ log files locally, analyzes errors with Ollama + CrewAI agents, and automatically files structured GitHub Issues — 100% local via Ollama, zero data leaves your machine.
## What It Does

Point it at any log file and it will:
- Scan 500MB+ files in seconds using ripgrep
- Parse error patterns, deduplicate repeated events
- Analyze with a local LLM (Ollama + deepseek-r1:14b) via CrewAI agents
- Compose structured GitHub Issues with root cause and suggested fixes
- File Issues automatically to your repo — skipping duplicates
All processing happens locally on your machine. Raw log content never leaves your system.
## Architecture

```
Claude Desktop / Cursor / LangChain
        ↓ MCP (stdio or HTTP+SSE)
MCP Log Analyzer Server
        ↓
ripgrep pre-filter (2-4s on 500MB)
        ↓
mmap streaming parser + deduplicator
        ↓
CrewAI agents → Ollama (local LLM)
        ↓
GitHub Issues API
```
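The pre-filter stage is essentially a severity regex applied before any parsing. A self-contained demo of the idea (shown with `grep -E` for portability; the project uses `rg`, which is far faster on 500MB files — the sample file and paths here are illustrative):

```shell
# Demo of the severity pre-filter on a tiny sample log.
# PATTERN mirrors the filter listed under Pipeline Internals.
PATTERN='ERROR|FATAL|CRITICAL|WARN|Exception|Traceback'

printf 'INFO boot ok\nERROR DB connection pool exhausted\nWARN slow query\n' > /tmp/sample.log
# On real files, substitute: rg -e "$PATTERN" /var/log/app.log
grep -E "$PATTERN" /tmp/sample.log > /tmp/filtered.log
wc -l < /tmp/filtered.log   # number of candidate lines that survived
```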
## Requirements

| Requirement | Version | Notes |
|---|---|---|
| Python | 3.11+ | 3.13/3.14 not supported |
| Ollama | Latest | `brew install ollama` |
| deepseek-r1:14b | — | ~9GB download |
| ripgrep | Latest | `brew install ripgrep` |
| RAM | 16GB min | 32GB recommended |
| macOS | Ventura 13+ | Apple Silicon recommended |
## Quick Start

### 1. Install system dependencies

```shell
brew install ollama ripgrep
brew services start ollama
ollama pull deepseek-r1:14b   # ~9GB — start this first
```
### 2. Clone and set up the Python environment

```shell
git clone https://github.com/YOUR_ORG/mcp-log-analyzer
cd mcp-log-analyzer
/opt/homebrew/bin/python3.11 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install mcp "crewai>=0.80.0" crewai-tools langchain-ollama \
    litellm fastapi uvicorn httpx httpx-sse \
    structlog loguru pydantic python-dotenv \
    tenacity rich typer
```
### 3. Configure the environment

```shell
cp .env.example .env
nano .env   # fill in your values
```

`.env`:

```
GITHUB_PAT=ghp_your_token_here
GITHUB_REPO_OWNER=your-username
GITHUB_REPO_NAME=your-repo
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=deepseek-r1:14b
CREWAI_TELEMETRY_OPT_OUT=true
OTEL_SDK_DISABLED=true
OLLAMA_KEEP_ALIVE=-1
```
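Before moving on, it can be worth sanity-checking that every required setting is present. A minimal standalone check (a hypothetical helper, not part of the project; the key list comes from the `.env` example above):

```python
import os

# Settings the server needs, per the .env example above.
REQUIRED = [
    "GITHUB_PAT", "GITHUB_REPO_OWNER", "GITHUB_REPO_NAME",
    "OLLAMA_BASE_URL", "OLLAMA_MODEL",
]

def missing_settings(env=None):
    """Return the required keys that are unset or empty in the given mapping."""
    env = os.environ if env is None else env
    return [k for k in REQUIRED if not env.get(k)]

missing = missing_settings()
print("Environment OK" if not missing else f"Missing settings: {', '.join(missing)}")
```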
### 4. Create a GitHub PAT

Go to **github.com → Settings → Developer settings → Personal access tokens → Tokens (classic)** and enable the **repo** scope (full).
### 5. Register with Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "mcp-log-analyzer": {
      "command": "/path/to/mcp-log-analyzer/.venv/bin/python",
      "args": ["/path/to/mcp-log-analyzer/mcp_server/server.py"],
      "env": {
        "GITHUB_PAT": "ghp_your_token",
        "GITHUB_REPO_OWNER": "your-username",
        "GITHUB_REPO_NAME": "your-repo",
        "OLLAMA_BASE_URL": "http://localhost:11434",
        "OLLAMA_MODEL": "deepseek-r1:14b"
      }
    }
  }
}
```
Restart Claude Desktop. You should see the 🔨 tools icon appear.
## Usage

### Via Claude Desktop (natural language)

```
analyze the log file at /var/log/app.log and file GitHub issues for any errors
use analyze_log_file with path="/var/log/app.log" dry_run=true
check status of job abc12345
```
### Via the Python CLI

```shell
source .venv/bin/activate
python3 -c "
from dotenv import load_dotenv
load_dotenv()
from mcp_server.tools.analyze_tool import analyze_log_file
import asyncio
result = asyncio.run(analyze_log_file({
    'path': '/var/log/app.log',
    'severity': 'ERROR',
    'dry_run': False
}))
print(result[0].text)
"
```
## MCP Tools Reference

### ping

Health check — verifies the server and Ollama are running. Takes no arguments (`{}`).

Returns: `"mcp-log-analyzer online — Ollama: deepseek-r1:14b"`
### analyze_log_file

Starts an async log analysis. Returns a job ID immediately; the pipeline runs in the background.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `path` | string | ✅ | — | Absolute path to the log file |
| `severity` | string | — | `ERROR` | Minimum severity: `WARN`, `ERROR`, `CRITICAL` |
| `dry_run` | boolean | — | `false` | Preview issues without filing to GitHub |

Returns:

```json
{
  "job_id": "abc12345",
  "status": "started",
  "message": "Analysis started. Check progress with get_job_status('abc12345')."
}
```
### get_job_status

Checks the status of a running analysis job.

| Parameter | Type | Required | Description |
|---|---|---|---|
| `job_id` | string | ✅ | Job ID returned by `analyze_log_file` |

Returns (running):

```json
{
  "status": "running",
  "job_id": "abc12345",
  "lines_filtered": 487,
  "chunks": 1
}
```

Returns (done):

```json
{
  "status": "done",
  "job_id": "abc12345",
  "lines_filtered": 487,
  "unique_events": 4,
  "chunks": 1,
  "issues_filed": 2,
  "github_issues": [
    {
      "title": "[CRITICAL][minting-service] DB connection pool exhausted (x117)",
      "url": "https://github.com/your-org/your-repo/issues/42",
      "number": 42
    }
  ]
}
```
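Because `analyze_log_file` returns immediately, a client typically polls `get_job_status` until the job leaves the `running` state. A generic polling loop (the `get_status` callable is a stand-in for however your MCP client invokes the tool):

```python
import time

def wait_for_job(get_status, job_id, poll_interval=5.0, timeout=900.0):
    """Poll a status function until the job reports 'done' or 'error'.

    get_status(job_id) must return a dict shaped like the get_job_status
    responses shown above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status["status"] in ("done", "error"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} still not finished after {timeout}s")
```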
## Compatible MCP Clients

| Client | Transport | Config |
|---|---|---|
| Claude Desktop | stdio | `claude_desktop_config.json` |
| Claude Code CLI | stdio | `.mcp.json` in project root |
| Cursor | stdio or HTTP+SSE | `.cursor/mcp.json` |
| LangChain | HTTP+SSE | `url: http://localhost:8000/sse` |
| n8n | HTTP+SSE | HTTP Request node → SSE |
## HTTP+SSE Transport (for Cursor, LangChain, n8n)

```shell
python mcp_server/server.py --transport sse --port 8000
```
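With the server running in SSE mode, an HTTP-capable client points at the `/sse` endpoint. For Cursor, a `.cursor/mcp.json` along these lines should work — the exact schema is an assumption; check Cursor's MCP documentation for your version:

```json
{
  "mcpServers": {
    "mcp-log-analyzer": {
      "url": "http://localhost:8000/sse"
    }
  }
}
```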
## Customizing with Skills

Skills are plain-English `.md` files that teach the agents your stack's error patterns. Three built-in skills ship with the project:

| Skill | Purpose |
|---|---|
| `skills/nft-app-errors.skill.md` | NFT/blockchain error classification |
| `skills/infrastructure-errors.skill.md` | Infrastructure error classification |
| `skills/bug-composition.skill.md` | GitHub Issue format rules |
### Writing your own skill

Create `skills/my-stack-errors.skill.md`:

```markdown
# My Stack Error Classification

## CRITICAL — file bug immediately
- "FATAL: database connection refused" = service down
- "out of memory" = process crash imminent

## HIGH — file bug, non-urgent
- "connection timeout" on external API = degraded performance

## IGNORE — known false positives
- "reconnecting..." during deploys = expected
```

Then load it in `agents/crew.py`:

```python
_load_skill("my-stack-errors.skill.md")
```
## Pipeline Internals

```
500MB log file
  ↓ ripgrep (2-4 seconds)
  ↓ filters: ERROR|FATAL|CRITICAL|WARN|Exception|Traceback
~5MB of error lines
  ↓ mmap streaming parser
  ↓ LogEvent objects with timestamp, level, component, message
  ↓ deduplicator (fingerprints strip req_id, numbers, hex)
4-20 unique error patterns
  ↓ chunker (10 events per chunk, CRITICAL first)
1-3 chunks
  ↓ single CrewAI agent → Ollama (local)
  ↓ structured bug reports in markdown
  ↓ title extractor + label classifier
  ↓ duplicate check via the GitHub search API
GitHub Issues filed
```
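The deduplication step above works by fingerprinting: volatile tokens (request IDs, numbers, hex strings) are stripped so repeated occurrences of the same error collapse into one pattern with a count. A minimal sketch of the idea — the regexes are illustrative, not the project's actual rules:

```python
import re
from collections import Counter

# Order matters: IDs and hex are replaced before bare numbers,
# so "req_id=abc123" doesn't get partially rewritten.
_VOLATILE = [
    (re.compile(r"req_id=\S+"), "req_id=<ID>"),
    (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),
    (re.compile(r"\b[0-9a-f]{8,}\b"), "<HEX>"),
    (re.compile(r"\d+"), "<N>"),
]

def fingerprint(line: str) -> str:
    """Normalize a log line so duplicates differing only in IDs/numbers match."""
    for pattern, repl in _VOLATILE:
        line = pattern.sub(repl, line)
    return line.strip()

def deduplicate(lines):
    """Map each fingerprint to its occurrence count."""
    return Counter(fingerprint(line) for line in lines)
```

Two lines that differ only in a request ID and a timing number then share a fingerprint, which is what lets 117 repeats surface as a single `(x117)` issue title.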
## Performance

Tested on Apple Silicon (M2, 32GB):

| File size | Filter time | Analysis time | Total |
|---|---|---|---|
| 10MB | <1s | 3-5 min | ~5 min |
| 100MB | 1-2s | 3-5 min | ~7 min |
| 500MB | 3-5s | 5-10 min | ~15 min |

Analysis time depends on the number of unique error patterns found, not on file size.
## Troubleshooting

| Symptom | Fix |
|---|---|
| `ollama ps` shows empty | Run `ollama run deepseek-r1:14b` then `/bye` to warm the model |
| MCP server disconnected in Claude Desktop | Check `~/Library/Logs/Claude/mcp-server-*.log` for Python errors |
| Issues filed: 0 | Verify `GITHUB_PAT` in `claude_desktop_config.json` is a real token, not the placeholder |
| Timeout after 600s | Add `OLLAMA_KEEP_ALIVE=-1` to `.env` and restart Ollama |
| `crewai` install fails | Requires Python 3.11 — not compatible with 3.13/3.14 |
| Permission denied on `/usr/local/bin` | Use `/opt/homebrew/bin/` instead on Apple Silicon |
## Roadmap

### v1 (current)
- Local filesystem log ingestion
- ripgrep + mmap pipeline
- Single-agent CrewAI analysis
- GitHub Issues filing with dedup
- Claude Desktop + stdio MCP transport
### v2 (planned)
- Datadog MCP integration
- Splunk MCP integration
- HTTP+SSE transport (Cursor, LangChain, n8n)
- Scheduled analysis triggers
- Parallel chunk processing
- Web dashboard for job history
## Contributing

Contributions welcome — especially new skill files for different stacks.

- Fork the repo
- Create `skills/your-stack-errors.skill.md`
- Test it against a real log file
- Open a PR with example output
## License

MIT — see `LICENSE`