openclaw-skill-vetter-mcp

MCP server for security-vetting third-party AI agent extensions before installation — Claude skills, ClawHub plugins, agent tool packs, any code-shaped artifact that runs in your agent environment with your API keys. 41 detection rules across prompt-injection patterns, hardcoded exfiltration channels (Discord/Slack/Telegram webhooks, SSH-key reads, AWS-creds reads), dangerous dynamic execution (eval, exec, subprocess shell=True, pickle.loads), manifest/permission drift, and known typosquat dependencies. Outputs a 0-100 risk score + BLOCK/REVIEW/CAUTION/CLEAN bucket + per-finding evidence. Native ClawHub manifest support; the rule engine generalizes to any code-shaped extension via Custom MCP Build adapters. Keywords: AI agent security, plugin vetting, supply-chain security, prompt injection detection, MCP static analysis.

Status: v1.1.2 · Tests: 145 passing · License: MIT · MCP · PyPI


What it does

Third-party AI agent extensions — Claude skills, ClawHub plugins, MCP servers themselves, agent tool packs, npm-distributed agent code — are code that runs inside your environment with your API keys, your filesystem access, your network egress. The supply-chain attack surface is now broadly recognized + actively exploited.

The same shape of attack works against any third-party extension a user installs into their AI agent runtime — Claude skills, MCP servers, browser-extension agents, npm-distributed agent code. The defensive question every operator faces before clicking install: "is this safe to run with my API keys?"

This MCP server runs a battery of static-analysis scanners against any skill's directory and produces a single VetReport that an operator can act on:

> claude: vet the data-extractor skill before I install it.
[MCP tool: vet_skill]

Skill 'data-extractor': BLOCK — do not install.
Risk score: 100/100. Findings: 1 critical, 4 high, 1 info.

Critical:
  EXFIL.WEBHOOK_DISCORD (extract.py:5) —
    Hardcoded Discord webhook URL: 'https://discord.com/api/webhooks/...'
    Recommendation: Refuse install unless explicitly justified.

High:
  AST.OS_SYSTEM (extract.py:14) — os.system('curl ... | bash')
  EXFIL.ENV_DUMP (extract.py:9) — dumps full os.environ
  MANIFEST.WILDCARD_PERMISSION — `network.http: *`
  ...

Vet result for data-extractor: REFUSE INSTALL.
> claude: any flagged skills currently installed?
[MCP tool: flagged_skills_report]

2 skills flagged at REVIEW or BLOCK:
  - data-extractor       BLOCK   risk_score=100   1 CRITICAL EXFIL.WEBHOOK_DISCORD
  - markdown-formatter   REVIEW  risk_score=35    1 HIGH AST.EVAL_CALL on user input

Why openclaw-skill-vetter-mcp

Three things existing tools (manual code review, generic SAST, ClawHub trust scores) don't do:

  1. Skill-aware scanning. Generic SAST tools don't know what an OpenClaw skill manifest looks like. They miss the most common malware shape: a "calculator" skill that requests network.http: *. The vetter cross-checks declared purpose against requested permissions.

  2. Risk score the operator can paste into a ticket. Not "high cyclomatic complexity" — BLOCK — Discord webhook at extract.py:5. Each finding has rule_id, file:line, evidence, and a specific recommendation (see the sketch after this list).

  3. Built for review-before-install, not after-the-fact audit. Run it from inside Claude on a skill you're about to add. Get a verdict in seconds. Refuse the install if it's BLOCK; sandbox-test if REVIEW; install if CLEAN.
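
For reference, the finding shape above can be pictured as a small record. A minimal sketch, assuming a dataclass representation; the package's actual class may differ:

# Hedged sketch of the Finding record described above; field names follow
# the prose (rule_id, file:line, evidence, recommendation).
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str          # e.g. "EXFIL.WEBHOOK_DISCORD"
    severity: str         # CRITICAL / HIGH / MEDIUM / LOW / INFO
    file: str             # e.g. "extract.py"
    line: int | None      # None for manifest-level findings
    evidence: str         # the matched snippet
    recommendation: str   # operator-facing action, e.g. "Refuse install"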

Built for the production-AI operator who has been bitten (or doesn't want to be) by ClawHavoc-style supply-chain attacks.

How this fits in the OpenClaw security ecosystem

The OpenClaw security crisis has spawned a multi-vendor tooling landscape. This server's place in it:

Layer | Vendor / project | Posture
Enterprise SaaS / SOC | Cisco DefenseClaw, ClawSecure Watchtower, Zscaler ThreatLabz, NemoClaw | Server-side, paid, integration-heavy, SIEM-aimed. Best fit for organizations with existing security teams + SOC infrastructure.
Best-practices guidance | Microsoft Security Blog, CrowdStrike, Conscia | Educational. No tooling.
Open-source / community | SecureClaw, openclaw-security-monitor, openclaw-dashboard, slowmist's hardening guide | Self-hosted runtime + dashboard tooling. Generally separate process / web UI.
MCP-native (this layer) | openclaw-skill-vetter-mcp (this server) + openclaw-output-vetter-mcp (claim verification) + openclaw-upgrade-orchestrator-mcp (regression catalog + provider-fingerprint) | Inline in the agent's own conversation — Claude Desktop / Cursor / Cline calls these tools directly during a turn. Sub-second, free, MIT, local, read-only. The operator-tooling layer one step closer to the agent than enterprise SIEM covers.

This server isn't a replacement for the SaaS layer — large organizations should pair both. It's a replacement for manual code review of every ClawHub skill before install, with a verdict an operator can paste into a ticket in seconds.


Tool surface

Tool | What it returns
vet_skill | Full VetReport for one skill: risk_score, risk_level, sorted findings, summary
vet_skill_directory | Aggregate report across every skill in the directory + per-bucket counts
installed_skills_overview | Lightweight: just bucket counts + flagged skill IDs
flagged_skills_report | Just REVIEW + BLOCK skills with their findings
scan_for_prompt_injection | Focused: only prompt-injection findings on one skill
scan_for_exfiltration | Focused: only exfiltration findings on one skill
list_detection_rules | Catalog of every rule the server applies (transparency)
vet_agent_config (v1.1+) | NEW — scan a project directory for adversarial agent-config files (AGENTS.md, .gemini/config, .cursor/rules.md / .cursorrules, .claude/CLAUDE.md, auto-firing git hooks). Returns the same Finding shape; covers prompt-injection, exfiltration channels, embedded shell commands, secret-file references, and git-hook-install patterns. Targets the Cursor CVE-2026-26268 / Gemini-CLI-yolo trust-boundary failure mode.
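
Outside of Claude Desktop, these tools can also be driven programmatically. A minimal sketch using the official MCP Python SDK (pip install mcp); the tool's argument name ("skill_id") is an assumption, not confirmed API:

# Hedged sketch: calling vet_skill over stdio with the MCP Python SDK.
# The argument name "skill_id" is assumed; check the server's tool schema
# for the real parameter names.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(
        command="python", args=["-m", "openclaw_skill_vetter_mcp"]
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("vet_skill", {"skill_id": "data-extractor"})
            print(result.content)

asyncio.run(main())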

Resources:

  • skill-vetter://overview — installed-skills risk overview
  • skill-vetter://flagged — currently-flagged skills
  • skill-vetter://rules — detection rules catalog

Prompts:

  • pre-install-skill-check — vet a specific skill before installation
  • weekly-skill-audit — compose a 200-word weekly audit of all installed skills
  • agent-config-audit (v1.1+) — vet a project directory's agent-config files for adversarial content

Quickstart

Install

pip install openclaw-skill-vetter-mcp

Quick verify (~30 seconds, no config)

After install, run the bundled demo to see the vetter catch real malicious skill patterns:

openclaw-skill-vetter-mcp-demo

You'll see 6 hand-crafted skills vetted: typically 2 BLOCK (a data-extractor with hardcoded Discord webhook + os.system at risk_score 100/100; a requestz typosquat of requests at 55/100) + 2 REVIEW (eval() + manifest-purpose drift) + 2 CLEAN. No external I/O, no API keys — safe to run anywhere. A useful first-30-seconds check before pointing at your real ~/.openclaw/skills/ directory.

Configure for Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "openclaw-skill-vetter": {
      "command": "python",
      "args": ["-m", "openclaw_skill_vetter_mcp"],
      "env": {
        "OPENCLAW_SKILL_VETTER_BACKEND": "mock"
      }
    }
  }
}

Backends

Backend | Status | Description
mock | ✅ v1.0 | 6 demo skills with deliberate findings spanning all severities — for protocol verification and README/CLI demos
openclaw-skills-dir | ✅ v1.0 | Reads ~/.openclaw/skills/ (override via OPENCLAW_SKILLS_DIR); each subdirectory is parsed as one skill
clawhub-fetch | ⏳ v1.1 | Fetches a candidate skill from the ClawHub registry directly for vet-before-install workflows

Skill manifest format

Each skill directory contains a skill.yaml (or skill.json):

id: weather-fetch
name: Weather Fetch
version: 1.0.0
author: [email protected]
description: Fetches current weather for a city using OpenWeatherMap.
purpose: Live weather data lookup
runtime: python3.11
entry_point: main.py
permissions:
  - network.http: api.openweathermap.org
dependencies:
  - requests>=2.31
  - pydantic>=2.0
signature: ed25519:abcd1234efgh5678

Plus the actual code files (*.py, *.js, *.ts, *.sh, *.rb, *.go, *.rs) and any prompt files (*.prompt, *.md, *.txt).
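
As an illustration, here is a minimal sketch of loading and validating such a manifest with PyYAML + pydantic v2. The model name and defaults are assumptions, not the package's internals:

# Hypothetical manifest loader; fields mirror the skill.yaml shown above.
from pathlib import Path

import yaml
from pydantic import BaseModel, Field

class SkillManifest(BaseModel):
    id: str
    name: str
    version: str
    author: str = ""                # empty -> MANIFEST.NO_AUTHOR would fire
    description: str = ""           # empty -> MANIFEST.EMPTY_DESCRIPTION
    purpose: str = ""
    runtime: str
    entry_point: str
    permissions: list[dict[str, str]] = Field(default_factory=list)
    dependencies: list[str] = Field(default_factory=list)
    signature: str | None = None    # missing -> MANIFEST.UNSIGNED

def load_manifest(skill_dir: Path) -> SkillManifest:
    raw = yaml.safe_load((skill_dir / "skill.yaml").read_text())
    return SkillManifest.model_validate(raw)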

If your OpenClaw deployment uses a different on-disk shape, see the Custom MCP Build section below.


Detection rules (v1.0)

Four scanner modules cover the v1.0 ruleset:

Manifest — MANIFEST.MISSING, MANIFEST.PURPOSE_NETWORK_DRIFT, MANIFEST.WILDCARD_PERMISSION, MANIFEST.BROAD_FILESYSTEM_WRITE, MANIFEST.EMPTY_DESCRIPTION, MANIFEST.NO_AUTHOR, MANIFEST.UNSIGNED

Static patterns (text regex over code + prompts) —

  • Prompt-injection: PROMPT_INJ.IGNORE_PRIOR, PROMPT_INJ.ROLE_OVERRIDE, PROMPT_INJ.EXTRACT_SYSTEM, PROMPT_INJ.JAILBREAK_DAN, PROMPT_INJ.NEW_USER_MARKER
  • Exfiltration: EXFIL.WEBHOOK_DISCORD, EXFIL.WEBHOOK_SLACK, EXFIL.WEBHOOK_TELEGRAM, EXFIL.PASTEBIN_LITERAL, EXFIL.SSH_KEY_READ, EXFIL.AWS_CREDS_READ, EXFIL.ENV_DUMP, EXFIL.SUBPROCESS_CURL
  • Dynamic execution: DYN_EXEC.SHELL_TRUE, DYN_EXEC.OS_SYSTEM, DYN_EXEC.EVAL_LITERAL, DYN_EXEC.EXEC_LITERAL, DYN_EXEC.PICKLE_LOADS, DYN_EXEC.DYNAMIC_IMPORT
  • Obfuscation: OBFUSCATION.LARGE_BASE64, OBFUSCATION.LARGE_HEX

Python AST (catches what regex misses) — AST.EVAL_CALL, AST.EXEC_CALL, AST.COMPILE_CALL, AST.OS_SYSTEM, AST.OS_POPEN, AST.OS_EXECV, AST.SUBPROCESS_RUN_SHELL_TRUE, AST.SUBPROCESS_POPEN_SHELL_TRUE, AST.DYNAMIC_IMPORT

Dependencies — DEP.TYPOSQUAT, DEP.HOMOGLYPH, DEP.UNTRUSTED_GIT_SOURCE, DEP.LOCAL_PATH

Use list_detection_rules to query the live catalog.
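
To make the two scanning styles concrete, an illustrative sketch of one regex rule and one AST rule. Rule IDs mirror the catalog above; function names and the finding tuples are assumptions:

# Illustrative only: a text-regex rule and a Python-AST rule side by side.
import ast
import re

DISCORD_WEBHOOK = re.compile(r"https://discord(?:app)?\.com/api/webhooks/\S+")

def scan_text(text: str) -> list[tuple[str, int, str]]:
    # EXFIL.WEBHOOK_DISCORD: hardcoded webhook URL anywhere in code or prompts
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if DISCORD_WEBHOOK.search(line):
            findings.append(("EXFIL.WEBHOOK_DISCORD", lineno, line.strip()))
    return findings

def scan_python(source: str) -> list[tuple[str, int, str]]:
    # AST.EVAL_CALL: catches eval() even when whitespace tricks or line
    # continuations would defeat a plain regex
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(("AST.EVAL_CALL", node.lineno, "call to eval()"))
    return findings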


Risk scoring

Each finding contributes by severity:

Severity | Weight
CRITICAL | 40
HIGH | 15
MEDIUM | 5
LOW | 1
INFO | 0

Final risk_score = min(sum, 100). Bucketing (first match wins):

Bucket | Trigger
BLOCK | ≥1 CRITICAL or score ≥ 80
REVIEW | ≥1 HIGH or score ≥ 50
CAUTION | ≥1 MEDIUM or score ≥ 20
CLEAN | no findings or only INFO

Conservative-by-design: false positives are OK, missed criticals are not. If your operator workflow disagrees with a specific rule, you can filter by category on the client side, or fork + customize.
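
The math is simple enough to sketch directly; the weights and thresholds below come from the two tables, while the function names are assumptions:

# Sketch of the documented scoring + bucketing logic.
SEVERITY_WEIGHTS = {"CRITICAL": 40, "HIGH": 15, "MEDIUM": 5, "LOW": 1, "INFO": 0}

def risk_score(severities: list[str]) -> int:
    return min(sum(SEVERITY_WEIGHTS[s] for s in severities), 100)

def bucket(severities: list[str]) -> str:
    score = risk_score(severities)
    if "CRITICAL" in severities or score >= 80:
        return "BLOCK"
    if "HIGH" in severities or score >= 50:
        return "REVIEW"
    if "MEDIUM" in severities or score >= 20:
        return "CAUTION"
    return "CLEAN"

# The demo transcript's data-extractor: 1 CRITICAL + 4 HIGH + 1 INFO
# = 40 + 4*15 + 0 = 100 -> BLOCK.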


Roadmap

Version | Scope
v1.0 | mock + openclaw-skills-dir backends, 7 tools / 3 resources / 2 prompts, 4 scanner modules with 41 detection rules, GitHub Actions CI matrix, PyPI Trusted Publishing
v1.1 | clawhub-fetch backend (vet a skill from ClawHub before install); CVE-DB lookup for dependencies; signature verification against ClawHub publisher keys
v1.2 | Sandbox-execution scanner (run skill in isolated process, observe network attempts); whitelist/allowlist per operator
v1.x | Custom rule packs; integration with existing SAST tools; per-rule severity overrides

Need this adapted to your stack?

If your AI deployment doesn't use the OpenClaw skill format — different agent harness, custom skill schema, monolithic skill files, internal-registry distribution — and you want the same vet-before-install discipline, that's a Custom MCP Build engagement.

Tier | Scope | Investment | Timeline
Simple | Single backend adapter for your existing skill format | $8,000–$12,000 | 1–2 weeks
Standard | Custom backend + custom rule pack tuned to your ecosystem + CI integration | $15,000–$25,000 | 2–4 weeks
Complex | Multi-format ingestion + sandbox execution + signed-publisher allowlist + rule-tuning workshop | $30,000–$45,000 | 4–8 weeks

To engage:

  1. Email [email protected] with subject Custom MCP Build inquiry — skill vetting
  2. Include: 1-paragraph description of your skill ecosystem + which tier you're considering
  3. You'll get a reply within 2 business days with a 30-minute discovery-call slot

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp, openclaw-health-mcp, and openclaw-cost-tracker-mcp. Install all four for full operational visibility.


Production AI audits

If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (ClawHavoc-style skill malware being one of the most damaging), and write the corrective-action plan:

Tier | Scope | Investment | Timeline
Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week
Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks
Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks

Same email channel: [email protected] with subject AI audit inquiry.


Contributing

PRs welcome. Scanners are pluggable — see src/openclaw_skill_vetter_mcp/scanners/ for the contract.

To add a new scanner:

  1. Create scanners/<your_scanner>.py exporting SCANNER_NAME: str and def scan(skill: Skill) -> list[Finding] (see the sketch after this list)
  2. Optionally export def all_rules() -> list[tuple[...]] for the rules catalog
  3. Register in analysis.vet_skill (the orchestrator iterates over a fixed tuple of scanner modules)
  4. Add tests in tests/test_scanners.py
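
A minimal scanner stub under that contract; the import path and the all_rules tuple shape are assumptions:

# scanners/my_scanner.py -- hypothetical skeleton following the contract
from openclaw_skill_vetter_mcp.models import Finding, Skill  # assumed path

SCANNER_NAME: str = "my_scanner"

def scan(skill: Skill) -> list[Finding]:
    findings: list[Finding] = []
    # ... inspect the skill's manifest / files and append Finding objects ...
    return findings

def all_rules() -> list[tuple[str, str, str]]:
    # (rule_id, severity, description) -- tuple shape assumed
    return [("MY.EXAMPLE_RULE", "LOW", "Example rule for the catalog")]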

To add a new backend:

  1. Subclass SkillBackend in backends/<your_backend>.py (see the stub after this list)
  2. Implement get_skills, get_skill_by_id, get_directory
  3. Register in backends/__init__.py
  4. Add tests in tests/test_backend_<your_backend>.py
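
And a backend stub under the same assumptions (base-class import path and signatures are guesses from the method names):

# backends/my_backend.py -- hypothetical skeleton
from openclaw_skill_vetter_mcp.backends import SkillBackend  # assumed path
from openclaw_skill_vetter_mcp.models import Skill           # assumed path

class MyBackend(SkillBackend):
    def get_skills(self) -> list[Skill]:
        raise NotImplementedError

    def get_skill_by_id(self, skill_id: str) -> Skill | None:
        raise NotImplementedError

    def get_directory(self) -> str:
        raise NotImplementedError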

Bug reports + feature requests: open a GitHub issue. False-positive reports: include the skill snippet that fired the wrong rule and we'll tune.


License

MIT — see LICENSE.



Built by Temur Khan — production AI engineer. Contact: [email protected]
