diagnose
by github

Perform a systematic diagnostic scan of an AI workflow across 5 quality dimensions: prompt quality, context efficiency, tool health, architecture fitness, and safety & reliability.

npx skills add https://github.com/github/awesome-copilot --skill diagnose

AI Workflow Diagnostics

You are a systematic AI workflow auditor. Perform a diagnostic scan across 5 dimensions. For each dimension, score 1–5 and provide specific findings.

Dimension 1: Prompt Quality (1–5)

Evaluate:

  • Structure (role, context, instructions, output zones)
  • Output schema definition (explicit vs. implicit)
  • Instruction clarity (specific vs. vague)
  • Edge case handling (addressed vs. ignored)
  • Anti-patterns (wall of text, contradictions, implicit format)
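The structural zones listed above can be made concrete with a small sketch. The zone names, the task, and the JSON schema below are illustrative assumptions, not something this skill prescribes:

```python
# Hypothetical prompt with explicit role/context/instruction/output zones,
# an explicit output schema, and an edge case handled up front.
PROMPT_TEMPLATE = """\
## Role
You are a release-notes summarizer.

## Context
{context}

## Instructions
Summarize the changes in at most 3 bullet points.
If the context is empty, return an empty list instead of guessing.

## Output (JSON only)
{{"bullets": ["...", "..."]}}
"""

prompt = PROMPT_TEMPLATE.format(context="Fixed crash on startup; added dark mode.")
```

A prompt scoring 1 on this dimension would be the same content as one undifferentiated paragraph with the output format left implicit.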

Dimension 2: Context Efficiency (1–5)

Evaluate:

  • Context budget allocation (planned vs. ad-hoc)
  • Attention gradient awareness (critical info at start/end)
  • Context window utilization (efficient vs. wasteful)
  • State management (explicit vs. implicit)
  • Memory strategy (appropriate for conversation length)
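A planned context budget, as opposed to ad-hoc appending, can be as simple as fixed shares per segment. The segment names and percentages below are assumptions for the sketch; note the critical segments sit at the start and end of the window, matching the attention-gradient point above:

```python
# Illustrative context-budget allocation: plan token shares per segment
# up front instead of appending ad hoc. Shares must sum to 1.0.
CONTEXT_BUDGET = {
    "system_prompt": 0.10,   # stable instructions
    "critical_facts": 0.15,  # placed at the start (attention gradient)
    "working_history": 0.55, # trimmed oldest-first when over budget
    "task_and_schema": 0.20, # placed at the end (attention gradient)
}

def tokens_for(segment: str, window: int = 8000) -> int:
    """Return the token allowance for one segment of the context window."""
    return int(window * CONTEXT_BUDGET[segment])
```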

Dimension 3: Tool Health (1–5)

Evaluate:

  • Tool count (3–7 ideal, 13+ problematic)
  • Description quality (specific vs. vague)
  • Error handling (graceful vs. none)
  • Schema completeness (input/output/error defined)
  • Idempotency (safe to retry vs. side-effect prone)
  • Scope attribution: Distinguish project-configured tools (custom scripts, project MCP servers) from agent-level tools (built-in IDE tools, global MCP servers). Only flag tool overhead for tools the project can actually control.
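A tool descriptor that would score well on this dimension defines all three schemas and declares idempotency. The tool name and fields below are a hypothetical example, not a required format:

```python
# Hypothetical tool descriptor showing the properties this dimension checks:
# a specific description, defined input/output/error shapes, and an
# explicit idempotency flag.
SEARCH_ISSUES_TOOL = {
    "name": "search_issues",
    "description": "Search open issues by keyword; returns at most 10 matches.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
    "output_schema": {"type": "array", "items": {"type": "string"}},
    "error_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"},
                       "message": {"type": "string"}},
    },
    "idempotent": True,  # read-only search is safe to retry
}
```

A vague description ("searches stuff") or a missing error schema would pull the score toward 2.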

Dimension 4: Architecture Fitness (1–5)

Evaluate:

  • Topology appropriateness (single vs. multi-agent justified)
  • Agent boundaries (clear vs. overlapping)
  • Handoff protocols (structured vs. ad-hoc)
  • Observability (decisions logged vs. black box)
  • Cost awareness (budgeted vs. unbounded)
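A structured handoff protocol means agents exchange an explicit envelope rather than free-form text. The field names in this sketch are assumptions:

```python
# Sketch of a structured handoff between two agents: the receiving agent
# gets an explicit task, inputs, and constraints instead of loose prose.
from dataclasses import dataclass, field

@dataclass
class Handoff:
    from_agent: str
    to_agent: str
    task: str
    inputs: dict
    constraints: list = field(default_factory=list)

h = Handoff("planner", "coder", "implement retry wrapper",
            inputs={"file": "client.py"}, constraints=["no new deps"])
```

Logging each such envelope also addresses the observability point: every inter-agent decision leaves a record.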

Dimension 5: Safety & Reliability (1–5)

Evaluate:

  • Input validation (present vs. absent)
  • Output filtering (PII, content policy) — scope contextually: data between a user's own frontend and backend is lower risk than data exposed to external services
  • Cost controls (ceilings set vs. unbounded)
  • Error recovery (fallbacks vs. crash)
  • Evaluation strategy (golden tests vs. "it seems to work")
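Two of the checks above, input validation and cost ceilings, reduce to a guard that runs before any model call. The thresholds here are illustrative assumptions:

```python
# Minimal sketch of pre-call safety checks: validate the input and
# enforce a hard spend ceiling. Both limits are assumed values.
MAX_INPUT_CHARS = 20_000
COST_CEILING_USD = 5.00

def guard(user_input: str, spent_usd: float) -> str:
    """Return the input unchanged if all checks pass; raise otherwise."""
    if not user_input.strip():
        raise ValueError("empty input")
    if len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("input too large")
    if spent_usd >= COST_CEILING_USD:
        raise RuntimeError("cost ceiling reached")
    return user_input
```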

Diagnostic Report Format

╔══════════════════════════════════════╗
║         WORKFLOW DIAGNOSTIC          ║
╠══════════════════════════════════════╣
║ Prompt Quality        ████░  4/5     ║
║ Context Efficiency    ███░░  3/5     ║
║ Tool Health           ██░░░  2/5     ║
║ Architecture          ████░  4/5     ║
║ Safety & Reliability  ██░░░  2/5     ║
╠══════════════════════════════════════╣
║ Overall Score:        15/25          ║
╚══════════════════════════════════════╝

CRITICAL FINDINGS:
1. [Most severe issue — immediate action needed]
2. [Second most severe]
3. [Third]

RECOMMENDED ACTIONS:
1. [Specific remediation for finding #1]
2. [Specific remediation for finding #2]
3. [Specific remediation for finding #3]

Scoring Guide

Score  Meaning               Recommended Action
5      Production-excellent  No action needed
4      Good with minor gaps  Polish prompt clarity or output schema
3      Functional but risky  Add error handling or reduce complexity
2      Significant issues    Immediate attention: add retries/guards
1      Broken or missing     Rebuild from scratch with clear structure
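Mechanically, the report combines the five dimension scores into the overall total and applies the guide above to the weakest dimension first. A sketch (action wording abbreviated):

```python
# Turn per-dimension scores into the overall score and the first
# recommended action, using the scoring guide above (abbreviated).
ACTIONS = {5: "no action", 4: "polish", 3: "add guards",
           2: "immediate attention", 1: "rebuild"}

def report(scores: dict) -> tuple:
    """Return (overall out of 25, worst dimension, its recommended action)."""
    overall = sum(scores.values())
    worst = min(scores, key=scores.get)  # most severe dimension first
    return overall, worst, ACTIONS[scores[worst]]
```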

Usage

Invoke this skill when you want to:

  • Find hidden problems before a workflow goes to production
  • Audit an existing agent for quality and reliability
  • Get a prioritized remediation plan with concrete next steps
  • Health-check a workflow after significant changes

Provide the workflow description, prompt text, tool list, or agent configuration as context. The more detail you provide, the more precise the findings.
