arize-annotation
by github

INVOKE THIS SKILL when creating, managing, or using annotation configs or annotation queues on Arize (categorical, continuous, freeform), or applying human…

npx skills add https://github.com/github/awesome-copilot --skill arize-annotation

Arize Annotation Skill

SPACE — All --space flags and the ARIZE_SPACE env var accept a space name (e.g., my-workspace) or a base64 space ID (e.g., U3BhY2U6...). Find yours with ax spaces list.

This skill covers annotation configs (the label schema) and annotation queues (human review workflows), as well as programmatically annotating project spans via the Python SDK.

Direction: Human labeling in Arize attaches values defined by configs to spans, dataset examples, experiment-related records, and queue items in the product UI. This skill covers: ax annotation-configs, ax annotation-queues, and bulk span updates with ArizeClient.spans.update_annotations.


Prerequisites

Proceed directly with the task — run the ax command you need. Do NOT check versions, env vars, or profiles upfront.

If an ax command fails, troubleshoot based on the error:

  • command not found or version error → see references/ax-setup.md
  • 401 Unauthorized / missing API key → run ax profiles show to inspect the current profile. If the profile is missing or the API key is wrong, follow references/ax-profiles.md to create/update it. If the user doesn't have their key, direct them to https://app.arize.com/admin > API Keys
  • Space unknown → run ax spaces list to pick by name, or ask the user
  • Security: Never read .env files or search the filesystem for credentials. Use ax profiles for Arize credentials and ax ai-integrations for LLM provider keys. If credentials are not available through these channels, ask the user.

Concepts

What is an Annotation Config?

An annotation config defines the schema for a single type of human feedback label. Before anyone can annotate a span, dataset record, experiment output, or queue item, a config must exist for that label in the space.

| Field | Description |
| --- | --- |
| Name | Descriptive identifier (e.g. Correctness, Helpfulness). Must be unique within the space. |
| Type | categorical (pick from a list), continuous (numeric range), or freeform (free text). |
| Values | For categorical: array of {"label": str, "score": number} pairs. |
| Min/Max Score | For continuous: numeric bounds. |
| Optimization Direction | Whether higher scores are better (maximize) or worse (minimize). Used to render trends in the UI. |
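To make the field shapes concrete, here is an illustrative sketch of a categorical config laid out as plain Python data. This is not an SDK call, just the structure implied by the fields above; the dict keys are assumptions for illustration only:

```python
# Sketch of a categorical annotation config's schema, per the field
# descriptions above. The key names here are illustrative, not an API.
correctness_config = {
    "name": "Correctness",           # unique within the space
    "type": "categorical",           # categorical | continuous | freeform
    "values": [                      # array of {"label": str, "score": number}
        {"label": "correct", "score": 1},
        {"label": "incorrect", "score": 0},
    ],
    "optimization_direction": "maximize",  # higher scores are better
}

# Labels should be unique; scores let the UI aggregate labels numerically.
labels = [v["label"] for v in correctness_config["values"]]
assert len(labels) == len(set(labels))
```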

Where labels get applied (surfaces)

| Surface | Typical path |
| --- | --- |
| Project spans | Python SDK spans.update_annotations (below) and/or the Arize UI |
| Dataset examples | Arize UI (human labeling flows); configs must exist in the space |
| Experiment outputs | Often reviewed alongside datasets or traces in the UI; see arize-experiment, arize-dataset |
| Annotation queue items | ax annotation-queues CLI (below) and/or the Arize UI; configs must exist |

Always ensure the relevant annotation config exists in the space before expecting labels to persist.


Basic CRUD: Annotation Configs

List

ax annotation-configs list --space SPACE
ax annotation-configs list --space SPACE -o json
ax annotation-configs list --space SPACE --limit 20

Create — Categorical

Categorical configs present a fixed set of labels for reviewers to choose from.

ax annotation-configs create \
  --name "Correctness" \
  --space SPACE \
  --type categorical \
  --value correct \
  --value incorrect \
  --optimization-direction maximize

Common binary label pairs:

  • correct / incorrect
  • helpful / unhelpful
  • safe / unsafe
  • relevant / irrelevant
  • pass / fail

Create — Continuous

Continuous configs let reviewers enter a numeric score within a defined range.

ax annotation-configs create \
  --name "Quality Score" \
  --space SPACE \
  --type continuous \
  --min-score 0 \
  --max-score 10 \
  --optimization-direction maximize

Create — Freeform

Freeform configs collect open-ended text feedback. No additional flags needed beyond name, space, and type.

ax annotation-configs create \
  --name "Reviewer Notes" \
  --space SPACE \
  --type freeform

Get

ax annotation-configs get NAME_OR_ID
ax annotation-configs get NAME_OR_ID -o json
ax annotation-configs get NAME_OR_ID --space SPACE   # required when using name instead of ID

Delete

ax annotation-configs delete NAME_OR_ID
ax annotation-configs delete NAME_OR_ID --space SPACE   # required when using name instead of ID
ax annotation-configs delete NAME_OR_ID --force   # skip confirmation

Note: Deletion is irreversible. Any annotation queue associations to this config are also removed in the product (queues may remain; fix associations in the Arize UI if needed).


Annotation Queues: ax annotation-queues

Annotation queues route records (spans, dataset examples, experiment runs) to human reviewers. Each queue is linked to one or more annotation configs that define what labels reviewers can apply.

List / Get

ax annotation-queues list --space SPACE
ax annotation-queues list --space SPACE -o json

ax annotation-queues get NAME_OR_ID --space SPACE
ax annotation-queues get NAME_OR_ID --space SPACE -o json

Create

At least one --annotation-config-id is required.

ax annotation-queues create \
  --name "Correctness Review" \
  --space SPACE \
  --annotation-config-id CONFIG_ID \
  --annotator-email [email protected] \
  --instructions "Label each response as correct or incorrect." \
  --assignment-method all   # or: random

Repeat --annotation-config-id and --annotator-email to attach multiple configs or reviewers.

Update

List flags (--annotation-config-id, --annotator-email) fully replace existing values when provided — pass all desired values, not just the new ones.

ax annotation-queues update NAME_OR_ID --space SPACE --name "New Name"
ax annotation-queues update NAME_OR_ID --space SPACE --instructions "Updated instructions"
ax annotation-queues update NAME_OR_ID --space SPACE \
  --annotation-config-id CONFIG_ID_A \
  --annotation-config-id CONFIG_ID_B
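Because list flags replace rather than append, adding a config to a queue means resending the full desired list. A minimal sketch of building that command, using only flags documented above (queue and config IDs are placeholders):

```python
# Sketch: --annotation-config-id fully replaces the attached configs,
# so build the update command from ALL desired IDs, not just the new one.
# "q-123", "cfg-a", "cfg-b", and "my-workspace" are placeholder values.
existing_config_ids = ["cfg-a"]   # already attached to the queue
new_config_ids = ["cfg-b"]        # the ones to add
desired = existing_config_ids + new_config_ids

cmd = ["ax", "annotation-queues", "update", "q-123", "--space", "my-workspace"]
for config_id in desired:
    cmd += ["--annotation-config-id", config_id]
```

Passing only the new ID would detach cfg-a from the queue.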

Delete

ax annotation-queues delete NAME_OR_ID --space SPACE
ax annotation-queues delete NAME_OR_ID --space SPACE --force   # skip confirmation

List Records

ax annotation-queues list-records NAME_OR_ID --space SPACE
ax annotation-queues list-records NAME_OR_ID --space SPACE --limit 50 -o json

Submit an Annotation for a Record

Annotations are upserted by config name — call once per annotation config. Supply at least one of --score, --label, or --text.

ax annotation-queues annotate-record NAME_OR_ID RECORD_ID \
  --annotation-name "Correctness" \
  --label "correct" \
  --space SPACE

ax annotation-queues annotate-record NAME_OR_ID RECORD_ID \
  --annotation-name "Quality Score" \
  --score 8.5 \
  --text "Response was accurate but slightly verbose." \
  --space SPACE

Assign a Record

Assign users to review a specific record:

ax annotation-queues assign-record NAME_OR_ID RECORD_ID --space SPACE

Delete Records

ax annotation-queues delete-records NAME_OR_ID --space SPACE

Applying Annotations to Spans (Python SDK)

Use the Python SDK to bulk-apply annotations to project spans when you already have labels (e.g., from a review export or an external labeling tool).

import os

import pandas as pd
from arize import ArizeClient

client = ArizeClient(api_key=os.environ["ARIZE_API_KEY"])

# Build a DataFrame with annotation columns
# Required: context.span_id + at least one annotation.<name>.label or annotation.<name>.score
annotations_df = pd.DataFrame([
    {
        "context.span_id": "span_001",
        "annotation.Correctness.label": "correct",
        "annotation.Correctness.updated_by": "[email protected]",
    },
    {
        "context.span_id": "span_002",
        "annotation.Correctness.label": "incorrect",
        "annotation.Correctness.updated_by": "[email protected]",
    },
])

response = client.spans.update_annotations(
    space_id=os.environ["ARIZE_SPACE"],
    project_name="your-project",
    dataframe=annotations_df,
    validate=True,
)

DataFrame column schema:

| Column | Required | Description |
| --- | --- | --- |
| context.span_id | yes | The span to annotate |
| annotation.<name>.label | one of label/score | Categorical or freeform label |
| annotation.<name>.score | one of label/score | Numeric score |
| annotation.<name>.updated_by | no | Annotator identifier (email or name) |
| annotation.<name>.updated_at | no | Timestamp in milliseconds since epoch |
| annotation.notes | no | Freeform notes on the span |
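The example above only uses label columns; a sketch combining the optional columns as well, with a continuous score, annotator metadata, and span-level notes (span ID and config name are placeholders, and the Quality Score config must already exist in the space):

```python
import time

import pandas as pd

# updated_at is milliseconds since epoch, per the schema above.
now_ms = int(time.time() * 1000)

annotations_df = pd.DataFrame([
    {
        "context.span_id": "span_003",
        "annotation.Quality Score.score": 8.5,
        "annotation.Quality Score.updated_by": "[email protected]",
        "annotation.Quality Score.updated_at": now_ms,
        "annotation.notes": "Accurate but slightly verbose.",
    },
])
```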

Limitation: Annotations apply only to spans within 31 days prior to submission.
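If your export mixes old and new spans, it can help to filter to the 31-day window before uploading. A sketch under the assumption that you carry each span's start time in a helper column; span_start_ms is an illustrative name, not part of the upload schema, so drop it before calling update_annotations:

```python
import time

import pandas as pd

# 31-day cutoff in milliseconds since epoch.
cutoff_ms = int(time.time() * 1000) - 31 * 24 * 60 * 60 * 1000

df = pd.DataFrame([
    {"context.span_id": "fresh", "annotation.Correctness.label": "correct",
     "span_start_ms": cutoff_ms + 1_000_000},
    {"context.span_id": "stale", "annotation.Correctness.label": "correct",
     "span_start_ms": cutoff_ms - 1_000_000},
])

# Keep only spans inside the window, then drop the helper column
# so only schema columns are submitted.
recent = df[df["span_start_ms"] >= cutoff_ms].drop(columns=["span_start_ms"])
```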


Troubleshooting

| Problem | Solution |
| --- | --- |
| ax: command not found | See references/ax-setup.md |
| 401 Unauthorized | API key may not have access to this space. Verify at https://app.arize.com/admin > API Keys |
| Annotation config not found | ax annotation-configs list --space SPACE (or use ax annotation-configs get NAME_OR_ID --space SPACE) |
| 409 Conflict on create | Name already exists in the space. Use a different name or get the existing config ID. |
| Queue not found | ax annotation-queues list --space SPACE; verify the queue name or ID |
| Record not appearing in queue | Ensure the annotation config linked to the queue exists; check ax annotation-configs list --space SPACE |
| Span SDK errors or missing spans | Confirm project_name, space_id, and span IDs; use arize-trace to export spans |

Related Skills

  • arize-trace: Export spans to find span IDs and time ranges
  • arize-dataset: Find dataset IDs and example IDs
  • arize-evaluator: Automated LLM-as-judge alongside human annotation
  • arize-experiment: Experiments tied to datasets and evaluation workflows
  • arize-link: Deep links to annotation configs and queues in the Arize UI

Save Credentials for Future Use

See references/ax-profiles.md § Save Credentials for Future Use.

Instalar extensión de Chrome