Hugging Face Tool Builder by huggingface

Build reusable scripts and tools using the Hugging Face API. Useful when chaining or combining API calls, or when tasks will be repeated/automated. Creates reusable command line scripts to fetch, enrich, or process data from Hugging Face Hub.

npx skills add https://github.com/huggingface/skills --skill hugging-face-tool-builder

Hugging Face API Tool Builder

Your purpose is to create reusable command line scripts and utilities for using the Hugging Face API, allowing chaining, piping, and intermediate processing where helpful. You can access the API directly, as well as use the hf command line tool. Model and Dataset cards can be accessed directly from their repositories.

Script Rules

Make sure to follow these rules:

  • Scripts must accept a --help command line argument that describes their inputs and outputs.
  • Non-destructive scripts should be tested before being handed over to the User.
  • Shell scripts are preferred, but use Python or TSX if complexity or user need requires it.
  • IMPORTANT: Use the HF_TOKEN environment variable as an Authorization header. For example: curl -H "Authorization: Bearer ${HF_TOKEN}" https://huggingface.co/api/. This provides higher rate limits and appropriate authorization for data access.
  • Investigate the shape of the API results before committing to a final design; make use of piping and chaining where composability would be an advantage, but prefer simple solutions where possible.
  • Share usage examples once complete.

Be sure to confirm User preferences where questions or clarifications are needed.
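The rules above can be sketched as a minimal script skeleton. The script name, endpoint, and limit parameter are illustrative; a real tool would add error handling and richer options:

```shell
#!/usr/bin/env bash
# Hypothetical skeleton showing the rules above: a --help flag and the
# HF_TOKEN Authorization header, with raw JSON on stdout for jq pipelines.
set -eu

usage() {
  cat <<'EOF'
Usage: hf_list_models.sh LIMIT
  Fetches up to LIMIT models from https://huggingface.co/api/models
  and prints raw JSON to stdout for downstream jq processing.
EOF
}

# fetch_models issues one authenticated GET against the models endpoint.
fetch_models() {
  curl -s -H "Authorization: Bearer ${HF_TOKEN:-}" \
    "https://huggingface.co/api/models?limit=${1}"
}

case "${1:-"--help"}" in
  --help|-h) usage ;;          # no argument in this sketch: show help
  *) fetch_models "$1" ;;
esac
```

Running it with no arguments (or --help) prints the usage text; passing a number fetches that many models as raw JSON.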

Sample Scripts

Paths below are relative to this skill directory.

Reference examples:

  • references/hf_model_papers_auth.sh — uses HF_TOKEN automatically and chains trending → model metadata → model card parsing with fallbacks; it demonstrates multi-step API usage plus auth hygiene for gated/private content.
  • references/find_models_by_paper.sh — optional HF_TOKEN usage via --token, consistent authenticated search, and a retry path when arXiv-prefixed searches are too narrow; it shows resilient query strategy and clear user-facing help.
  • references/hf_model_card_frontmatter.sh — uses the hf CLI to download model cards, extracts YAML frontmatter, and emits NDJSON summaries (license, pipeline tag, tags, gated prompt flag) for easy filtering.

Baseline examples (ultra-simple, minimal logic, raw JSON output with HF_TOKEN header):

  • references/baseline_hf_api.sh — bash
  • references/baseline_hf_api.py — python
  • references/baseline_hf_api.tsx — typescript executable

Composable utility (stdin → NDJSON):

  • references/hf_enrich_models.sh — reads model IDs from stdin, fetches metadata per ID, emits one JSON object per line for streaming pipelines.
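The core of that stdin → NDJSON pattern can be sketched as a small read loop. The function name is illustrative, and the real script adds --help handling and error checks:

```shell
# Hypothetical sketch of the stdin → NDJSON pattern: read one model id per
# line, fetch its metadata, and emit one JSON object per line.
enrich_models() {
  while IFS= read -r model_id; do
    # One API call per id; each response is a single JSON object, so the
    # output stream is NDJSON-friendly for downstream jq.
    curl -s -H "Authorization: Bearer ${HF_TOKEN:-}" \
      "https://huggingface.co/api/models/${model_id}"
    echo
  done
}

# Example usage:
#   printf '%s\n' org/model-a org/model-b | enrich_models | jq -c '{id, downloads}'
```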

Composability through piping (shell-friendly JSON output):

  • references/baseline_hf_api.sh 25 | jq -r '.[].id' | references/hf_enrich_models.sh | jq -s 'sort_by(.downloads) | reverse | .[:10]'
  • references/baseline_hf_api.sh 50 | jq '[.[] | {id, downloads}] | sort_by(.downloads) | reverse | .[:10]'
  • printf '%s\n' openai/gpt-oss-120b meta-llama/Meta-Llama-3.1-8B | references/hf_model_card_frontmatter.sh | jq -s 'map({id, license, has_extra_gated_prompt})'

High Level Endpoints

The following are the main API endpoints available at https://huggingface.co:

/api/datasets
/api/models
/api/spaces
/api/collections
/api/daily_papers
/api/notifications
/api/settings
/api/whoami-v2
/api/trending
/oauth/userinfo

Accessing the API

The API is documented with the OpenAPI standard at https://huggingface.co/.well-known/openapi.json.

IMPORTANT: DO NOT ATTEMPT to read https://huggingface.co/.well-known/openapi.json directly as it is too large to process.

IMPORTANT: Use jq to query and extract the relevant parts. For example:

Command to Get All 160 Endpoints

curl -s "https://huggingface.co/.well-known/openapi.json" | jq '.paths | keys | sort'

Model Search Endpoint Details

curl -s "https://huggingface.co/.well-known/openapi.json" | jq '.paths["/api/models"]'

You can also query the endpoints themselves to see the shape of the data. When doing so, constrain results to low numbers so they remain easy to process yet representative.
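For example, a shape check against /api/models might fetch just two items (network required), after which jq filters can be prototyped against a canned sample offline. The sample fields below (id, downloads) match those used in the pipeline examples above:

```shell
# Shape check against the live API (network required):
#   curl -s -H "Authorization: Bearer ${HF_TOKEN}" \
#     "https://huggingface.co/api/models?limit=2" | jq '.[0] | keys'

# Offline prototyping against a canned sample of that shape:
sample='[{"id":"org/model-a","downloads":120},{"id":"org/model-b","downloads":950}]'
printf '%s' "$sample" | jq -r 'sort_by(.downloads) | reverse | .[0].id'
# prints: org/model-b
```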

Using the HF command line tool

The hf command line tool gives you further access to Hugging Face repository content and infrastructure.

❯ hf --help
Usage: hf [OPTIONS] COMMAND [ARGS]...

  Hugging Face Hub CLI

Options:
  --help                Show this message and exit.

Commands:
  auth                 Manage authentication (login, logout, etc.).
  cache                Manage local cache directory.
  download             Download files from the Hub.
  endpoints            Manage Hugging Face Inference Endpoints.
  env                  Print information about the environment.
  jobs                 Run and manage Jobs on the Hub.
  repo                 Manage repos on the Hub.
  repo-files           Manage files in a repo on the Hub.
  upload               Upload a file or a folder to the Hub.
  upload-large-folder  Upload a large folder to the Hub.
  version              Print information about the hf version.

The hf CLI has replaced the now-deprecated huggingface-cli command (provided by the huggingface_hub package).
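As an example of combining the hf CLI with plain shell, a model card's YAML frontmatter (the pattern behind hf_model_card_frontmatter.sh) can be extracted like this. In practice the README.md would come from hf download; the stand-in file below keeps the sketch runnable offline:

```shell
# In practice, fetch the real card with the hf CLI (network required), e.g.:
#   hf download <repo-id> README.md --local-dir ./cards
# A stand-in card demonstrates the extraction step:
cat > /tmp/demo_card.md <<'EOF'
---
license: apache-2.0
pipeline_tag: text-generation
---
# Model title
Body text.
EOF

# Print only the YAML frontmatter between the first pair of --- markers.
awk '/^---$/ { n++; next } n == 1' /tmp/demo_card.md
```

The awk filter prints lines seen after the first `---` and before the second, i.e. just the frontmatter keys, ready for piping into a YAML or jq-based summarizer.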
